Jan 31 07:36:06 crc systemd[1]: Starting Kubernetes Kubelet... Jan 31 07:36:06 crc restorecon[4743]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Jan 31 07:36:06 
crc restorecon[4743]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Jan 31 07:36:06 crc restorecon[4743]: 
/var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c574,c582 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 
07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 07:36:06 crc restorecon[4743]: 
/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c22 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:36:06 crc 
restorecon[4743]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 07:36:06 crc restorecon[4743]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 
Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 31 07:36:06 crc restorecon[4743]: 
/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 07:36:06 
crc restorecon[4743]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 07:36:06 crc restorecon[4743]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 07:36:06 crc restorecon[4743]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 07:36:07 
crc restorecon[4743]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 
07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 07:36:07 crc 
restorecon[4743]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 
07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 
07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc 
restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 07:36:07 crc restorecon[4743]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 07:36:07 crc restorecon[4743]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 07:36:07 crc restorecon[4743]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 31 07:36:08 crc kubenswrapper[4826]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 31 07:36:08 crc kubenswrapper[4826]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 31 07:36:08 crc kubenswrapper[4826]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 31 07:36:08 crc kubenswrapper[4826]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 31 07:36:08 crc kubenswrapper[4826]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 31 07:36:08 crc kubenswrapper[4826]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.548461 4826 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.553945 4826 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554027 4826 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554038 4826 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554047 4826 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554060 4826 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554070 4826 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554081 4826 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554092 4826 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554102 4826 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554111 4826 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554121 4826 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554131 4826 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554141 4826 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554151 4826 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554161 4826 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554170 4826 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554180 4826 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554190 4826 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554208 4826 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554217 4826 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554227 4826 feature_gate.go:330] unrecognized 
feature gate: DNSNameResolver Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554240 4826 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554251 4826 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554266 4826 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554278 4826 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554292 4826 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554304 4826 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554315 4826 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554330 4826 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554343 4826 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554354 4826 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554365 4826 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554375 4826 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554386 4826 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554395 4826 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554405 4826 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554415 4826 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554426 4826 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554435 4826 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554443 4826 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554450 4826 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554458 4826 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554466 4826 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554474 4826 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554481 4826 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554489 4826 
feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554497 4826 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554505 4826 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554513 4826 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554521 4826 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554529 4826 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554537 4826 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554545 4826 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554554 4826 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554561 4826 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554570 4826 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554578 4826 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554586 4826 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554594 4826 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554602 4826 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554609 4826 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554617 4826 feature_gate.go:330] unrecognized feature gate: Example Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554624 4826 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554635 4826 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554645 4826 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554654 4826 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554664 4826 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554681 4826 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554691 4826 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554700 4826 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.554708 4826 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.555716 4826 flags.go:64] FLAG: --address="0.0.0.0" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.555737 4826 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.555767 4826 flags.go:64] FLAG: --anonymous-auth="true" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.555779 4826 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.555790 4826 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.555799 4826 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.555811 4826 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.555821 4826 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.555831 4826 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.555840 4826 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.555850 4826 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.555859 4826 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.555868 4826 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.555878 4826 flags.go:64] FLAG: --cgroup-root="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.555887 4826 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.555896 4826 flags.go:64] FLAG: --client-ca-file="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.555905 4826 flags.go:64] FLAG: --cloud-config="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.555913 4826 flags.go:64] FLAG: --cloud-provider="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.555924 4826 flags.go:64] FLAG: --cluster-dns="[]" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.555940 4826 flags.go:64] FLAG: --cluster-domain="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.555949 4826 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.555959 4826 flags.go:64] FLAG: --config-dir="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556004 4826 flags.go:64] FLAG: 
--container-hints="/etc/cadvisor/container_hints.json" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556014 4826 flags.go:64] FLAG: --container-log-max-files="5" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556026 4826 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556035 4826 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556044 4826 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556053 4826 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556062 4826 flags.go:64] FLAG: --contention-profiling="false" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556071 4826 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556084 4826 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556095 4826 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556121 4826 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556137 4826 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556148 4826 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556161 4826 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556173 4826 flags.go:64] FLAG: --enable-load-reader="false" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556186 4826 flags.go:64] FLAG: --enable-server="true" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556195 4826 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556214 4826 flags.go:64] FLAG: --event-burst="100" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556224 4826 flags.go:64] FLAG: --event-qps="50" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556234 4826 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556245 4826 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556256 4826 flags.go:64] FLAG: --eviction-hard="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556269 4826 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556280 4826 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556291 4826 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556303 4826 flags.go:64] FLAG: --eviction-soft="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556314 4826 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556325 4826 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556336 4826 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556349 4826 flags.go:64] FLAG: --experimental-mounter-path="" Jan 31 07:36:08 crc 
kubenswrapper[4826]: I0131 07:36:08.556360 4826 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556371 4826 flags.go:64] FLAG: --fail-swap-on="true" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556381 4826 flags.go:64] FLAG: --feature-gates="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556394 4826 flags.go:64] FLAG: --file-check-frequency="20s" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556405 4826 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556417 4826 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556428 4826 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556437 4826 flags.go:64] FLAG: --healthz-port="10248" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556446 4826 flags.go:64] FLAG: --help="false" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556455 4826 flags.go:64] FLAG: --hostname-override="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556464 4826 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556474 4826 flags.go:64] FLAG: --http-check-frequency="20s" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556483 4826 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556491 4826 flags.go:64] FLAG: --image-credential-provider-config="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556500 4826 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556509 4826 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556521 4826 flags.go:64] FLAG: --image-service-endpoint="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556530 4826 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556539 4826 flags.go:64] FLAG: --kube-api-burst="100" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556548 4826 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556558 4826 flags.go:64] FLAG: --kube-api-qps="50" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556566 4826 flags.go:64] FLAG: --kube-reserved="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556575 4826 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556584 4826 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556594 4826 flags.go:64] FLAG: --kubelet-cgroups="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556603 4826 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556611 4826 flags.go:64] FLAG: --lock-file="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556624 4826 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556637 4826 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556648 4826 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556662 4826 flags.go:64] 
FLAG: --log-json-split-stream="false" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556672 4826 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556681 4826 flags.go:64] FLAG: --log-text-split-stream="false" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556690 4826 flags.go:64] FLAG: --logging-format="text" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556699 4826 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556709 4826 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556718 4826 flags.go:64] FLAG: --manifest-url="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556726 4826 flags.go:64] FLAG: --manifest-url-header="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556738 4826 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556747 4826 flags.go:64] FLAG: --max-open-files="1000000" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556759 4826 flags.go:64] FLAG: --max-pods="110" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556768 4826 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556777 4826 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556786 4826 flags.go:64] FLAG: --memory-manager-policy="None" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556795 4826 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556838 4826 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556847 4826 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556856 4826 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556882 4826 flags.go:64] FLAG: --node-status-max-images="50" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556891 4826 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556900 4826 flags.go:64] FLAG: --oom-score-adj="-999" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556910 4826 flags.go:64] FLAG: --pod-cidr="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556920 4826 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556933 4826 flags.go:64] FLAG: --pod-manifest-path="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556942 4826 flags.go:64] FLAG: --pod-max-pids="-1" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556953 4826 flags.go:64] FLAG: --pods-per-core="0" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556961 4826 flags.go:64] FLAG: --port="10250" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.556996 4826 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.557196 4826 flags.go:64] FLAG: --provider-id="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.558080 4826 
flags.go:64] FLAG: --qos-reserved="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.558100 4826 flags.go:64] FLAG: --read-only-port="10255" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.558110 4826 flags.go:64] FLAG: --register-node="true" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.558120 4826 flags.go:64] FLAG: --register-schedulable="true" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.558129 4826 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.558144 4826 flags.go:64] FLAG: --registry-burst="10" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.558153 4826 flags.go:64] FLAG: --registry-qps="5" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.558162 4826 flags.go:64] FLAG: --reserved-cpus="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.558171 4826 flags.go:64] FLAG: --reserved-memory="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.558182 4826 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.558191 4826 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.558201 4826 flags.go:64] FLAG: --rotate-certificates="false" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.558210 4826 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.558219 4826 flags.go:64] FLAG: --runonce="false" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.558227 4826 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.558237 4826 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.558246 4826 flags.go:64] FLAG: --seccomp-default="false" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.558255 4826 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.558264 4826 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.558274 4826 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.558283 4826 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.558292 4826 flags.go:64] FLAG: --storage-driver-password="root" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.558302 4826 flags.go:64] FLAG: --storage-driver-secure="false" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.558311 4826 flags.go:64] FLAG: --storage-driver-table="stats" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.558320 4826 flags.go:64] FLAG: --storage-driver-user="root" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.558331 4826 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.558351 4826 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.558363 4826 flags.go:64] FLAG: --system-cgroups="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.558376 4826 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.558397 4826 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.558407 
4826 flags.go:64] FLAG: --tls-cert-file="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.558418 4826 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.558440 4826 flags.go:64] FLAG: --tls-min-version="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.558451 4826 flags.go:64] FLAG: --tls-private-key-file="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.558462 4826 flags.go:64] FLAG: --topology-manager-policy="none" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.558474 4826 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.558486 4826 flags.go:64] FLAG: --topology-manager-scope="container" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.558496 4826 flags.go:64] FLAG: --v="2" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.558508 4826 flags.go:64] FLAG: --version="false" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.558519 4826 flags.go:64] FLAG: --vmodule="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.558530 4826 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.558539 4826 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.558752 4826 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.558765 4826 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.558776 4826 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
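
The long run of "flags.go:64] FLAG: --name=\"value\"" lines above is the kubelet echoing its effective command-line settings at startup. A small Python sketch, again assuming the excerpt was saved to a plain-text file (kubelet-journal.txt is a placeholder name), can turn that dump into a dictionary so individual values are easy to look up or diff between boots.

#!/usr/bin/env python3
"""Parse the kubelet's startup FLAG dump (the flags.go:64 lines above) into a dict.

The input path is an assumed placeholder for a saved copy of this journal.
"""
import re
import sys

JOURNAL_FILE = sys.argv[1] if len(sys.argv) > 1 else "kubelet-journal.txt"  # assumed filename

# Each entry looks like: flags.go:64] FLAG: --cgroup-driver="cgroupfs"
flag_re = re.compile(r'flags\.go:\d+\] FLAG: (--[\w-]+)="([^"]*)"')

with open(JOURNAL_FILE, encoding="utf-8") as fh:
    flags = dict(flag_re.findall(fh.read()))

# A few values of interest from the dump above.
for name in ("--config", "--kubeconfig", "--node-ip", "--max-pods", "--register-with-taints"):
    print(f"{name} = {flags.get(name)!r}")
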
Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.558787 4826 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.558796 4826 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.558805 4826 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.558814 4826 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.558822 4826 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.558830 4826 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.558841 4826 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.558851 4826 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.558861 4826 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.558870 4826 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.558880 4826 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.558890 4826 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.558899 4826 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.558912 4826 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.558922 4826 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.558931 4826 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.558941 4826 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.558951 4826 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.558961 4826 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.559008 4826 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.559020 4826 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.559030 4826 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.559040 4826 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.559049 4826 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.559059 4826 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.559068 4826 feature_gate.go:330] 
unrecognized feature gate: BootcNodeManagement Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.559078 4826 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.559088 4826 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.559097 4826 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.559106 4826 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.559116 4826 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.559126 4826 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.559136 4826 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.559146 4826 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.559156 4826 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.559165 4826 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.559175 4826 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.559184 4826 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.559194 4826 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.559204 4826 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.559214 4826 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.559225 4826 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.559235 4826 feature_gate.go:330] unrecognized feature gate: Example Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.559243 4826 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.559252 4826 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.559264 4826 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.559274 4826 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.559283 4826 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.559293 4826 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.559302 4826 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.559312 4826 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.559321 4826 
feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.559330 4826 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.559341 4826 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.559352 4826 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.559361 4826 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.559377 4826 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.559387 4826 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.559395 4826 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.559403 4826 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.559411 4826 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.559422 4826 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.559430 4826 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.559438 4826 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.559446 4826 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.559453 4826 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.559462 4826 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.559469 4826 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.559499 4826 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.573045 4826 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.573093 4826 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573233 4826 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573245 4826 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573256 4826 feature_gate.go:330] unrecognized 
feature gate: PlatformOperators Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573265 4826 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573274 4826 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573282 4826 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573291 4826 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573303 4826 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573313 4826 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573322 4826 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573331 4826 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573341 4826 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573353 4826 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573362 4826 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573371 4826 feature_gate.go:330] unrecognized feature gate: Example Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573379 4826 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573387 4826 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573395 4826 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573404 4826 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573412 4826 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573420 4826 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573428 4826 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573436 4826 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573445 4826 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573452 4826 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573460 4826 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573470 4826 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
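
Alongside the per-gate warnings, the kubelet periodically logs a consolidated summary in the form "feature_gate.go:386] feature gates: {map[Name:true Other:false ...]}" (one appears a little above this point). The following Python sketch parses that Go map dump into a dict of booleans; the input file name is an assumed placeholder for a saved copy of the log.

#!/usr/bin/env python3
"""Parse the kubelet's 'feature gates: {map[...]}' summary line into a dict.

The input path is an assumed placeholder for a saved copy of this journal.
"""
import re
import sys

JOURNAL_FILE = sys.argv[1] if len(sys.argv) > 1 else "kubelet-journal.txt"  # assumed filename

with open(JOURNAL_FILE, encoding="utf-8") as fh:
    text = fh.read()

# Grab the contents of the first Go map dump: {map[Name:true Other:false ...]}
match = re.search(r"feature gates: \{map\[([^\]]*)\]\}", text)
gates = {}
if match:
    for pair in match.group(1).split():
        name, _, value = pair.partition(":")
        gates[name] = value == "true"

enabled = sorted(g for g, on in gates.items() if on)
print("enabled gates:", ", ".join(enabled))
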
Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573480 4826 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573489 4826 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573498 4826 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573507 4826 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573515 4826 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573523 4826 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573531 4826 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573542 4826 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573551 4826 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573559 4826 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573570 4826 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573580 4826 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573588 4826 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573596 4826 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573603 4826 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573611 4826 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573619 4826 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573627 4826 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573634 4826 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573642 4826 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573649 4826 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573657 4826 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573665 4826 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573673 4826 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573680 4826 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573688 4826 feature_gate.go:330] unrecognized feature gate: 
ImageStreamImportMode Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573696 4826 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573703 4826 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573712 4826 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573720 4826 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573728 4826 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573735 4826 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573743 4826 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573751 4826 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573759 4826 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573767 4826 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573774 4826 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573782 4826 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573790 4826 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573800 4826 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573808 4826 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573816 4826 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573823 4826 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.573832 4826 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.573846 4826 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574110 4826 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574122 4826 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574131 4826 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 31 07:36:08 crc kubenswrapper[4826]: 
W0131 07:36:08.574139 4826 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574147 4826 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574155 4826 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574163 4826 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574170 4826 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574178 4826 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574186 4826 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574193 4826 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574201 4826 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574209 4826 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574217 4826 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574224 4826 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574232 4826 feature_gate.go:330] unrecognized feature gate: Example Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574240 4826 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574247 4826 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574255 4826 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574263 4826 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574271 4826 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574279 4826 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574287 4826 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574294 4826 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574303 4826 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574310 4826 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574318 4826 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574326 4826 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574334 4826 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 31 07:36:08 crc 
kubenswrapper[4826]: W0131 07:36:08.574343 4826 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574352 4826 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574360 4826 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574368 4826 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574376 4826 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574384 4826 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574395 4826 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574405 4826 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574414 4826 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574422 4826 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574431 4826 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574439 4826 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574446 4826 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574454 4826 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574462 4826 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574469 4826 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574478 4826 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574489 4826 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574500 4826 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574509 4826 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574518 4826 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574526 4826 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574534 4826 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574543 4826 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574551 4826 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574558 4826 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574567 4826 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574575 4826 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574583 4826 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574590 4826 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574598 4826 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574607 4826 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574614 4826 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574622 4826 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574630 4826 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574637 4826 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574645 4826 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574653 4826 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574661 4826 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574668 4826 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574676 4826 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.574687 4826 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.574700 4826 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.574958 4826 server.go:940] "Client rotation is on, will bootstrap in background" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.583795 4826 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.583927 4826 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.586249 4826 server.go:997] "Starting client certificate rotation" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.586279 4826 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.587675 4826 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-06 09:59:36.293400561 +0000 UTC Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.587779 4826 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.616203 4826 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.620686 4826 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 31 07:36:08 crc kubenswrapper[4826]: E0131 07:36:08.621324 4826 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.13:6443: connect: connection refused" logger="UnhandledError" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.638634 4826 log.go:25] "Validated CRI v1 runtime API" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.682804 4826 log.go:25] "Validated CRI v1 image API" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.685018 4826 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.689932 4826 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-31-07-31-17-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.689987 4826 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 
fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.717450 4826 manager.go:217] Machine: {Timestamp:2026-01-31 07:36:08.711943391 +0000 UTC m=+0.565829830 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:5dc4a50b-5ade-4352-ba95-1ca9483f1f64 BootID:410cfc83-ff74-4210-b833-727c4d6db644 Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:74:26:65 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:74:26:65 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:99:2a:a3 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:41:ae:56 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:93:87:03 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:52:03:00 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:d0:12:1f Speed:-1 Mtu:1496} {Name:eth10 MacAddress:3e:9e:40:38:dc:bf Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:da:32:4a:09:19:18 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 
Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.717876 4826 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
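The Machine entry above reports NumCores:12 with NumPhysicalCores:1 and NumSockets:12, i.e. twelve vCPUs each presented as a single-core socket, which typically indicates a virtualized guest, and MemoryCapacity:33654128640 bytes. A quick conversion of those raw figures (plain arithmetic, not kubelet code):

    package main

    import "fmt"

    func main() {
        const memBytes = 33654128640 // MemoryCapacity from the cAdvisor Machine entry
        const gib = 1 << 30
        fmt.Printf("memory: %.2f GiB across %d vCPUs\n", float64(memBytes)/gib, 12)
        // prints: memory: 31.34 GiB across 12 vCPUs
    }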
Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.718216 4826 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.719893 4826 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.720244 4826 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.720303 4826 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.720645 4826 topology_manager.go:138] "Creating topology manager with none policy" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.720664 4826 container_manager_linux.go:303] "Creating device plugin manager" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.721507 4826 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.721562 4826 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.722388 4826 state_mem.go:36] "Initialized new in-memory state store" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.722904 4826 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.726270 4826 kubelet.go:418] "Attempting to sync node with API server" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.726305 4826 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" 
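The Node Config entry above carries the resource-reservation inputs the kubelet uses when computing node allocatable: SystemReserved of 200m CPU / 350Mi memory / 350Mi ephemeral-storage, no KubeReserved, and a hard eviction threshold of memory.available < 100Mi. Under the usual formula allocatable = capacity - kube-reserved - system-reserved - hard-eviction, the memory figures logged here work out roughly as follows (a sketch of the arithmetic, not the kubelet's own code path):

    package main

    import "fmt"

    func main() {
        const (
            mi             = 1 << 20
            capacity       = int64(33654128640) // MemoryCapacity from the Machine entry
            kubeReserved   = 0                  // KubeReserved is null in the config
            systemReserved = 350 * mi           // SystemReserved memory
            evictionHard   = 100 * mi           // memory.available hard threshold
        )
        allocatable := capacity - kubeReserved - systemReserved - evictionHard
        fmt.Printf("allocatable memory ≈ %d bytes (%.2f GiB)\n",
            allocatable, float64(allocatable)/(1<<30)) // roughly 30.9 GiB
    }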
Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.726395 4826 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.726419 4826 kubelet.go:324] "Adding apiserver pod source" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.726438 4826 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.730892 4826 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.731489 4826 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.13:6443: connect: connection refused Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.731623 4826 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.13:6443: connect: connection refused Jan 31 07:36:08 crc kubenswrapper[4826]: E0131 07:36:08.731705 4826 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.13:6443: connect: connection refused" logger="UnhandledError" Jan 31 07:36:08 crc kubenswrapper[4826]: E0131 07:36:08.731648 4826 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.13:6443: connect: connection refused" logger="UnhandledError" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.731805 4826 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
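The reflector failures above, like the earlier certificate-signing-request failure, all reduce to the same condition: nothing is listening on https://api-int.crc.testing:6443 yet, because on this CRC single-node setup the kube-apiserver runs from the static-pod manifests the kubelet has only just begun watching; client-go keeps retrying with backoff until the endpoint comes up. A small, self-contained way to watch for that transition from outside (an illustrative probe, not part of the kubelet):

    package main

    import (
        "log"
        "net"
        "time"
    )

    func main() {
        // Endpoint taken from the log lines above.
        const addr = "api-int.crc.testing:6443"
        for {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err != nil {
                log.Printf("still refused/unreachable: %v", err)
                time.Sleep(3 * time.Second)
                continue
            }
            conn.Close()
            log.Printf("%s is accepting connections", addr)
            return
        }
    }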
Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.733342 4826 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.734913 4826 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.734944 4826 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.734954 4826 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.734963 4826 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.734999 4826 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.735009 4826 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.735017 4826 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.735031 4826 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.735041 4826 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.735050 4826 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.735062 4826 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.735070 4826 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.735985 4826 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.736495 4826 server.go:1280] "Started kubelet" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.736631 4826 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.13:6443: connect: connection refused Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.737730 4826 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.737729 4826 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 31 07:36:08 crc systemd[1]: Started Kubernetes Kubelet. 
Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.739332 4826 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.741944 4826 server.go:460] "Adding debug handlers to kubelet server" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.743415 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.743512 4826 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.743544 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 12:39:26.368022764 +0000 UTC Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.743681 4826 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.743690 4826 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.743767 4826 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 31 07:36:08 crc kubenswrapper[4826]: E0131 07:36:08.743855 4826 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.747551 4826 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.13:6443: connect: connection refused Jan 31 07:36:08 crc kubenswrapper[4826]: E0131 07:36:08.747625 4826 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.13:6443: connect: connection refused" logger="UnhandledError" Jan 31 07:36:08 crc kubenswrapper[4826]: E0131 07:36:08.747891 4826 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" interval="200ms" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.748336 4826 factory.go:55] Registering systemd factory Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.748361 4826 factory.go:221] Registration of the systemd container factory successfully Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.749053 4826 factory.go:153] Registering CRI-O factory Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.749072 4826 factory.go:221] Registration of the crio container factory successfully Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.749144 4826 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.749170 4826 factory.go:103] Registering Raw factory Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.749186 4826 manager.go:1196] Started watching for new ooms in manager Jan 31 07:36:08 crc kubenswrapper[4826]: 
E0131 07:36:08.749058 4826 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.13:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188fc09f3d0cb256 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 07:36:08.73646959 +0000 UTC m=+0.590355959,LastTimestamp:2026-01-31 07:36:08.73646959 +0000 UTC m=+0.590355959,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.750504 4826 manager.go:319] Starting recovery of all containers Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.761101 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.761419 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.761502 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.761579 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.761681 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.761769 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.761850 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.762012 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 
07:36:08.762105 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.762184 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.762268 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.762348 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.762426 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.762505 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.762583 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.762667 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.762744 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.762821 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.762896 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.762986 4826 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.763097 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.763190 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.763280 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.763358 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.763466 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.763540 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.763631 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.763717 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.763798 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.763916 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.764021 4826 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.764159 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.764240 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.764318 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.764405 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.764482 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.764562 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.764645 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.764726 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.764801 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.764912 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.765027 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.765121 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.765212 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.765308 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.766602 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.766701 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.766791 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.767333 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.767428 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.767508 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.767586 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.768176 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.768266 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.768294 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.768319 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.768341 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.768361 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.768382 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.768400 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.768422 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.768444 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.768470 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.768494 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.768519 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.768549 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.768570 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.768592 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.768614 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.768632 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.768652 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.768680 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.768706 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.768730 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.768756 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.768775 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.768793 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.768812 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.768831 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.768852 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.768878 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.768905 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.768964 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.769037 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.769059 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.769079 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.769100 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.769119 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.769138 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.769162 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.769189 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.769218 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.769242 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.769266 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.769295 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.769321 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.769350 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.769374 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.769406 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.769438 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.769465 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.769487 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.769516 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.769546 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.769584 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.769613 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.769645 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.769672 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.769703 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.769733 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.769762 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.769789 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.769818 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.769845 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.769874 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.769892 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.769912 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.769930 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.769956 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" 
volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.770014 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.770033 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.770053 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.770072 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.770093 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.770111 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.770130 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.770169 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.770197 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.770220 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.770239 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" 
seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.770261 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.770281 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.770301 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.770319 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.770340 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.773722 4826 manager.go:324] Recovery completed Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.773960 4826 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774032 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774052 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774067 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774081 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 
07:36:08.774095 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774114 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774145 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774161 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774177 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774189 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774202 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774214 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774228 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774242 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774258 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774271 4826 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774285 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774297 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774310 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774322 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774336 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774348 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774361 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774373 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774385 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774398 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774411 4826 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774423 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774439 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774458 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774476 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774492 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774507 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774522 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774535 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774548 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774564 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774576 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774590 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774607 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774656 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774679 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774697 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774721 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774745 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774770 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774787 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774803 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774821 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774837 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774853 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774871 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774889 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774905 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774920 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774934 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774948 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.774960 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.775042 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.775056 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" 
volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.775071 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.775083 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.775108 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.775122 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.775135 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.775157 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.775172 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.775200 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.775228 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.775249 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.775270 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.775288 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.775302 4826 reconstruct.go:97] "Volume reconstruction finished" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.775313 4826 reconciler.go:26] "Reconciler: start to sync state" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.784499 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.789692 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.789745 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.789759 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.797010 4826 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.797048 4826 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.797512 4826 state_mem.go:36] "Initialized new in-memory state store" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.804644 4826 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.807604 4826 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.807650 4826 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.807683 4826 kubelet.go:2335] "Starting kubelet main sync loop" Jan 31 07:36:08 crc kubenswrapper[4826]: E0131 07:36:08.807729 4826 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 31 07:36:08 crc kubenswrapper[4826]: W0131 07:36:08.810286 4826 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.13:6443: connect: connection refused Jan 31 07:36:08 crc kubenswrapper[4826]: E0131 07:36:08.810406 4826 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.13:6443: connect: connection refused" logger="UnhandledError" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.813132 4826 policy_none.go:49] "None policy: Start" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.818216 4826 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.818256 4826 state_mem.go:35] "Initializing new in-memory state store" Jan 31 07:36:08 crc kubenswrapper[4826]: E0131 07:36:08.844824 4826 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.877814 4826 manager.go:334] "Starting Device Plugin manager" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.877901 4826 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.877923 4826 server.go:79] "Starting device plugin registration server" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.878742 4826 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.878792 4826 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.878987 4826 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.879098 4826 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.879107 4826 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 31 07:36:08 crc kubenswrapper[4826]: E0131 07:36:08.894245 4826 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.908771 4826 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 31 07:36:08 crc kubenswrapper[4826]: 
I0131 07:36:08.909182 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.910910 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.911035 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.911135 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.911314 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.911684 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.911779 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.912211 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.912258 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.912277 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.912580 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.912871 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.912908 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.913848 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.913896 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.913915 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.914289 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.914347 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.914360 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.914565 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.914579 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.914615 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.915321 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.914777 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.915414 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.915751 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.915776 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.915794 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.915951 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.916276 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.916353 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.917306 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.917337 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.917354 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.917464 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.917496 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.917308 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.917538 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.917540 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.917578 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.917914 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.917939 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.918724 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.918751 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.918763 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:08 crc kubenswrapper[4826]: E0131 07:36:08.948921 4826 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" interval="400ms" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.977946 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.978013 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.978043 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.978069 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.978094 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.978115 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.978135 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.978208 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.978257 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.978287 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.978384 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.978496 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.978562 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.978604 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.978646 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.979399 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:36:08 crc 
kubenswrapper[4826]: I0131 07:36:08.981340 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.981393 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.981413 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:08 crc kubenswrapper[4826]: I0131 07:36:08.981452 4826 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 07:36:08 crc kubenswrapper[4826]: E0131 07:36:08.982360 4826 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.13:6443: connect: connection refused" node="crc" Jan 31 07:36:09 crc kubenswrapper[4826]: I0131 07:36:09.080238 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 07:36:09 crc kubenswrapper[4826]: I0131 07:36:09.080337 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 07:36:09 crc kubenswrapper[4826]: I0131 07:36:09.080377 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 07:36:09 crc kubenswrapper[4826]: I0131 07:36:09.080459 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 07:36:09 crc kubenswrapper[4826]: I0131 07:36:09.080486 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 07:36:09 crc kubenswrapper[4826]: I0131 07:36:09.080510 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:36:09 crc kubenswrapper[4826]: I0131 07:36:09.080533 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 07:36:09 crc kubenswrapper[4826]: 
I0131 07:36:09.080556 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 07:36:09 crc kubenswrapper[4826]: I0131 07:36:09.080579 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 07:36:09 crc kubenswrapper[4826]: I0131 07:36:09.080602 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 07:36:09 crc kubenswrapper[4826]: I0131 07:36:09.080593 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 07:36:09 crc kubenswrapper[4826]: I0131 07:36:09.080650 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 07:36:09 crc kubenswrapper[4826]: I0131 07:36:09.080619 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 07:36:09 crc kubenswrapper[4826]: I0131 07:36:09.080682 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:36:09 crc kubenswrapper[4826]: I0131 07:36:09.080624 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:36:09 crc kubenswrapper[4826]: I0131 07:36:09.080705 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 07:36:09 crc kubenswrapper[4826]: I0131 07:36:09.080683 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 07:36:09 crc kubenswrapper[4826]: I0131 
07:36:09.080700 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 07:36:09 crc kubenswrapper[4826]: I0131 07:36:09.080765 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 07:36:09 crc kubenswrapper[4826]: I0131 07:36:09.080739 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 07:36:09 crc kubenswrapper[4826]: I0131 07:36:09.080769 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 07:36:09 crc kubenswrapper[4826]: I0131 07:36:09.080798 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 07:36:09 crc kubenswrapper[4826]: I0131 07:36:09.080731 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:36:09 crc kubenswrapper[4826]: I0131 07:36:09.080893 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 07:36:09 crc kubenswrapper[4826]: I0131 07:36:09.080910 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 07:36:09 crc kubenswrapper[4826]: I0131 07:36:09.081017 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 07:36:09 crc kubenswrapper[4826]: I0131 07:36:09.081065 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:36:09 crc kubenswrapper[4826]: 
I0131 07:36:09.081168 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:36:09 crc kubenswrapper[4826]: I0131 07:36:09.081109 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 07:36:09 crc kubenswrapper[4826]: I0131 07:36:09.081215 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 07:36:09 crc kubenswrapper[4826]: I0131 07:36:09.182498 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:36:09 crc kubenswrapper[4826]: I0131 07:36:09.184609 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:09 crc kubenswrapper[4826]: I0131 07:36:09.184690 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:09 crc kubenswrapper[4826]: I0131 07:36:09.184712 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:09 crc kubenswrapper[4826]: I0131 07:36:09.184763 4826 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 07:36:09 crc kubenswrapper[4826]: E0131 07:36:09.185724 4826 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.13:6443: connect: connection refused" node="crc" Jan 31 07:36:09 crc kubenswrapper[4826]: I0131 07:36:09.244909 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 07:36:09 crc kubenswrapper[4826]: I0131 07:36:09.262496 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 31 07:36:09 crc kubenswrapper[4826]: I0131 07:36:09.277708 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:36:09 crc kubenswrapper[4826]: I0131 07:36:09.301075 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 07:36:09 crc kubenswrapper[4826]: I0131 07:36:09.308292 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 07:36:09 crc kubenswrapper[4826]: W0131 07:36:09.310607 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-6a65e28a3a9b8507758631909bbac56123c50993e2b58afd6c8c86645d529291 WatchSource:0}: Error finding container 6a65e28a3a9b8507758631909bbac56123c50993e2b58afd6c8c86645d529291: Status 404 returned error can't find the container with id 6a65e28a3a9b8507758631909bbac56123c50993e2b58afd6c8c86645d529291 Jan 31 07:36:09 crc kubenswrapper[4826]: W0131 07:36:09.316162 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-77c57ffca52dd081ef53b145d57f0cc6aee1de9e1385ba59ddb9d4acc57c2e89 WatchSource:0}: Error finding container 77c57ffca52dd081ef53b145d57f0cc6aee1de9e1385ba59ddb9d4acc57c2e89: Status 404 returned error can't find the container with id 77c57ffca52dd081ef53b145d57f0cc6aee1de9e1385ba59ddb9d4acc57c2e89 Jan 31 07:36:09 crc kubenswrapper[4826]: W0131 07:36:09.325578 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-aa076b90fea42e5fb1a195de9fb58a5d3d110da6c4882ff7e379d4d24ec34c7d WatchSource:0}: Error finding container aa076b90fea42e5fb1a195de9fb58a5d3d110da6c4882ff7e379d4d24ec34c7d: Status 404 returned error can't find the container with id aa076b90fea42e5fb1a195de9fb58a5d3d110da6c4882ff7e379d4d24ec34c7d Jan 31 07:36:09 crc kubenswrapper[4826]: E0131 07:36:09.350289 4826 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" interval="800ms" Jan 31 07:36:09 crc kubenswrapper[4826]: I0131 07:36:09.586357 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:36:09 crc kubenswrapper[4826]: I0131 07:36:09.588658 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:09 crc kubenswrapper[4826]: I0131 07:36:09.588730 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:09 crc kubenswrapper[4826]: I0131 07:36:09.588750 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:09 crc kubenswrapper[4826]: I0131 07:36:09.588788 4826 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 07:36:09 crc kubenswrapper[4826]: E0131 07:36:09.589364 4826 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.13:6443: connect: connection refused" node="crc" Jan 31 07:36:09 crc kubenswrapper[4826]: W0131 07:36:09.601715 4826 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.13:6443: connect: connection refused Jan 31 07:36:09 crc kubenswrapper[4826]: E0131 07:36:09.601811 4826 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.13:6443: connect: connection refused" logger="UnhandledError" Jan 31 07:36:09 crc kubenswrapper[4826]: I0131 07:36:09.738313 4826 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.13:6443: connect: connection refused Jan 31 07:36:09 crc kubenswrapper[4826]: I0131 07:36:09.744412 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 01:49:17.0306931 +0000 UTC Jan 31 07:36:09 crc kubenswrapper[4826]: I0131 07:36:09.813736 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"77c57ffca52dd081ef53b145d57f0cc6aee1de9e1385ba59ddb9d4acc57c2e89"} Jan 31 07:36:09 crc kubenswrapper[4826]: I0131 07:36:09.815005 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6a65e28a3a9b8507758631909bbac56123c50993e2b58afd6c8c86645d529291"} Jan 31 07:36:09 crc kubenswrapper[4826]: I0131 07:36:09.815893 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b7676fa6d1b3cd20366a00cac0de2fcff4eff209e51bf467187bbeddbdda7cec"} Jan 31 07:36:09 crc kubenswrapper[4826]: I0131 07:36:09.816906 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"080e6a5924a048c026aab04b919f44a43e287affdf1b51a445bd646eb3de18ff"} Jan 31 07:36:09 crc kubenswrapper[4826]: I0131 07:36:09.818572 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"aa076b90fea42e5fb1a195de9fb58a5d3d110da6c4882ff7e379d4d24ec34c7d"} Jan 31 07:36:09 crc kubenswrapper[4826]: W0131 07:36:09.861791 4826 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.13:6443: connect: connection refused Jan 31 07:36:09 crc kubenswrapper[4826]: E0131 07:36:09.861881 4826 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.13:6443: connect: connection refused" logger="UnhandledError" Jan 31 07:36:10 crc kubenswrapper[4826]: W0131 07:36:10.119177 4826 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.13:6443: connect: connection refused Jan 31 07:36:10 crc kubenswrapper[4826]: E0131 
07:36:10.119299 4826 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.13:6443: connect: connection refused" logger="UnhandledError" Jan 31 07:36:10 crc kubenswrapper[4826]: E0131 07:36:10.150934 4826 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" interval="1.6s" Jan 31 07:36:10 crc kubenswrapper[4826]: W0131 07:36:10.331470 4826 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.13:6443: connect: connection refused Jan 31 07:36:10 crc kubenswrapper[4826]: E0131 07:36:10.331581 4826 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.13:6443: connect: connection refused" logger="UnhandledError" Jan 31 07:36:10 crc kubenswrapper[4826]: E0131 07:36:10.363934 4826 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.13:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188fc09f3d0cb256 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 07:36:08.73646959 +0000 UTC m=+0.590355959,LastTimestamp:2026-01-31 07:36:08.73646959 +0000 UTC m=+0.590355959,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 31 07:36:10 crc kubenswrapper[4826]: I0131 07:36:10.390068 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:36:10 crc kubenswrapper[4826]: I0131 07:36:10.392122 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:10 crc kubenswrapper[4826]: I0131 07:36:10.392179 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:10 crc kubenswrapper[4826]: I0131 07:36:10.392198 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:10 crc kubenswrapper[4826]: I0131 07:36:10.392238 4826 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 07:36:10 crc kubenswrapper[4826]: E0131 07:36:10.392928 4826 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.13:6443: connect: connection refused" node="crc" Jan 31 07:36:10 crc kubenswrapper[4826]: I0131 07:36:10.738212 4826 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.13:6443: connect: connection refused Jan 31 07:36:10 crc kubenswrapper[4826]: I0131 07:36:10.745291 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 00:56:14.334324465 +0000 UTC Jan 31 07:36:10 crc kubenswrapper[4826]: I0131 07:36:10.754631 4826 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 31 07:36:10 crc kubenswrapper[4826]: E0131 07:36:10.755656 4826 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.13:6443: connect: connection refused" logger="UnhandledError" Jan 31 07:36:10 crc kubenswrapper[4826]: I0131 07:36:10.823599 4826 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884" exitCode=0 Jan 31 07:36:10 crc kubenswrapper[4826]: I0131 07:36:10.823681 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884"} Jan 31 07:36:10 crc kubenswrapper[4826]: I0131 07:36:10.823818 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:36:10 crc kubenswrapper[4826]: I0131 07:36:10.825393 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:10 crc kubenswrapper[4826]: I0131 07:36:10.825425 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:10 crc kubenswrapper[4826]: I0131 07:36:10.825436 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:10 crc kubenswrapper[4826]: I0131 07:36:10.827093 4826 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4" exitCode=0 Jan 31 07:36:10 crc kubenswrapper[4826]: I0131 07:36:10.827178 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4"} Jan 31 07:36:10 crc kubenswrapper[4826]: I0131 07:36:10.827238 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:36:10 crc kubenswrapper[4826]: I0131 07:36:10.828278 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:36:10 crc kubenswrapper[4826]: I0131 07:36:10.832188 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:10 crc kubenswrapper[4826]: I0131 07:36:10.832225 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:10 crc kubenswrapper[4826]: I0131 07:36:10.832238 4826 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:10 crc kubenswrapper[4826]: I0131 07:36:10.832341 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:10 crc kubenswrapper[4826]: I0131 07:36:10.832373 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:10 crc kubenswrapper[4826]: I0131 07:36:10.832392 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:10 crc kubenswrapper[4826]: I0131 07:36:10.833304 4826 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="db00cb3b7ea8c50c1725ebdf0d9eab1e7e1f96fbfe471ec206e102e74cefd82f" exitCode=0 Jan 31 07:36:10 crc kubenswrapper[4826]: I0131 07:36:10.833499 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:36:10 crc kubenswrapper[4826]: I0131 07:36:10.834117 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"db00cb3b7ea8c50c1725ebdf0d9eab1e7e1f96fbfe471ec206e102e74cefd82f"} Jan 31 07:36:10 crc kubenswrapper[4826]: I0131 07:36:10.834729 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:10 crc kubenswrapper[4826]: I0131 07:36:10.834782 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:10 crc kubenswrapper[4826]: I0131 07:36:10.834806 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:10 crc kubenswrapper[4826]: I0131 07:36:10.837895 4826 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="8b6360543a81dea5540a5f02edc908d3b76daf5c4df5f8220215fb75f9504c5b" exitCode=0 Jan 31 07:36:10 crc kubenswrapper[4826]: I0131 07:36:10.837938 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"8b6360543a81dea5540a5f02edc908d3b76daf5c4df5f8220215fb75f9504c5b"} Jan 31 07:36:10 crc kubenswrapper[4826]: I0131 07:36:10.838514 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:36:10 crc kubenswrapper[4826]: I0131 07:36:10.840390 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:10 crc kubenswrapper[4826]: I0131 07:36:10.840434 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:10 crc kubenswrapper[4826]: I0131 07:36:10.840457 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:10 crc kubenswrapper[4826]: I0131 07:36:10.842225 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"496e986869735ddfd85e88828d0123e35bbc507c1d31efba8c420632539ff4a8"} Jan 31 07:36:10 crc kubenswrapper[4826]: I0131 07:36:10.842272 4826 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4124eac7bdc757ef7c1f68b28fb1f54a56cda06e9b781a026277efe04172f8e2"} Jan 31 07:36:10 crc kubenswrapper[4826]: I0131 07:36:10.842295 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fc26e2f01400aeeb0956ae2aef52df0251350f193805892e534081b4cac248e7"} Jan 31 07:36:10 crc kubenswrapper[4826]: I0131 07:36:10.842314 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5fa95fe2bb0adc0053648f26282e011bad0ea1327773688c954b8ffbf222d6e5"} Jan 31 07:36:10 crc kubenswrapper[4826]: I0131 07:36:10.842436 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:36:10 crc kubenswrapper[4826]: I0131 07:36:10.845278 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:10 crc kubenswrapper[4826]: I0131 07:36:10.845493 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:10 crc kubenswrapper[4826]: I0131 07:36:10.846076 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:11 crc kubenswrapper[4826]: W0131 07:36:11.530154 4826 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.13:6443: connect: connection refused Jan 31 07:36:11 crc kubenswrapper[4826]: E0131 07:36:11.530557 4826 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.13:6443: connect: connection refused" logger="UnhandledError" Jan 31 07:36:11 crc kubenswrapper[4826]: I0131 07:36:11.675486 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 07:36:11 crc kubenswrapper[4826]: W0131 07:36:11.716938 4826 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.13:6443: connect: connection refused Jan 31 07:36:11 crc kubenswrapper[4826]: E0131 07:36:11.717091 4826 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.13:6443: connect: connection refused" logger="UnhandledError" Jan 31 07:36:11 crc kubenswrapper[4826]: I0131 07:36:11.737940 4826 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.13:6443: connect: connection 
refused Jan 31 07:36:11 crc kubenswrapper[4826]: I0131 07:36:11.745641 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 13:27:51.381490266 +0000 UTC Jan 31 07:36:11 crc kubenswrapper[4826]: E0131 07:36:11.753829 4826 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" interval="3.2s" Jan 31 07:36:11 crc kubenswrapper[4826]: I0131 07:36:11.853830 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a09b3371dd08b059fbfd5a105f314ae92bdcc4804cd36d0709b4afae48842b39"} Jan 31 07:36:11 crc kubenswrapper[4826]: I0131 07:36:11.853864 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b2d04dbf9901e3bd43ac93338643a45135bb89f2d03a5a3d1236289aee90a6e2"} Jan 31 07:36:11 crc kubenswrapper[4826]: I0131 07:36:11.853874 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"edb15aeef1103cdf26c3e72bac394815e4ea864cccc8732f40a601af2fcd34b3"} Jan 31 07:36:11 crc kubenswrapper[4826]: I0131 07:36:11.853954 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:36:11 crc kubenswrapper[4826]: I0131 07:36:11.854948 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:11 crc kubenswrapper[4826]: I0131 07:36:11.854976 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:11 crc kubenswrapper[4826]: I0131 07:36:11.854986 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:11 crc kubenswrapper[4826]: I0131 07:36:11.857217 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3caf57a9650c133c55f50d69726a83d40040bc7f173d536444d89f99854942ab"} Jan 31 07:36:11 crc kubenswrapper[4826]: I0131 07:36:11.857241 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"30b86aff0d4af7948ead36a9ed2f4fa0fb5888de8ecdfa6eb6ffc1867030babe"} Jan 31 07:36:11 crc kubenswrapper[4826]: I0131 07:36:11.857251 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b561e0c482987fdc1c08682d00bf4c933f12313569408446d6c8a603c3556ffa"} Jan 31 07:36:11 crc kubenswrapper[4826]: I0131 07:36:11.859214 4826 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36" exitCode=0 Jan 31 07:36:11 crc kubenswrapper[4826]: I0131 07:36:11.859328 4826 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Jan 31 07:36:11 crc kubenswrapper[4826]: I0131 07:36:11.859349 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36"} Jan 31 07:36:11 crc kubenswrapper[4826]: I0131 07:36:11.860515 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:11 crc kubenswrapper[4826]: I0131 07:36:11.860569 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:11 crc kubenswrapper[4826]: I0131 07:36:11.860581 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:11 crc kubenswrapper[4826]: I0131 07:36:11.868810 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:36:11 crc kubenswrapper[4826]: I0131 07:36:11.869117 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:36:11 crc kubenswrapper[4826]: I0131 07:36:11.869488 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"bb6744ae097d36ec0c3998da84dd5d0b9a274604c91b97d324f317381c9ba7f7"} Jan 31 07:36:11 crc kubenswrapper[4826]: I0131 07:36:11.874238 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:11 crc kubenswrapper[4826]: I0131 07:36:11.874289 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:11 crc kubenswrapper[4826]: I0131 07:36:11.874307 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:11 crc kubenswrapper[4826]: I0131 07:36:11.874392 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:11 crc kubenswrapper[4826]: I0131 07:36:11.874423 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:11 crc kubenswrapper[4826]: I0131 07:36:11.874436 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:11 crc kubenswrapper[4826]: I0131 07:36:11.993071 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:36:11 crc kubenswrapper[4826]: I0131 07:36:11.994847 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:11 crc kubenswrapper[4826]: I0131 07:36:11.994911 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:11 crc kubenswrapper[4826]: I0131 07:36:11.994923 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:11 crc kubenswrapper[4826]: I0131 07:36:11.994960 4826 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 07:36:11 crc kubenswrapper[4826]: E0131 07:36:11.995510 4826 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.13:6443: connect: connection refused" node="crc" Jan 31 07:36:12 crc kubenswrapper[4826]: W0131 07:36:12.493391 4826 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.13:6443: connect: connection refused Jan 31 07:36:12 crc kubenswrapper[4826]: E0131 07:36:12.493513 4826 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.13:6443: connect: connection refused" logger="UnhandledError" Jan 31 07:36:12 crc kubenswrapper[4826]: I0131 07:36:12.745770 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 16:52:58.49819242 +0000 UTC Jan 31 07:36:12 crc kubenswrapper[4826]: I0131 07:36:12.875763 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"975a5a5c09f940e9a57b6c6df02fdf725740fb21d70046ad5bbe3f24dec7a0bd"} Jan 31 07:36:12 crc kubenswrapper[4826]: I0131 07:36:12.875804 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5cd3634cc3552a315ee1ef8114db096d456b7a97a92e68ab406ceccdf1eb57f4"} Jan 31 07:36:12 crc kubenswrapper[4826]: I0131 07:36:12.875918 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:36:12 crc kubenswrapper[4826]: I0131 07:36:12.877433 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:12 crc kubenswrapper[4826]: I0131 07:36:12.877460 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:12 crc kubenswrapper[4826]: I0131 07:36:12.877468 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:12 crc kubenswrapper[4826]: I0131 07:36:12.879286 4826 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660" exitCode=0 Jan 31 07:36:12 crc kubenswrapper[4826]: I0131 07:36:12.879425 4826 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 07:36:12 crc kubenswrapper[4826]: I0131 07:36:12.879456 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:36:12 crc kubenswrapper[4826]: I0131 07:36:12.879475 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:36:12 crc kubenswrapper[4826]: I0131 07:36:12.879475 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660"} Jan 31 07:36:12 crc kubenswrapper[4826]: I0131 07:36:12.879425 4826 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Jan 31 07:36:12 crc kubenswrapper[4826]: I0131 07:36:12.879573 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:36:12 crc kubenswrapper[4826]: I0131 07:36:12.881170 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:12 crc kubenswrapper[4826]: I0131 07:36:12.881189 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:12 crc kubenswrapper[4826]: I0131 07:36:12.881198 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:12 crc kubenswrapper[4826]: I0131 07:36:12.881271 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:12 crc kubenswrapper[4826]: I0131 07:36:12.881307 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:12 crc kubenswrapper[4826]: I0131 07:36:12.881328 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:12 crc kubenswrapper[4826]: I0131 07:36:12.881365 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:12 crc kubenswrapper[4826]: I0131 07:36:12.881391 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:12 crc kubenswrapper[4826]: I0131 07:36:12.881408 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:12 crc kubenswrapper[4826]: I0131 07:36:12.881420 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:12 crc kubenswrapper[4826]: I0131 07:36:12.881431 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:12 crc kubenswrapper[4826]: I0131 07:36:12.881433 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:13 crc kubenswrapper[4826]: I0131 07:36:13.746000 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 03:52:11.90647839 +0000 UTC Jan 31 07:36:13 crc kubenswrapper[4826]: I0131 07:36:13.746094 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 07:36:13 crc kubenswrapper[4826]: I0131 07:36:13.753837 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 07:36:13 crc kubenswrapper[4826]: I0131 07:36:13.892181 4826 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 07:36:13 crc kubenswrapper[4826]: I0131 07:36:13.892254 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:36:13 crc kubenswrapper[4826]: I0131 07:36:13.892261 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:36:13 crc kubenswrapper[4826]: I0131 07:36:13.893047 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0b99d4cba5d569a977b7d05a7190498a4e34fefbf7dc44db8148ba398de0f5fd"} Jan 31 07:36:13 crc kubenswrapper[4826]: I0131 07:36:13.893231 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3c1e817cf54f4d9940c02491ebbe29ded4c0e8b3832c315b04eba99a9d36d267"} Jan 31 07:36:13 crc kubenswrapper[4826]: I0131 07:36:13.893255 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1796ef962408ccd32578b856d0339f68286f421ffbb915e0ae46c7a5989d1958"} Jan 31 07:36:13 crc kubenswrapper[4826]: I0131 07:36:13.893617 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:13 crc kubenswrapper[4826]: I0131 07:36:13.893653 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:13 crc kubenswrapper[4826]: I0131 07:36:13.893668 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:13 crc kubenswrapper[4826]: I0131 07:36:13.894119 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:13 crc kubenswrapper[4826]: I0131 07:36:13.894162 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:13 crc kubenswrapper[4826]: I0131 07:36:13.894179 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:14 crc kubenswrapper[4826]: I0131 07:36:14.010493 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 07:36:14 crc kubenswrapper[4826]: I0131 07:36:14.388888 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 07:36:14 crc kubenswrapper[4826]: I0131 07:36:14.746634 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 16:56:37.538648337 +0000 UTC Jan 31 07:36:14 crc kubenswrapper[4826]: I0131 07:36:14.781951 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:36:14 crc kubenswrapper[4826]: I0131 07:36:14.901009 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"161a31c4d5a9b58bef37d7177019e6eec0b53c956877c212337834672f4ac5af"} Jan 31 07:36:14 crc kubenswrapper[4826]: I0131 07:36:14.901058 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:36:14 crc kubenswrapper[4826]: I0131 07:36:14.901086 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"adf31702fe035db3a963262acd0c20802f8fd7230cf5c01cb578fa88179bd129"} Jan 31 07:36:14 crc kubenswrapper[4826]: I0131 07:36:14.901111 4826 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 07:36:14 
crc kubenswrapper[4826]: I0131 07:36:14.901162 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:36:14 crc kubenswrapper[4826]: I0131 07:36:14.901276 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:36:14 crc kubenswrapper[4826]: I0131 07:36:14.902081 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:14 crc kubenswrapper[4826]: I0131 07:36:14.902126 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:14 crc kubenswrapper[4826]: I0131 07:36:14.902142 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:14 crc kubenswrapper[4826]: I0131 07:36:14.902569 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:14 crc kubenswrapper[4826]: I0131 07:36:14.902715 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:14 crc kubenswrapper[4826]: I0131 07:36:14.902826 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:14 crc kubenswrapper[4826]: I0131 07:36:14.903122 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:14 crc kubenswrapper[4826]: I0131 07:36:14.903186 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:14 crc kubenswrapper[4826]: I0131 07:36:14.903205 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:14 crc kubenswrapper[4826]: I0131 07:36:14.989619 4826 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 31 07:36:15 crc kubenswrapper[4826]: I0131 07:36:15.195687 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:36:15 crc kubenswrapper[4826]: I0131 07:36:15.197579 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:15 crc kubenswrapper[4826]: I0131 07:36:15.197632 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:15 crc kubenswrapper[4826]: I0131 07:36:15.197652 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:15 crc kubenswrapper[4826]: I0131 07:36:15.197683 4826 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 07:36:15 crc kubenswrapper[4826]: I0131 07:36:15.639501 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 31 07:36:15 crc kubenswrapper[4826]: I0131 07:36:15.747375 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 14:28:10.629859352 +0000 UTC Jan 31 07:36:15 crc kubenswrapper[4826]: I0131 07:36:15.903724 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:36:15 crc kubenswrapper[4826]: I0131 07:36:15.903747 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Jan 31 07:36:15 crc kubenswrapper[4826]: I0131 07:36:15.904957 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:15 crc kubenswrapper[4826]: I0131 07:36:15.905057 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:15 crc kubenswrapper[4826]: I0131 07:36:15.905080 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:15 crc kubenswrapper[4826]: I0131 07:36:15.905631 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:15 crc kubenswrapper[4826]: I0131 07:36:15.905709 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:15 crc kubenswrapper[4826]: I0131 07:36:15.905746 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:16 crc kubenswrapper[4826]: I0131 07:36:16.748047 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 14:01:08.529135405 +0000 UTC Jan 31 07:36:16 crc kubenswrapper[4826]: I0131 07:36:16.906909 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:36:16 crc kubenswrapper[4826]: I0131 07:36:16.908303 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:16 crc kubenswrapper[4826]: I0131 07:36:16.908362 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:16 crc kubenswrapper[4826]: I0131 07:36:16.908379 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:17 crc kubenswrapper[4826]: I0131 07:36:17.010696 4826 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 07:36:17 crc kubenswrapper[4826]: I0131 07:36:17.010787 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 07:36:17 crc kubenswrapper[4826]: I0131 07:36:17.660800 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:36:17 crc kubenswrapper[4826]: I0131 07:36:17.661052 4826 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 07:36:17 crc kubenswrapper[4826]: I0131 07:36:17.661108 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:36:17 crc kubenswrapper[4826]: I0131 07:36:17.662759 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:17 crc kubenswrapper[4826]: I0131 07:36:17.662813 4826 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:17 crc kubenswrapper[4826]: I0131 07:36:17.662825 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:17 crc kubenswrapper[4826]: I0131 07:36:17.748848 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 12:35:27.137774854 +0000 UTC Jan 31 07:36:17 crc kubenswrapper[4826]: I0131 07:36:17.850445 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:36:17 crc kubenswrapper[4826]: I0131 07:36:17.909715 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:36:17 crc kubenswrapper[4826]: I0131 07:36:17.911009 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:17 crc kubenswrapper[4826]: I0131 07:36:17.911061 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:17 crc kubenswrapper[4826]: I0131 07:36:17.911073 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:18 crc kubenswrapper[4826]: I0131 07:36:18.749678 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 01:16:48.36412109 +0000 UTC Jan 31 07:36:18 crc kubenswrapper[4826]: I0131 07:36:18.826303 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 07:36:18 crc kubenswrapper[4826]: I0131 07:36:18.826590 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:36:18 crc kubenswrapper[4826]: I0131 07:36:18.828316 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:18 crc kubenswrapper[4826]: I0131 07:36:18.828354 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:18 crc kubenswrapper[4826]: I0131 07:36:18.828365 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:18 crc kubenswrapper[4826]: E0131 07:36:18.894570 4826 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 31 07:36:19 crc kubenswrapper[4826]: I0131 07:36:19.750631 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 07:03:19.131081203 +0000 UTC Jan 31 07:36:20 crc kubenswrapper[4826]: I0131 07:36:20.751336 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 18:20:05.931627667 +0000 UTC Jan 31 07:36:20 crc kubenswrapper[4826]: I0131 07:36:20.966340 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 31 07:36:20 crc kubenswrapper[4826]: I0131 07:36:20.966607 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:36:20 crc 
kubenswrapper[4826]: I0131 07:36:20.968231 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:20 crc kubenswrapper[4826]: I0131 07:36:20.968540 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:20 crc kubenswrapper[4826]: I0131 07:36:20.968631 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:21 crc kubenswrapper[4826]: I0131 07:36:21.752160 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 09:26:11.755231252 +0000 UTC Jan 31 07:36:22 crc kubenswrapper[4826]: I0131 07:36:22.738529 4826 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 31 07:36:22 crc kubenswrapper[4826]: I0131 07:36:22.753896 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 06:45:52.992777015 +0000 UTC Jan 31 07:36:22 crc kubenswrapper[4826]: W0131 07:36:22.864920 4826 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 31 07:36:22 crc kubenswrapper[4826]: I0131 07:36:22.865389 4826 trace.go:236] Trace[1919355944]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (31-Jan-2026 07:36:12.863) (total time: 10002ms): Jan 31 07:36:22 crc kubenswrapper[4826]: Trace[1919355944]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (07:36:22.864) Jan 31 07:36:22 crc kubenswrapper[4826]: Trace[1919355944]: [10.002019265s] [10.002019265s] END Jan 31 07:36:22 crc kubenswrapper[4826]: E0131 07:36:22.865622 4826 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 31 07:36:23 crc kubenswrapper[4826]: I0131 07:36:23.503011 4826 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 31 07:36:23 crc kubenswrapper[4826]: I0131 07:36:23.503076 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 31 07:36:23 crc kubenswrapper[4826]: I0131 07:36:23.508479 4826 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" 
start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 31 07:36:23 crc kubenswrapper[4826]: I0131 07:36:23.508544 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 31 07:36:23 crc kubenswrapper[4826]: I0131 07:36:23.753963 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 23:18:34.862356999 +0000 UTC Jan 31 07:36:24 crc kubenswrapper[4826]: I0131 07:36:24.398238 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 07:36:24 crc kubenswrapper[4826]: I0131 07:36:24.398470 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:36:24 crc kubenswrapper[4826]: I0131 07:36:24.400466 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:24 crc kubenswrapper[4826]: I0131 07:36:24.400526 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:24 crc kubenswrapper[4826]: I0131 07:36:24.400547 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:24 crc kubenswrapper[4826]: I0131 07:36:24.754207 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 00:22:43.866563563 +0000 UTC Jan 31 07:36:25 crc kubenswrapper[4826]: I0131 07:36:25.754546 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 08:17:48.913430509 +0000 UTC Jan 31 07:36:26 crc kubenswrapper[4826]: I0131 07:36:26.755114 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 02:33:29.236050685 +0000 UTC Jan 31 07:36:26 crc kubenswrapper[4826]: I0131 07:36:26.813269 4826 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 31 07:36:27 crc kubenswrapper[4826]: I0131 07:36:27.010935 4826 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 07:36:27 crc kubenswrapper[4826]: I0131 07:36:27.011034 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 07:36:27 crc kubenswrapper[4826]: I0131 07:36:27.670846 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:36:27 crc kubenswrapper[4826]: I0131 07:36:27.671023 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:36:27 crc kubenswrapper[4826]: I0131 07:36:27.672211 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:27 crc kubenswrapper[4826]: I0131 07:36:27.672275 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:27 crc kubenswrapper[4826]: I0131 07:36:27.672292 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:27 crc kubenswrapper[4826]: I0131 07:36:27.677960 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:36:27 crc kubenswrapper[4826]: I0131 07:36:27.755619 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 02:01:38.945860156 +0000 UTC Jan 31 07:36:27 crc kubenswrapper[4826]: I0131 07:36:27.937508 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:36:27 crc kubenswrapper[4826]: I0131 07:36:27.938322 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:27 crc kubenswrapper[4826]: I0131 07:36:27.938385 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:27 crc kubenswrapper[4826]: I0131 07:36:27.938409 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:28 crc kubenswrapper[4826]: E0131 07:36:28.495871 4826 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.498956 4826 trace.go:236] Trace[806713215]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (31-Jan-2026 07:36:15.045) (total time: 13453ms): Jan 31 07:36:28 crc kubenswrapper[4826]: Trace[806713215]: ---"Objects listed" error: 13453ms (07:36:28.498) Jan 31 07:36:28 crc kubenswrapper[4826]: Trace[806713215]: [13.453508185s] [13.453508185s] END Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.499049 4826 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.500555 4826 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.500987 4826 trace.go:236] Trace[2025215477]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (31-Jan-2026 07:36:15.224) (total time: 13276ms): Jan 31 07:36:28 crc kubenswrapper[4826]: Trace[2025215477]: ---"Objects listed" error: 13276ms (07:36:28.500) Jan 31 07:36:28 crc kubenswrapper[4826]: Trace[2025215477]: [13.276299303s] [13.276299303s] END Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.501027 4826 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.501635 4826 
trace.go:236] Trace[1162637173]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (31-Jan-2026 07:36:17.151) (total time: 11350ms): Jan 31 07:36:28 crc kubenswrapper[4826]: Trace[1162637173]: ---"Objects listed" error: 11349ms (07:36:28.501) Jan 31 07:36:28 crc kubenswrapper[4826]: Trace[1162637173]: [11.350184531s] [11.350184531s] END Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.501682 4826 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 31 07:36:28 crc kubenswrapper[4826]: E0131 07:36:28.502922 4826 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.507196 4826 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.532642 4826 csr.go:261] certificate signing request csr-kdw6n is approved, waiting to be issued Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.542034 4826 csr.go:257] certificate signing request csr-kdw6n is issued Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.555385 4826 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:35426->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.555439 4826 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:35440->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.555477 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:35426->192.168.126.11:17697: read: connection reset by peer" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.555513 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:35440->192.168.126.11:17697: read: connection reset by peer" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.556891 4826 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.556939 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get 
\"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.557487 4826 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.557546 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.585359 4826 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 31 07:36:28 crc kubenswrapper[4826]: W0131 07:36:28.585682 4826 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 31 07:36:28 crc kubenswrapper[4826]: W0131 07:36:28.585682 4826 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 31 07:36:28 crc kubenswrapper[4826]: W0131 07:36:28.585753 4826 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 31 07:36:28 crc kubenswrapper[4826]: E0131 07:36:28.585660 4826 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events/crc.188fc09f4039f664\": read tcp 38.102.83.13:36372->38.102.83.13:6443: use of closed network connection" event="&Event{ObjectMeta:{crc.188fc09f4039f664 default 26175 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 07:36:08 +0000 UTC,LastTimestamp:2026-01-31 07:36:08.913924564 +0000 UTC m=+0.767810963,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.740570 4826 apiserver.go:52] "Watching apiserver" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.745315 4826 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.745732 4826 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.746224 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.746412 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:36:28 crc kubenswrapper[4826]: E0131 07:36:28.746513 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.746579 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:36:28 crc kubenswrapper[4826]: E0131 07:36:28.746662 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.747271 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.747507 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.747859 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:36:28 crc kubenswrapper[4826]: E0131 07:36:28.747917 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.749231 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.749640 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.751364 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.751436 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.751439 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.751729 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.751793 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.752106 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.752134 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.756513 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 23:05:59.772164953 +0000 UTC Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.791934 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.801904 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.815509 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.826818 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.843291 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.844692 4826 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.858567 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.872479 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.898921 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.903359 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.903416 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.903480 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.903509 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.903605 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.903646 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.903707 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: 
\"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.903789 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.903826 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.903853 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.903882 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.903834 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.903911 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.904050 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.904105 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.904155 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.904176 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.904203 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.904243 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.904279 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.904314 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.904325 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.904351 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.904386 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.904420 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.904453 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.904490 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.904564 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.904598 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.904630 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.904715 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.904756 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.904803 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.904870 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.904907 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.904919 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.904943 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.905008 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.905033 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.905049 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.905080 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.905075 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.905125 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.905167 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.905201 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.905212 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.905233 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.905322 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.905357 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.905360 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.905384 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.905408 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.905429 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.905426 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.905429 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.905451 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.905505 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.905530 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.905559 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.905599 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.905683 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.905714 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.905746 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.905779 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.905811 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" 
(UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.905843 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.905874 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.905903 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.905934 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.905964 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.906022 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.906052 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.906126 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.906159 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.906187 4826 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.906217 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.906248 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.906283 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.906327 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.906364 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.906397 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.906431 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.906462 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.906494 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.906528 4826 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.906570 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.906601 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.906634 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.906666 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.906748 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.906789 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.906820 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.906860 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.906892 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 31 
07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.906937 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.906995 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.907031 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.907064 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.907094 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.907175 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.907210 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.907243 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.907273 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.907305 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.907338 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.907370 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.907399 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.907428 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.907459 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.907488 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.907517 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.907546 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.907575 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.907615 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: 
\"44663579-783b-4372-86d6-acf235a62d72\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.907649 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.907683 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.907714 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.907745 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.907776 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.907815 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.907848 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.907879 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.907909 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.907943 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.908002 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.908038 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.908068 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.908095 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.908125 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.908160 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.908337 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.908412 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.908448 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.908480 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.908514 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.908575 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.908615 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.908650 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.908684 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.908720 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.908757 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.908791 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.908823 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.908857 4826 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.908889 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.908923 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.908958 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.909198 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.909229 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.909279 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.909313 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.909343 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.909375 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.909407 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.909438 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.909496 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.909535 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.909568 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.909602 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.909637 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.909672 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.909709 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.909794 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.909828 4826 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.909863 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.909898 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.909931 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.909988 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.910027 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.910060 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.910090 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.910119 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.910149 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 
07:36:28.910179 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.910210 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.910238 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.910268 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.910300 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.910329 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.905555 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.905672 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.905714 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.905874 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.905943 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.906202 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.906228 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.906273 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.906389 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.906523 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.906566 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.906602 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.906727 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.910480 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.906748 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.906798 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.906858 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.906882 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.906919 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.907107 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.907235 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.907343 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.907819 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.908571 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.908594 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.908794 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.908990 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.909029 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.909091 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.909197 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.909210 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.909318 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.908594 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.910303 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.910353 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.910676 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.910621 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.911082 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.911340 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.911411 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.911647 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.911671 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.910362 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.911840 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.911877 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.911906 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.911931 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.912035 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.912065 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 07:36:28 crc 
kubenswrapper[4826]: I0131 07:36:28.912091 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.912130 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.912157 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.912193 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.912222 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.912219 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.912250 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.912273 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.912296 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.912319 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.912346 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.912372 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.912398 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.912444 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.912471 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.912497 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.912524 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.912593 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.912621 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.912631 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.912636 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.912649 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.912705 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.912781 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.912808 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.912928 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.912994 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.913023 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.913048 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.913099 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.913128 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.913179 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.913985 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.914341 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.914454 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.914517 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.914569 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.914606 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.914635 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.914690 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.914725 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.914716 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.914731 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.914774 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.914866 4826 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.914905 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.914923 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.915052 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.915120 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.915263 4826 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.915257 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.915281 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.915282 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). 
InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.915352 4826 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.915390 4826 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.915414 4826 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.915436 4826 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.915536 4826 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.915557 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.915565 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.915579 4826 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.915672 4826 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.915698 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.915720 4826 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.915740 4826 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.915757 4826 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.915777 4826 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.915868 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.915892 4826 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.915912 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.915930 4826 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.915948 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.915990 4826 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.916010 4826 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.916029 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.916048 4826 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.916066 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.916062 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.916082 4826 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.916135 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.916160 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.916182 4826 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.916186 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.916207 4826 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 
07:36:28.916235 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.916257 4826 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.916279 4826 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.916301 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.916378 4826 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.916406 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.916429 4826 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.916451 4826 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.916473 4826 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.916496 4826 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.916519 4826 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.916580 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.916605 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:28 crc 
kubenswrapper[4826]: I0131 07:36:28.916633 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.916655 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.916677 4826 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.916768 4826 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.916793 4826 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.916815 4826 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.917018 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.916252 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.916482 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.916700 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.917200 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.917347 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: E0131 07:36:28.917373 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:36:29.417354154 +0000 UTC m=+21.271240513 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.917593 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.917617 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.918078 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.918005 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.918472 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.919431 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.919814 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.919881 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.920010 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.920223 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.920632 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.921234 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). 
InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.921315 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.921499 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.921569 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.921772 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.921849 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.923162 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.923924 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.924807 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.925319 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.925466 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.925863 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.925916 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.926646 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.926869 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.927058 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.927243 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.927447 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.927613 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.928107 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.928293 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.928435 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.928470 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.928709 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.928910 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.929223 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.929242 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.929647 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.929674 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.929716 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.929833 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.930355 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.930674 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.930688 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.930827 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.931303 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.931634 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.931956 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.932118 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.932118 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.932256 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.932732 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: E0131 07:36:28.932811 4826 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 07:36:28 crc kubenswrapper[4826]: E0131 07:36:28.933098 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 07:36:29.433078646 +0000 UTC m=+21.286965005 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 07:36:28 crc kubenswrapper[4826]: E0131 07:36:28.933315 4826 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 07:36:28 crc kubenswrapper[4826]: E0131 07:36:28.933354 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 07:36:29.433344654 +0000 UTC m=+21.287231203 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.933543 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.933593 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.934677 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.935338 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.935491 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.935810 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.935945 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.936288 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.936602 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.936649 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.936940 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.937730 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.938152 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.938476 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.938550 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.939206 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.940271 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.940281 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.940336 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.940944 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.941547 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.941716 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.941748 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.941923 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.941924 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.942046 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.942074 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.942429 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.942497 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.942656 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.942880 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.943109 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.943662 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.944093 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.944204 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.944301 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.944367 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.944493 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: E0131 07:36:28.944654 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 07:36:28 crc kubenswrapper[4826]: E0131 07:36:28.944679 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 07:36:28 crc kubenswrapper[4826]: E0131 07:36:28.944695 4826 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:36:28 crc kubenswrapper[4826]: E0131 07:36:28.944785 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 07:36:29.444759502 +0000 UTC m=+21.298646081 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.947619 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.951416 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.951416 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.952232 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.952323 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.952643 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: E0131 07:36:28.954235 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.954246 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: E0131 07:36:28.954270 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 07:36:28 crc kubenswrapper[4826]: E0131 07:36:28.954301 4826 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:36:28 crc kubenswrapper[4826]: E0131 07:36:28.954362 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 07:36:29.454342057 +0000 UTC m=+21.308228516 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.954696 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.954699 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.954860 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.955032 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.956581 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.960470 4826 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="975a5a5c09f940e9a57b6c6df02fdf725740fb21d70046ad5bbe3f24dec7a0bd" exitCode=255 Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.960539 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.960534 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"975a5a5c09f940e9a57b6c6df02fdf725740fb21d70046ad5bbe3f24dec7a0bd"} Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.960706 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.960731 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.960894 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.961360 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.961491 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.961856 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.961870 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.962182 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.962253 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.962608 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.962790 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.963089 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.963256 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.964013 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.964023 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.964395 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.964694 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.964901 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.965176 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.965910 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.966578 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.966872 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.979261 4826 scope.go:117] "RemoveContainer" containerID="975a5a5c09f940e9a57b6c6df02fdf725740fb21d70046ad5bbe3f24dec7a0bd" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.979619 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.982437 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.988216 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.989012 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:36:28 crc kubenswrapper[4826]: I0131 07:36:28.998159 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.006475 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.008460 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.010482 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.013108 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.021340 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.022039 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.022145 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.022192 4826 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 31 
07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.022191 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.022204 4826 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.022234 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.022268 4826 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.022285 4826 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.022283 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.022303 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.022337 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.022345 4826 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.022355 4826 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.022363 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.022390 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.022399 4826 
reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.022407 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.022416 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.022424 4826 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.022433 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.022441 4826 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.022467 4826 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.022476 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.022484 4826 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.022492 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.022501 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.022510 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.022518 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" 
DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.022544 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.022552 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.022560 4826 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.022568 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.022577 4826 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.022586 4826 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.022596 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.023104 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.023115 4826 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.023123 4826 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.023131 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.023139 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.023149 4826 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.023158 4826 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.023166 4826 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.023175 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.023185 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.023193 4826 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.023201 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.023209 4826 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.023216 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.023224 4826 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.023231 4826 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.023240 4826 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.023248 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.023255 4826 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.023263 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: 
\"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.023325 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.023336 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.023344 4826 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.023352 4826 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.023379 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.023389 4826 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.023424 4826 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.023459 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.023469 4826 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.023491 4826 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.023500 4826 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.023508 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.023543 4826 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.023552 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.023560 4826 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.023567 4826 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.023575 4826 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.023897 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.023914 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.023922 4826 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024061 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024075 4826 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024083 4826 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024091 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024099 4826 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024125 4826 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024135 4826 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024143 4826 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024152 4826 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024161 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024170 4826 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024178 4826 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024186 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024213 4826 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024222 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024231 4826 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024238 4826 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024246 4826 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024255 4826 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" 
(UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024263 4826 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024290 4826 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024298 4826 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024308 4826 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024316 4826 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024324 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024333 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024396 4826 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024404 4826 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024413 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024422 4826 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024604 4826 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024614 4826 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024645 4826 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024654 4826 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024664 4826 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024673 4826 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024683 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024691 4826 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024717 4826 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024728 4826 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024736 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024744 4826 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024752 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024760 4826 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024767 4826 
reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024775 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024802 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024810 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024817 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024827 4826 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024835 4826 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024844 4826 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024852 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024878 4826 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024886 4826 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024894 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024901 4826 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024909 4826 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024917 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024924 4826 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024932 4826 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024958 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024978 4826 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024986 4826 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.024994 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.025003 4826 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.025011 4826 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.025036 4826 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.025043 4826 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.025051 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.025060 4826 reconciler_common.go:293] "Volume detached for 
volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.025068 4826 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.032052 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63a72bdc-ae2c-4ce3-bad4-877f01e2b370\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b561e0c482987fdc1c08682d00bf4c933f12313569408446d6c8a603c3556ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3caf57a9650c133c55f50d69726a83d40040bc7f173d536444d89f99854942ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30b86aff0d4af7948ead36a9ed2f4fa0fb5888de8ecdfa6eb6ffc1867030babe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster
-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://975a5a5c09f940e9a57b6c6df02fdf725740fb21d70046ad5bbe3f24dec7a0bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975a5a5c09f940e9a57b6c6df02fdf725740fb21d70046ad5bbe3f24dec7a0bd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"iserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 07:36:22.700868 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:36:22.705249 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2348209780/tls.crt::/tmp/serving-cert-2348209780/tls.key\\\\\\\"\\\\nI0131 07:36:28.522043 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 07:36:28.528635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 07:36:28.528685 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 07:36:28.528725 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 07:36:28.528736 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 07:36:28.539744 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 07:36:28.539766 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539773 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 07:36:28.539783 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 07:36:28.539787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 07:36:28.539791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 07:36:28.539944 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 07:36:28.545260 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0131 07:36:28.545358 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd3634cc3552a315ee1ef8114db096d456b7a97a92e68ab406ceccdf1eb57f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.042911 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.051219 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.062457 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.067046 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.077929 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.084038 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 07:36:29 crc kubenswrapper[4826]: W0131 07:36:29.094252 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-f7e1873745d4aafc19a24556c7747d1bbad42ca7568b42f4a9dacd82b1b26236 WatchSource:0}: Error finding container f7e1873745d4aafc19a24556c7747d1bbad42ca7568b42f4a9dacd82b1b26236: Status 404 returned error can't find the container with id f7e1873745d4aafc19a24556c7747d1bbad42ca7568b42f4a9dacd82b1b26236 Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.428760 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:36:29 crc kubenswrapper[4826]: E0131 07:36:29.428983 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:36:30.428933073 +0000 UTC m=+22.282819442 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.530244 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.530296 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.530319 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.530340 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:36:29 crc kubenswrapper[4826]: E0131 07:36:29.530430 4826 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 07:36:29 crc kubenswrapper[4826]: E0131 07:36:29.530450 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 07:36:29 crc kubenswrapper[4826]: E0131 07:36:29.530498 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 07:36:29 crc kubenswrapper[4826]: E0131 07:36:29.530530 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 07:36:29 crc kubenswrapper[4826]: E0131 07:36:29.530549 4826 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:36:29 crc 
kubenswrapper[4826]: E0131 07:36:29.530507 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 07:36:29 crc kubenswrapper[4826]: E0131 07:36:29.530606 4826 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:36:29 crc kubenswrapper[4826]: E0131 07:36:29.530470 4826 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 07:36:29 crc kubenswrapper[4826]: E0131 07:36:29.530511 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 07:36:30.530494322 +0000 UTC m=+22.384380681 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 07:36:29 crc kubenswrapper[4826]: E0131 07:36:29.530685 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 07:36:30.530670227 +0000 UTC m=+22.384556596 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:36:29 crc kubenswrapper[4826]: E0131 07:36:29.530703 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 07:36:30.530693447 +0000 UTC m=+22.384579826 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:36:29 crc kubenswrapper[4826]: E0131 07:36:29.530721 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 07:36:30.530712678 +0000 UTC m=+22.384599057 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.544247 4826 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-31 07:31:28 +0000 UTC, rotation deadline is 2026-12-22 14:48:50.089001248 +0000 UTC Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.544335 4826 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7807h12m20.544670799s for next certificate rotation Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.756927 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 08:45:57.578062581 +0000 UTC Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.808794 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:36:29 crc kubenswrapper[4826]: E0131 07:36:29.808916 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.964660 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"7d1814f450832909bd5bf1c4749feef46b0919e997cb1d241c988f4ec81051d5"} Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.964712 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"8e440dc1f7846af93bbe78f17e92d44d2f72ee0fc28c4c3b746ee3f67f0cd899"} Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.964725 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"f7e1873745d4aafc19a24556c7747d1bbad42ca7568b42f4a9dacd82b1b26236"} Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.966015 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"37289ae623d633b8d9c41697e0022bac0ef40df6da836d43df3bb36e83a7a217"} Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.966078 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"73113473c899085e24407b854efd5d6dafac52090521d64aa3b4789b0a9a7cf2"} Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.967909 4826 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.969563 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"dbf2c3303222b7cdaf3b5aa2c1847ff9e097579edeb4f7333079c340cc95ef46"} Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.969839 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.970495 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"f6178763929a6baad52ab0442457a54786062921fac4c702fd7200074d6e11a6"} Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.976228 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:29Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:29 crc kubenswrapper[4826]: I0131 07:36:29.989338 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:29Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.002581 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.018227 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63a72bdc-ae2c-4ce3-bad4-877f01e2b370\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b561e0c482987fdc1c08682d00bf4c933f12313569408446d6c8a603c3556ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3caf57a9650c133c55f50d69726a83d40040bc7f173d536444d89f99854942ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30b86aff0d4af7948ead36a9ed2f4fa0fb5888de8ecdfa6eb6ffc1867030babe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://975a5a5c09f940e9a57b6c6df02fdf725740fb21d70046ad5bbe3f24dec7a0bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975a5a5c09f940e9a57b6c6df02fdf725740fb21d70046ad5bbe3f24dec7a0bd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31
T07:36:28Z\\\",\\\"message\\\":\\\"iserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 07:36:22.700868 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:36:22.705249 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2348209780/tls.crt::/tmp/serving-cert-2348209780/tls.key\\\\\\\"\\\\nI0131 07:36:28.522043 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 07:36:28.528635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 07:36:28.528685 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 07:36:28.528725 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 07:36:28.528736 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 07:36:28.539744 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 07:36:28.539766 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539773 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 07:36:28.539783 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 07:36:28.539787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 07:36:28.539791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 07:36:28.539944 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 07:36:28.545260 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0131 07:36:28.545358 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd3634cc3552a315ee1ef8114db096d456b7a97a92e68ab406ceccdf1eb57f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.033026 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.052395 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.080100 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d1814f450832909bd5bf1c4749feef46b0919e997cb1d241c988f4ec81051d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e440dc1f7846af93bbe78f17e92d44d2f72ee0fc28c4c3b746ee3f67f0cd899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:2
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.098822 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-8tp2c"] Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.099186 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-8tp2c" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.100139 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.101766 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.104072 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.104330 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.133361 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63a72bdc-ae2c-4ce3-bad4-877f01e2b370\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b561e0c482987fdc1c08682d00bf4c933f12313569408446d6c8a603c3556ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3caf57a9650c133c55f50d69726a83d40040bc7f173d536444d89f99854942ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30b86aff0d4af7948ead36a9ed2f4fa0fb5888de8ecdfa6eb6ffc1867030babe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf2c3303222b7cdaf3b5aa2c1847ff9e097579edeb4f7333079c340cc95ef46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975a5a5c09f940e9a57b6c6df02fdf725740fb21d70046ad5bbe3f24dec7a0bd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"iserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 07:36:22.700868 1 builder.go:304] 
check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:36:22.705249 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2348209780/tls.crt::/tmp/serving-cert-2348209780/tls.key\\\\\\\"\\\\nI0131 07:36:28.522043 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 07:36:28.528635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 07:36:28.528685 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 07:36:28.528725 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 07:36:28.528736 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 07:36:28.539744 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 07:36:28.539766 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539773 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 07:36:28.539783 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 07:36:28.539787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 07:36:28.539791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 07:36:28.539944 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 07:36:28.545260 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0131 07:36:28.545358 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd3634cc3552a315ee1ef8114db096d456b7a97a92e68ab406ceccdf1eb57f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.165577 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.185356 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.201387 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37289ae623d633b8d9c41697e0022bac0ef40df6da836d43df3bb36e83a7a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.212856 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d1814f450832909bd5bf1c4749feef46b0919e997cb1d241c988f4ec81051d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e440dc1f7846af93bbe78f17e92d44d2f72ee0fc28c4c3b746ee3f67f0cd899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.225706 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.236584 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/0d1fbcfd-bce9-4f1c-bc64-d48b979a95d2-hosts-file\") pod \"node-resolver-8tp2c\" (UID: \"0d1fbcfd-bce9-4f1c-bc64-d48b979a95d2\") " pod="openshift-dns/node-resolver-8tp2c" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.236624 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x55d\" (UniqueName: \"kubernetes.io/projected/0d1fbcfd-bce9-4f1c-bc64-d48b979a95d2-kube-api-access-5x55d\") pod \"node-resolver-8tp2c\" (UID: \"0d1fbcfd-bce9-4f1c-bc64-d48b979a95d2\") " pod="openshift-dns/node-resolver-8tp2c" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.240302 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.252297 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.261555 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.269545 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tp2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d1fbcfd-bce9-4f1c-bc64-d48b979a95d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5x55d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tp2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.280789 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63a72bdc-ae2c-4ce3-bad4-877f01e2b370\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b561e0c482987fdc1c08682d00bf4c933f12313569408446d6c8a603c3556ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3caf57a9650c133c55f50d69726a83d40040bc7f173d536444d89f99854942ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30b86aff0d4af7948ead36a9ed2f4fa0fb5888de8ecdfa6eb6ffc1867030babe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2
7753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf2c3303222b7cdaf3b5aa2c1847ff9e097579edeb4f7333079c340cc95ef46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975a5a5c09f940e9a57b6c6df02fdf725740fb21d70046ad5bbe3f24dec7a0bd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"iserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 07:36:22.700868 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:36:22.705249 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2348209780/tls.crt::/tmp/serving-cert-2348209780/tls.key\\\\\\\"\\\\nI0131 07:36:28.522043 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 07:36:28.528635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 07:36:28.528685 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 07:36:28.528725 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 07:36:28.528736 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 07:36:28.539744 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 07:36:28.539766 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539773 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 07:36:28.539783 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 07:36:28.539787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 07:36:28.539791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 07:36:28.539944 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 07:36:28.545260 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0131 07:36:28.545358 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd3634cc3552a315ee1ef8114db096d456b7a97a92e68ab406ceccdf1eb57f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.294436 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.304384 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37289ae623d633b8d9c41697e0022bac0ef40df6da836d43df3bb36e83a7a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.314631 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d1814f450832909bd5bf1c4749feef46b0919e997cb1d241c988f4ec81051d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e440dc1f7846af93bbe78f17e92d44d2f72ee0fc28c4c3b746ee3f67f0cd899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.336988 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/0d1fbcfd-bce9-4f1c-bc64-d48b979a95d2-hosts-file\") pod \"node-resolver-8tp2c\" (UID: \"0d1fbcfd-bce9-4f1c-bc64-d48b979a95d2\") " pod="openshift-dns/node-resolver-8tp2c" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.337037 4826 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-5x55d\" (UniqueName: \"kubernetes.io/projected/0d1fbcfd-bce9-4f1c-bc64-d48b979a95d2-kube-api-access-5x55d\") pod \"node-resolver-8tp2c\" (UID: \"0d1fbcfd-bce9-4f1c-bc64-d48b979a95d2\") " pod="openshift-dns/node-resolver-8tp2c" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.337131 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/0d1fbcfd-bce9-4f1c-bc64-d48b979a95d2-hosts-file\") pod \"node-resolver-8tp2c\" (UID: \"0d1fbcfd-bce9-4f1c-bc64-d48b979a95d2\") " pod="openshift-dns/node-resolver-8tp2c" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.437462 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:36:30 crc kubenswrapper[4826]: E0131 07:36:30.437674 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:36:32.437643587 +0000 UTC m=+24.291529976 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.449297 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5x55d\" (UniqueName: \"kubernetes.io/projected/0d1fbcfd-bce9-4f1c-bc64-d48b979a95d2-kube-api-access-5x55d\") pod \"node-resolver-8tp2c\" (UID: \"0d1fbcfd-bce9-4f1c-bc64-d48b979a95d2\") " pod="openshift-dns/node-resolver-8tp2c" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.495848 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-8v6ng"] Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.496353 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-5fm7w"] Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.496551 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.497746 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-wtbb9"] Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.498130 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.498241 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.498297 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-5fm7w" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.498343 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-wtbb9" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.498358 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.498410 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.502436 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.503174 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.503258 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.503397 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.503481 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.503892 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.503945 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.504223 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.521045 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.537710 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.538069 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.538108 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.538139 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.538169 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:36:30 crc kubenswrapper[4826]: E0131 07:36:30.538221 4826 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 07:36:30 crc kubenswrapper[4826]: E0131 07:36:30.538230 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 07:36:30 crc kubenswrapper[4826]: E0131 07:36:30.538250 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 07:36:30 crc kubenswrapper[4826]: E0131 07:36:30.538260 4826 projected.go:194] Error preparing data for projected volume 
kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:36:30 crc kubenswrapper[4826]: E0131 07:36:30.538271 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 07:36:32.538255928 +0000 UTC m=+24.392142277 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 07:36:30 crc kubenswrapper[4826]: E0131 07:36:30.538283 4826 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 07:36:30 crc kubenswrapper[4826]: E0131 07:36:30.538286 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 07:36:30 crc kubenswrapper[4826]: E0131 07:36:30.538301 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 07:36:32.538287428 +0000 UTC m=+24.392173787 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:36:30 crc kubenswrapper[4826]: E0131 07:36:30.538311 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 07:36:30 crc kubenswrapper[4826]: E0131 07:36:30.538325 4826 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:36:30 crc kubenswrapper[4826]: E0131 07:36:30.538330 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 07:36:32.538315659 +0000 UTC m=+24.392202038 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 07:36:30 crc kubenswrapper[4826]: E0131 07:36:30.538362 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 07:36:32.53835153 +0000 UTC m=+24.392237899 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.566484 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.579445 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tp2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d1fbcfd-bce9-4f1c-bc64-d48b979a95d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5x55d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tp2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.597045 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63a72bdc-ae2c-4ce3-bad4-877f01e2b370\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b561e0c482987fdc1c08682d00bf4c933f12313569408446d6c8a603c3556ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3caf57a9650c133c55f50d69726a83d40040bc7f173d536444d89f99854942ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30b86aff0d4af7948ead36a9ed2f4fa0fb5888de8ecdfa6eb6ffc1867030babe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2
7753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf2c3303222b7cdaf3b5aa2c1847ff9e097579edeb4f7333079c340cc95ef46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975a5a5c09f940e9a57b6c6df02fdf725740fb21d70046ad5bbe3f24dec7a0bd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"iserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 07:36:22.700868 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:36:22.705249 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2348209780/tls.crt::/tmp/serving-cert-2348209780/tls.key\\\\\\\"\\\\nI0131 07:36:28.522043 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 07:36:28.528635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 07:36:28.528685 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 07:36:28.528725 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 07:36:28.528736 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 07:36:28.539744 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 07:36:28.539766 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539773 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 07:36:28.539783 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 07:36:28.539787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 07:36:28.539791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 07:36:28.539944 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 07:36:28.545260 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0131 07:36:28.545358 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd3634cc3552a315ee1ef8114db096d456b7a97a92e68ab406ceccdf1eb57f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.610433 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d1814f450832909bd5bf1c4749feef46b0919e997cb1d241c988f4ec81051d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e440dc1f7846af93bbe78f17e92d44d2f72ee0fc28c4c3b746ee3f67f0cd899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.624517 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.635385 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed10f53b-565a-4d14-a1d8-feabc15f08ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8v6ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.638911 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b672fd90-a70c-4f27-b711-e58f269efccd-cnibin\") pod \"multus-wtbb9\" (UID: \"b672fd90-a70c-4f27-b711-e58f269efccd\") " pod="openshift-multus/multus-wtbb9" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.638947 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b672fd90-a70c-4f27-b711-e58f269efccd-hostroot\") pod \"multus-wtbb9\" (UID: \"b672fd90-a70c-4f27-b711-e58f269efccd\") " pod="openshift-multus/multus-wtbb9" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.638988 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfvwb\" (UniqueName: \"kubernetes.io/projected/b672fd90-a70c-4f27-b711-e58f269efccd-kube-api-access-qfvwb\") pod \"multus-wtbb9\" (UID: \"b672fd90-a70c-4f27-b711-e58f269efccd\") " 
pod="openshift-multus/multus-wtbb9" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.639012 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b672fd90-a70c-4f27-b711-e58f269efccd-os-release\") pod \"multus-wtbb9\" (UID: \"b672fd90-a70c-4f27-b711-e58f269efccd\") " pod="openshift-multus/multus-wtbb9" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.639032 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b672fd90-a70c-4f27-b711-e58f269efccd-cni-binary-copy\") pod \"multus-wtbb9\" (UID: \"b672fd90-a70c-4f27-b711-e58f269efccd\") " pod="openshift-multus/multus-wtbb9" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.639051 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ed10f53b-565a-4d14-a1d8-feabc15f08ea-mcd-auth-proxy-config\") pod \"machine-config-daemon-8v6ng\" (UID: \"ed10f53b-565a-4d14-a1d8-feabc15f08ea\") " pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.639073 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-5fm7w\" (UID: \"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\") " pod="openshift-multus/multus-additional-cni-plugins-5fm7w" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.639094 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b672fd90-a70c-4f27-b711-e58f269efccd-host-run-netns\") pod \"multus-wtbb9\" (UID: \"b672fd90-a70c-4f27-b711-e58f269efccd\") " pod="openshift-multus/multus-wtbb9" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.639115 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b672fd90-a70c-4f27-b711-e58f269efccd-multus-conf-dir\") pod \"multus-wtbb9\" (UID: \"b672fd90-a70c-4f27-b711-e58f269efccd\") " pod="openshift-multus/multus-wtbb9" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.639161 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b672fd90-a70c-4f27-b711-e58f269efccd-multus-daemon-config\") pod \"multus-wtbb9\" (UID: \"b672fd90-a70c-4f27-b711-e58f269efccd\") " pod="openshift-multus/multus-wtbb9" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.639205 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92-system-cni-dir\") pod \"multus-additional-cni-plugins-5fm7w\" (UID: \"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\") " pod="openshift-multus/multus-additional-cni-plugins-5fm7w" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.639227 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92-os-release\") pod 
\"multus-additional-cni-plugins-5fm7w\" (UID: \"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\") " pod="openshift-multus/multus-additional-cni-plugins-5fm7w" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.639248 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b672fd90-a70c-4f27-b711-e58f269efccd-host-var-lib-kubelet\") pod \"multus-wtbb9\" (UID: \"b672fd90-a70c-4f27-b711-e58f269efccd\") " pod="openshift-multus/multus-wtbb9" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.639269 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92-cni-binary-copy\") pod \"multus-additional-cni-plugins-5fm7w\" (UID: \"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\") " pod="openshift-multus/multus-additional-cni-plugins-5fm7w" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.639289 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpws5\" (UniqueName: \"kubernetes.io/projected/7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92-kube-api-access-kpws5\") pod \"multus-additional-cni-plugins-5fm7w\" (UID: \"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\") " pod="openshift-multus/multus-additional-cni-plugins-5fm7w" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.639329 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92-tuning-conf-dir\") pod \"multus-additional-cni-plugins-5fm7w\" (UID: \"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\") " pod="openshift-multus/multus-additional-cni-plugins-5fm7w" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.639363 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b672fd90-a70c-4f27-b711-e58f269efccd-system-cni-dir\") pod \"multus-wtbb9\" (UID: \"b672fd90-a70c-4f27-b711-e58f269efccd\") " pod="openshift-multus/multus-wtbb9" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.639378 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b672fd90-a70c-4f27-b711-e58f269efccd-multus-socket-dir-parent\") pod \"multus-wtbb9\" (UID: \"b672fd90-a70c-4f27-b711-e58f269efccd\") " pod="openshift-multus/multus-wtbb9" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.639393 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b672fd90-a70c-4f27-b711-e58f269efccd-host-run-k8s-cni-cncf-io\") pod \"multus-wtbb9\" (UID: \"b672fd90-a70c-4f27-b711-e58f269efccd\") " pod="openshift-multus/multus-wtbb9" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.639415 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ed10f53b-565a-4d14-a1d8-feabc15f08ea-proxy-tls\") pod \"machine-config-daemon-8v6ng\" (UID: \"ed10f53b-565a-4d14-a1d8-feabc15f08ea\") " pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.639439 4826 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27tlc\" (UniqueName: \"kubernetes.io/projected/ed10f53b-565a-4d14-a1d8-feabc15f08ea-kube-api-access-27tlc\") pod \"machine-config-daemon-8v6ng\" (UID: \"ed10f53b-565a-4d14-a1d8-feabc15f08ea\") " pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.639460 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b672fd90-a70c-4f27-b711-e58f269efccd-host-var-lib-cni-multus\") pod \"multus-wtbb9\" (UID: \"b672fd90-a70c-4f27-b711-e58f269efccd\") " pod="openshift-multus/multus-wtbb9" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.639480 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b672fd90-a70c-4f27-b711-e58f269efccd-etc-kubernetes\") pod \"multus-wtbb9\" (UID: \"b672fd90-a70c-4f27-b711-e58f269efccd\") " pod="openshift-multus/multus-wtbb9" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.639520 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b672fd90-a70c-4f27-b711-e58f269efccd-multus-cni-dir\") pod \"multus-wtbb9\" (UID: \"b672fd90-a70c-4f27-b711-e58f269efccd\") " pod="openshift-multus/multus-wtbb9" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.639569 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b672fd90-a70c-4f27-b711-e58f269efccd-host-var-lib-cni-bin\") pod \"multus-wtbb9\" (UID: \"b672fd90-a70c-4f27-b711-e58f269efccd\") " pod="openshift-multus/multus-wtbb9" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.639590 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92-cnibin\") pod \"multus-additional-cni-plugins-5fm7w\" (UID: \"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\") " pod="openshift-multus/multus-additional-cni-plugins-5fm7w" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.639609 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ed10f53b-565a-4d14-a1d8-feabc15f08ea-rootfs\") pod \"machine-config-daemon-8v6ng\" (UID: \"ed10f53b-565a-4d14-a1d8-feabc15f08ea\") " pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.639632 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b672fd90-a70c-4f27-b711-e58f269efccd-host-run-multus-certs\") pod \"multus-wtbb9\" (UID: \"b672fd90-a70c-4f27-b711-e58f269efccd\") " pod="openshift-multus/multus-wtbb9" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.646921 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37289ae623d633b8d9c41697e0022bac0ef40df6da836d43df3bb36e83a7a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.658430 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37289ae623d633b8d9c41697e0022bac0ef40df6da836d43df3bb36e83a7a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.682830 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d1814f450832909bd5bf1c4749feef46b0919e997cb1d241c988f4ec81051d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e440dc1f7846af93bbe78f17e92d44d2f72ee0fc28c4c3b746ee3f67f0cd899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.704108 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed10f53b-565a-4d14-a1d8-feabc15f08ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8v6ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.711808 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-8tp2c" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.724319 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wtbb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b672fd90-a70c-4f27-b711-e58f269efccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.1
26.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wtbb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:30 crc kubenswrapper[4826]: W0131 07:36:30.724788 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d1fbcfd_bce9_4f1c_bc64_d48b979a95d2.slice/crio-b3ae0a5540c1bc0997d8f63695843cc942e2225ba1ae0dcccbf0488ae3a721e9 WatchSource:0}: Error finding container b3ae0a5540c1bc0997d8f63695843cc942e2225ba1ae0dcccbf0488ae3a721e9: Status 404 returned error can't find the container with id b3ae0a5540c1bc0997d8f63695843cc942e2225ba1ae0dcccbf0488ae3a721e9 Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.740766 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b672fd90-a70c-4f27-b711-e58f269efccd-cnibin\") pod \"multus-wtbb9\" (UID: \"b672fd90-a70c-4f27-b711-e58f269efccd\") " pod="openshift-multus/multus-wtbb9" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.740810 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b672fd90-a70c-4f27-b711-e58f269efccd-hostroot\") pod \"multus-wtbb9\" (UID: \"b672fd90-a70c-4f27-b711-e58f269efccd\") " pod="openshift-multus/multus-wtbb9" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.740828 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfvwb\" (UniqueName: \"kubernetes.io/projected/b672fd90-a70c-4f27-b711-e58f269efccd-kube-api-access-qfvwb\") pod \"multus-wtbb9\" (UID: \"b672fd90-a70c-4f27-b711-e58f269efccd\") " pod="openshift-multus/multus-wtbb9" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.740848 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b672fd90-a70c-4f27-b711-e58f269efccd-os-release\") pod \"multus-wtbb9\" (UID: \"b672fd90-a70c-4f27-b711-e58f269efccd\") " pod="openshift-multus/multus-wtbb9" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.740888 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b672fd90-a70c-4f27-b711-e58f269efccd-cnibin\") pod \"multus-wtbb9\" (UID: \"b672fd90-a70c-4f27-b711-e58f269efccd\") " pod="openshift-multus/multus-wtbb9" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.740867 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b672fd90-a70c-4f27-b711-e58f269efccd-cni-binary-copy\") pod \"multus-wtbb9\" (UID: \"b672fd90-a70c-4f27-b711-e58f269efccd\") " pod="openshift-multus/multus-wtbb9" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.740937 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ed10f53b-565a-4d14-a1d8-feabc15f08ea-mcd-auth-proxy-config\") pod \"machine-config-daemon-8v6ng\" (UID: \"ed10f53b-565a-4d14-a1d8-feabc15f08ea\") " 
pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.740889 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b672fd90-a70c-4f27-b711-e58f269efccd-hostroot\") pod \"multus-wtbb9\" (UID: \"b672fd90-a70c-4f27-b711-e58f269efccd\") " pod="openshift-multus/multus-wtbb9" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.740956 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-5fm7w\" (UID: \"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\") " pod="openshift-multus/multus-additional-cni-plugins-5fm7w" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.741075 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b672fd90-a70c-4f27-b711-e58f269efccd-os-release\") pod \"multus-wtbb9\" (UID: \"b672fd90-a70c-4f27-b711-e58f269efccd\") " pod="openshift-multus/multus-wtbb9" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.741163 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b672fd90-a70c-4f27-b711-e58f269efccd-host-run-netns\") pod \"multus-wtbb9\" (UID: \"b672fd90-a70c-4f27-b711-e58f269efccd\") " pod="openshift-multus/multus-wtbb9" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.741205 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b672fd90-a70c-4f27-b711-e58f269efccd-multus-conf-dir\") pod \"multus-wtbb9\" (UID: \"b672fd90-a70c-4f27-b711-e58f269efccd\") " pod="openshift-multus/multus-wtbb9" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.741258 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b672fd90-a70c-4f27-b711-e58f269efccd-multus-daemon-config\") pod \"multus-wtbb9\" (UID: \"b672fd90-a70c-4f27-b711-e58f269efccd\") " pod="openshift-multus/multus-wtbb9" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.741281 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92-system-cni-dir\") pod \"multus-additional-cni-plugins-5fm7w\" (UID: \"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\") " pod="openshift-multus/multus-additional-cni-plugins-5fm7w" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.741305 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92-os-release\") pod \"multus-additional-cni-plugins-5fm7w\" (UID: \"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\") " pod="openshift-multus/multus-additional-cni-plugins-5fm7w" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.741335 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b672fd90-a70c-4f27-b711-e58f269efccd-host-var-lib-kubelet\") pod \"multus-wtbb9\" (UID: \"b672fd90-a70c-4f27-b711-e58f269efccd\") " pod="openshift-multus/multus-wtbb9" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.741356 4826 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92-cni-binary-copy\") pod \"multus-additional-cni-plugins-5fm7w\" (UID: \"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\") " pod="openshift-multus/multus-additional-cni-plugins-5fm7w" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.741277 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b672fd90-a70c-4f27-b711-e58f269efccd-host-run-netns\") pod \"multus-wtbb9\" (UID: \"b672fd90-a70c-4f27-b711-e58f269efccd\") " pod="openshift-multus/multus-wtbb9" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.741379 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpws5\" (UniqueName: \"kubernetes.io/projected/7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92-kube-api-access-kpws5\") pod \"multus-additional-cni-plugins-5fm7w\" (UID: \"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\") " pod="openshift-multus/multus-additional-cni-plugins-5fm7w" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.741354 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92-system-cni-dir\") pod \"multus-additional-cni-plugins-5fm7w\" (UID: \"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\") " pod="openshift-multus/multus-additional-cni-plugins-5fm7w" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.741306 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b672fd90-a70c-4f27-b711-e58f269efccd-multus-conf-dir\") pod \"multus-wtbb9\" (UID: \"b672fd90-a70c-4f27-b711-e58f269efccd\") " pod="openshift-multus/multus-wtbb9" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.741419 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92-tuning-conf-dir\") pod \"multus-additional-cni-plugins-5fm7w\" (UID: \"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\") " pod="openshift-multus/multus-additional-cni-plugins-5fm7w" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.741851 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ed10f53b-565a-4d14-a1d8-feabc15f08ea-proxy-tls\") pod \"machine-config-daemon-8v6ng\" (UID: \"ed10f53b-565a-4d14-a1d8-feabc15f08ea\") " pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.741886 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b672fd90-a70c-4f27-b711-e58f269efccd-system-cni-dir\") pod \"multus-wtbb9\" (UID: \"b672fd90-a70c-4f27-b711-e58f269efccd\") " pod="openshift-multus/multus-wtbb9" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.741911 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b672fd90-a70c-4f27-b711-e58f269efccd-multus-socket-dir-parent\") pod \"multus-wtbb9\" (UID: \"b672fd90-a70c-4f27-b711-e58f269efccd\") " pod="openshift-multus/multus-wtbb9" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.741932 4826 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b672fd90-a70c-4f27-b711-e58f269efccd-host-run-k8s-cni-cncf-io\") pod \"multus-wtbb9\" (UID: \"b672fd90-a70c-4f27-b711-e58f269efccd\") " pod="openshift-multus/multus-wtbb9" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.741954 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27tlc\" (UniqueName: \"kubernetes.io/projected/ed10f53b-565a-4d14-a1d8-feabc15f08ea-kube-api-access-27tlc\") pod \"machine-config-daemon-8v6ng\" (UID: \"ed10f53b-565a-4d14-a1d8-feabc15f08ea\") " pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.741989 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b672fd90-a70c-4f27-b711-e58f269efccd-host-var-lib-cni-multus\") pod \"multus-wtbb9\" (UID: \"b672fd90-a70c-4f27-b711-e58f269efccd\") " pod="openshift-multus/multus-wtbb9" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.742005 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b672fd90-a70c-4f27-b711-e58f269efccd-etc-kubernetes\") pod \"multus-wtbb9\" (UID: \"b672fd90-a70c-4f27-b711-e58f269efccd\") " pod="openshift-multus/multus-wtbb9" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.742059 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b672fd90-a70c-4f27-b711-e58f269efccd-etc-kubernetes\") pod \"multus-wtbb9\" (UID: \"b672fd90-a70c-4f27-b711-e58f269efccd\") " pod="openshift-multus/multus-wtbb9" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.742368 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b672fd90-a70c-4f27-b711-e58f269efccd-cni-binary-copy\") pod \"multus-wtbb9\" (UID: \"b672fd90-a70c-4f27-b711-e58f269efccd\") " pod="openshift-multus/multus-wtbb9" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.742443 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b672fd90-a70c-4f27-b711-e58f269efccd-host-run-k8s-cni-cncf-io\") pod \"multus-wtbb9\" (UID: \"b672fd90-a70c-4f27-b711-e58f269efccd\") " pod="openshift-multus/multus-wtbb9" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.742466 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b672fd90-a70c-4f27-b711-e58f269efccd-multus-daemon-config\") pod \"multus-wtbb9\" (UID: \"b672fd90-a70c-4f27-b711-e58f269efccd\") " pod="openshift-multus/multus-wtbb9" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.742563 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-5fm7w\" (UID: \"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\") " pod="openshift-multus/multus-additional-cni-plugins-5fm7w" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.742603 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/b672fd90-a70c-4f27-b711-e58f269efccd-system-cni-dir\") pod \"multus-wtbb9\" (UID: \"b672fd90-a70c-4f27-b711-e58f269efccd\") " pod="openshift-multus/multus-wtbb9" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.742693 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ed10f53b-565a-4d14-a1d8-feabc15f08ea-mcd-auth-proxy-config\") pod \"machine-config-daemon-8v6ng\" (UID: \"ed10f53b-565a-4d14-a1d8-feabc15f08ea\") " pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.742709 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b672fd90-a70c-4f27-b711-e58f269efccd-host-var-lib-cni-multus\") pod \"multus-wtbb9\" (UID: \"b672fd90-a70c-4f27-b711-e58f269efccd\") " pod="openshift-multus/multus-wtbb9" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.742714 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b672fd90-a70c-4f27-b711-e58f269efccd-multus-socket-dir-parent\") pod \"multus-wtbb9\" (UID: \"b672fd90-a70c-4f27-b711-e58f269efccd\") " pod="openshift-multus/multus-wtbb9" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.742733 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b672fd90-a70c-4f27-b711-e58f269efccd-multus-cni-dir\") pod \"multus-wtbb9\" (UID: \"b672fd90-a70c-4f27-b711-e58f269efccd\") " pod="openshift-multus/multus-wtbb9" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.742802 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b672fd90-a70c-4f27-b711-e58f269efccd-multus-cni-dir\") pod \"multus-wtbb9\" (UID: \"b672fd90-a70c-4f27-b711-e58f269efccd\") " pod="openshift-multus/multus-wtbb9" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.742834 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b672fd90-a70c-4f27-b711-e58f269efccd-host-var-lib-cni-bin\") pod \"multus-wtbb9\" (UID: \"b672fd90-a70c-4f27-b711-e58f269efccd\") " pod="openshift-multus/multus-wtbb9" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.742871 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92-cnibin\") pod \"multus-additional-cni-plugins-5fm7w\" (UID: \"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\") " pod="openshift-multus/multus-additional-cni-plugins-5fm7w" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.742877 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b672fd90-a70c-4f27-b711-e58f269efccd-host-var-lib-kubelet\") pod \"multus-wtbb9\" (UID: \"b672fd90-a70c-4f27-b711-e58f269efccd\") " pod="openshift-multus/multus-wtbb9" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.742908 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ed10f53b-565a-4d14-a1d8-feabc15f08ea-rootfs\") pod \"machine-config-daemon-8v6ng\" (UID: \"ed10f53b-565a-4d14-a1d8-feabc15f08ea\") " 
pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.742942 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b672fd90-a70c-4f27-b711-e58f269efccd-host-run-multus-certs\") pod \"multus-wtbb9\" (UID: \"b672fd90-a70c-4f27-b711-e58f269efccd\") " pod="openshift-multus/multus-wtbb9" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.743053 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b672fd90-a70c-4f27-b711-e58f269efccd-host-run-multus-certs\") pod \"multus-wtbb9\" (UID: \"b672fd90-a70c-4f27-b711-e58f269efccd\") " pod="openshift-multus/multus-wtbb9" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.743093 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b672fd90-a70c-4f27-b711-e58f269efccd-host-var-lib-cni-bin\") pod \"multus-wtbb9\" (UID: \"b672fd90-a70c-4f27-b711-e58f269efccd\") " pod="openshift-multus/multus-wtbb9" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.743140 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92-cnibin\") pod \"multus-additional-cni-plugins-5fm7w\" (UID: \"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\") " pod="openshift-multus/multus-additional-cni-plugins-5fm7w" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.743177 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ed10f53b-565a-4d14-a1d8-feabc15f08ea-rootfs\") pod \"machine-config-daemon-8v6ng\" (UID: \"ed10f53b-565a-4d14-a1d8-feabc15f08ea\") " pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.743230 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92-os-release\") pod \"multus-additional-cni-plugins-5fm7w\" (UID: \"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\") " pod="openshift-multus/multus-additional-cni-plugins-5fm7w" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.743563 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92-cni-binary-copy\") pod \"multus-additional-cni-plugins-5fm7w\" (UID: \"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\") " pod="openshift-multus/multus-additional-cni-plugins-5fm7w" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.744716 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92-tuning-conf-dir\") pod \"multus-additional-cni-plugins-5fm7w\" (UID: \"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\") " pod="openshift-multus/multus-additional-cni-plugins-5fm7w" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.751550 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ed10f53b-565a-4d14-a1d8-feabc15f08ea-proxy-tls\") pod \"machine-config-daemon-8v6ng\" (UID: \"ed10f53b-565a-4d14-a1d8-feabc15f08ea\") " pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" Jan 
31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.752853 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.758348 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 14:36:24.174712653 +0000 UTC Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.774637 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpws5\" (UniqueName: \"kubernetes.io/projected/7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92-kube-api-access-kpws5\") pod \"multus-additional-cni-plugins-5fm7w\" (UID: \"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\") " pod="openshift-multus/multus-additional-cni-plugins-5fm7w" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.774761 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfvwb\" (UniqueName: \"kubernetes.io/projected/b672fd90-a70c-4f27-b711-e58f269efccd-kube-api-access-qfvwb\") pod \"multus-wtbb9\" (UID: \"b672fd90-a70c-4f27-b711-e58f269efccd\") " 
pod="openshift-multus/multus-wtbb9" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.775096 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27tlc\" (UniqueName: \"kubernetes.io/projected/ed10f53b-565a-4d14-a1d8-feabc15f08ea-kube-api-access-27tlc\") pod \"machine-config-daemon-8v6ng\" (UID: \"ed10f53b-565a-4d14-a1d8-feabc15f08ea\") " pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.777062 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5fm7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5fm7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.799462 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.808844 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.808991 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:36:30 crc kubenswrapper[4826]: E0131 07:36:30.809065 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:36:30 crc kubenswrapper[4826]: E0131 07:36:30.809176 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.813919 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.814429 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.815152 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63a72bdc-ae2c-4ce3-bad4-877f01e2b370\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b561e0c482987fdc1c08682d00bf4c933f12313569408446d6c8a603c3556ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3caf57a9650c133c55f50d69726a83d40040bc7f173d536444d89f99854942ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30b86aff0d4af7948ead36a9ed2f4fa0fb5888de8ecdfa6eb6ffc1867030babe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf2c3303222b7cdaf3b5aa2c1847ff9e097579edeb4f7333079c340cc95ef46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975a5a5c09f940e9a57b6c6df02fdf725740fb21d70046ad5bbe3f24dec7a0bd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"iserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 07:36:22.700868 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:36:22.705249 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2348209780/tls.crt::/tmp/serving-cert-2348209780/tls.key\\\\\\\"\\\\nI0131 07:36:28.522043 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 07:36:28.528635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 07:36:28.528685 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 07:36:28.528725 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 07:36:28.528736 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 07:36:28.539744 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 07:36:28.539766 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539773 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 07:36:28.539783 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 07:36:28.539787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 07:36:28.539791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 07:36:28.539944 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 07:36:28.545260 1 
requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0131 07:36:28.545358 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd3634cc3552a315ee1ef8114db096d456b7a97a92e68ab406ceccdf1eb57f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.815583 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.816162 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.817069 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" 
path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.817526 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.818085 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.818982 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.819373 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.819567 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.820849 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.821338 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.822919 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.823399 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.823905 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.825398 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.825923 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.826873 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.827417 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 31 07:36:30 crc 
kubenswrapper[4826]: I0131 07:36:30.827998 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.831176 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-5fm7w" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.832293 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.833201 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.834042 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.834652 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 31 07:36:30 crc kubenswrapper[4826]: W0131 07:36:30.836083 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded10f53b_565a_4d14_a1d8_feabc15f08ea.slice/crio-eb7710855158da3eb4e0265869dd1bd94219e9e75cedb4e7bc6a4f6370a3bba2 WatchSource:0}: Error finding container eb7710855158da3eb4e0265869dd1bd94219e9e75cedb4e7bc6a4f6370a3bba2: Status 404 returned error can't find the container with id eb7710855158da3eb4e0265869dd1bd94219e9e75cedb4e7bc6a4f6370a3bba2 Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.836509 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.836852 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.837096 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.838537 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-wtbb9" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.838801 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.840015 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.841365 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.842469 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.843931 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.844652 4826 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.844791 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.847652 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.848343 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.848887 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.852183 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.852285 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.853946 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.855097 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.856606 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.857584 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.858828 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.859749 4826 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.861341 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.862537 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.863913 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.864719 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.865724 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tp2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d1fbcfd-bce9-4f1c-bc64-d48b979a95d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5x55d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tp2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.866207 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.867797 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.868530 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.869637 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.870755 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.871511 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.872867 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.874097 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 31 07:36:30 crc kubenswrapper[4826]: W0131 07:36:30.874215 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb672fd90_a70c_4f27_b711_e58f269efccd.slice/crio-7ea87cac74a14cfe4489c437357e0a7a339f419f9f983b9cb6a7017c9466bd5f WatchSource:0}: Error finding container 7ea87cac74a14cfe4489c437357e0a7a339f419f9f983b9cb6a7017c9466bd5f: Status 404 returned error can't find the container with id 7ea87cac74a14cfe4489c437357e0a7a339f419f9f983b9cb6a7017c9466bd5f Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.874730 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qvwnb"] Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.875787 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.880080 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.880187 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.880255 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.880310 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.880406 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.880511 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.880556 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.901388 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wtbb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b672fd90-a70c-4f27-b711-e58f269efccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wtbb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.916315 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37289ae623d633b8d9c41697e0022bac0ef40df6da836d43df3bb36e83a7a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.928797 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d1814f450832909bd5bf1c4749feef46b0919e997cb1d241c988f4ec81051d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e440dc1f7846af93bbe78f17e92d44d2f72ee0fc28c4c3b746ee3f67f0cd899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.950660 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed10f53b-565a-4d14-a1d8-feabc15f08ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8v6ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.966314 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5fm7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5fm7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.980349 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.982226 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5fm7w" event={"ID":"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92","Type":"ContainerStarted","Data":"dd43e2e44e984024e3d0450f6a8fa22e71e803e0202a4d0fd378d63e07bfcf75"} Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.993984 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:30Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.995993 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 31 07:36:30 crc kubenswrapper[4826]: I0131 07:36:30.999616 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" event={"ID":"ed10f53b-565a-4d14-a1d8-feabc15f08ea","Type":"ContainerStarted","Data":"eb7710855158da3eb4e0265869dd1bd94219e9e75cedb4e7bc6a4f6370a3bba2"} Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.001956 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-8tp2c" event={"ID":"0d1fbcfd-bce9-4f1c-bc64-d48b979a95d2","Type":"ContainerStarted","Data":"b3ae0a5540c1bc0997d8f63695843cc942e2225ba1ae0dcccbf0488ae3a721e9"} Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.006852 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wtbb9" event={"ID":"b672fd90-a70c-4f27-b711-e58f269efccd","Type":"ContainerStarted","Data":"7ea87cac74a14cfe4489c437357e0a7a339f419f9f983b9cb6a7017c9466bd5f"} Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.007867 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.016727 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.017276 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.025569 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tp2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d1fbcfd-bce9-4f1c-bc64-d48b979a95d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5x55d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tp2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.045466 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5db04412-b62f-417c-91ac-776767d6102f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvwnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.047094 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-host-kubelet\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" 
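Every "Failed to update status for pod" entry above ends with the same root cause: the kubelet's status patch is routed through the pod.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743/pod, and that webhook's serving certificate is only valid until 2025-08-24T17:21:41Z while the node clock reads 2026-01-31, so every TLS handshake is rejected with "x509: certificate has expired or is not yet valid" before any patch can be applied. The Go sketch below is illustrative only and not part of the cluster tooling; the address and port are taken from the log, everything else is an assumption. It fetches the webhook's serving certificate and prints its validity window, which is one way to confirm the expiry reported in these entries.

    // certcheck.go: minimal sketch to inspect the serving certificate of the
    // webhook endpoint named in the log (127.0.0.1:9743). Hypothetical helper,
    // not OpenShift tooling; only the address comes from the log above.
    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
        "time"
    )

    func main() {
        // InsecureSkipVerify lets the handshake complete so the certificate can
        // be read even though normal verification would reject it as expired.
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            log.Fatalf("dial webhook endpoint: %v", err)
        }
        defer conn.Close()

        now := time.Now()
        for _, cert := range conn.ConnectionState().PeerCertificates {
            fmt.Printf("subject:   %s\n", cert.Subject)
            fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
            fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))
            if now.After(cert.NotAfter) {
                // This is the condition the kubelet's handshake reports as
                // "x509: certificate has expired or is not yet valid".
                fmt.Printf("EXPIRED: current time %s is after %s\n",
                    now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
            }
        }
    }

Skipping verification in this probe is only what allows the expired certificate to be read; the kubelet verifies normally, which is why the status patches in the entries above keep failing until the webhook serving certificate (mounted at /etc/webhook-cert/ in the network-node-identity webhook container, per the volume mounts logged earlier) is rotated or the node clock again falls inside its validity window.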
Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.047428 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-host-slash\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.047449 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-node-log\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.047476 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5db04412-b62f-417c-91ac-776767d6102f-ovn-node-metrics-cert\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.047495 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-etc-openvswitch\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.047516 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-run-openvswitch\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.047531 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-log-socket\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.047548 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5db04412-b62f-417c-91ac-776767d6102f-env-overrides\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.047563 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-host-run-ovn-kubernetes\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.047652 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-var-lib-openvswitch\") pod \"ovnkube-node-qvwnb\" (UID: 
\"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.047671 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5db04412-b62f-417c-91ac-776767d6102f-ovnkube-config\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.047686 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-run-ovn\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.047700 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.047717 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvlrl\" (UniqueName: \"kubernetes.io/projected/5db04412-b62f-417c-91ac-776767d6102f-kube-api-access-wvlrl\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.047733 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-host-run-netns\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.047746 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-systemd-units\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.047771 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-host-cni-netd\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.047794 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-run-systemd\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.047811 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" 
(UniqueName: \"kubernetes.io/configmap/5db04412-b62f-417c-91ac-776767d6102f-ovnkube-script-lib\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.047828 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-host-cni-bin\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.059411 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63a72bdc-ae2c-4ce3-bad4-877f01e2b370\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b561e0c482987fdc1c08682d00bf4c933f12313569408446d6c8a603c3556ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3caf57a9650c133c55f50d69726a83d40040bc7f173d536444d89f99854942ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30b86aff0d4af7948ead36a9ed2f4fa0fb5888de8ecdfa6eb6ffc1867030babe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf2c3303222b7cdaf3b5aa2c1847ff9e097579edeb4f7333079c340cc95ef46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975a5a5c09f940e9a57b6c6df02fdf725740fb21d70046ad5bbe3f24dec7a0bd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"iserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 07:36:22.700868 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:36:22.705249 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2348209780/tls.crt::/tmp/serving-cert-2348209780/tls.key\\\\\\\"\\\\nI0131 07:36:28.522043 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 07:36:28.528635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 07:36:28.528685 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 07:36:28.528725 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 07:36:28.528736 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 07:36:28.539744 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 07:36:28.539766 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539773 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 07:36:28.539783 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 07:36:28.539787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 07:36:28.539791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 07:36:28.539944 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 07:36:28.545260 1 requestheader_controller.go:172] Starting 
RequestHeaderAuthRequestController\\\\nF0131 07:36:28.545358 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd3634cc3552a315ee1ef8114db096d456b7a97a92e68ab406ceccdf1eb57f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.073798 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.090222 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37289ae623d633b8d9c41697e0022bac0ef40df6da836d43df3bb36e83a7a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.102683 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d1814f450832909bd5bf1c4749feef46b0919e997cb1d241c988f4ec81051d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e440dc1f7846af93bbe78f17e92d44d2f72ee0fc28c4c3b746ee3f67f0cd899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.116345 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed10f53b-565a-4d14-a1d8-feabc15f08ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8v6ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.128867 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wtbb9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b672fd90-a70c-4f27-b711-e58f269efccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wtbb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.148658 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-host-cni-netd\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.148722 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-run-systemd\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.148739 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5db04412-b62f-417c-91ac-776767d6102f-ovnkube-script-lib\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.148758 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-host-cni-bin\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.148777 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-node-log\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.148800 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-host-kubelet\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.148820 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-host-slash\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.148847 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5db04412-b62f-417c-91ac-776767d6102f-ovn-node-metrics-cert\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.148873 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-etc-openvswitch\") pod 
\"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.148887 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5db04412-b62f-417c-91ac-776767d6102f-env-overrides\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.148901 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-run-openvswitch\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.148922 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-log-socket\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.148937 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-host-run-ovn-kubernetes\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.148953 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-var-lib-openvswitch\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.148987 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5db04412-b62f-417c-91ac-776767d6102f-ovnkube-config\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.149001 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-run-ovn\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.149017 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.149032 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvlrl\" (UniqueName: \"kubernetes.io/projected/5db04412-b62f-417c-91ac-776767d6102f-kube-api-access-wvlrl\") pod \"ovnkube-node-qvwnb\" 
(UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.149055 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-host-run-netns\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.149071 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-systemd-units\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.149147 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-systemd-units\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.149180 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-host-cni-netd\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.149202 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-run-systemd\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.149433 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-log-socket\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.149475 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-host-slash\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.149546 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-host-cni-bin\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.149572 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-node-log\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.149594 4826 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-host-kubelet\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.149850 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5db04412-b62f-417c-91ac-776767d6102f-ovnkube-script-lib\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.149887 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-host-run-ovn-kubernetes\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.149910 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-var-lib-openvswitch\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.149991 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-run-openvswitch\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.148892 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0580356-3fa0-4446-b20b-afc4164435c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e817cf54f4d9940c02491ebbe29ded4c0e8b3832c315b04eba99a9d36d267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b99d4cba5d569a977b7d05a7190498a4e34fefbf7dc44db8148ba398de0f5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://adf31702fe035db3a963262acd0c20802f8fd7230cf5c01cb578fa88179bd129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://161a31c4d5a9b58bef37d7177019e6eec0b53c9
56877c212337834672f4ac5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1796ef962408ccd32578b856d0339f68286f421ffbb915e0ae46c7a5989d1958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.150148 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-run-ovn\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.150175 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5db04412-b62f-417c-91ac-776767d6102f-env-overrides\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.150177 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.150200 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-etc-openvswitch\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.150228 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-host-run-netns\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.150356 4826 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5db04412-b62f-417c-91ac-776767d6102f-ovnkube-config\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.154061 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5db04412-b62f-417c-91ac-776767d6102f-ovn-node-metrics-cert\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.162789 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.165449 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvlrl\" (UniqueName: \"kubernetes.io/projected/5db04412-b62f-417c-91ac-776767d6102f-kube-api-access-wvlrl\") pod \"ovnkube-node-qvwnb\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.176294 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5fm7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5fm7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.187188 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.192886 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.201218 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63a72bdc-ae2c-4ce3-bad4-877f01e2b370\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b561e0c482987fdc1c08682d00bf4c933f12313569408446d6c8a603c3556ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3caf57a9650c133c55f50d69726a83d40040bc7f173d536444d89f99854942ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30b86aff0d4af7948ead36a9ed2f4fa0fb5888de8ecdfa6eb6ffc1867030babe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-sync
er\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf2c3303222b7cdaf3b5aa2c1847ff9e097579edeb4f7333079c340cc95ef46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975a5a5c09f940e9a57b6c6df02fdf725740fb21d70046ad5bbe3f24dec7a0bd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"iserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 07:36:22.700868 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:36:22.705249 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2348209780/tls.crt::/tmp/serving-cert-2348209780/tls.key\\\\\\\"\\\\nI0131 07:36:28.522043 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 07:36:28.528635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 07:36:28.528685 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 07:36:28.528725 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 07:36:28.528736 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 07:36:28.539744 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 07:36:28.539766 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539773 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 07:36:28.539783 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 07:36:28.539787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 07:36:28.539791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 07:36:28.539944 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 07:36:28.545260 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0131 07:36:28.545358 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd3634cc3552a315ee1ef8114db096d456b7a97a92e68ab406ceccdf1eb57f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:31 crc kubenswrapper[4826]: W0131 07:36:31.204582 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5db04412_b62f_417c_91ac_776767d6102f.slice/crio-6ce3aa4663864561f147e7b740f16c73f1b27265c6ec1a88d44c097a92983841 WatchSource:0}: Error finding container 6ce3aa4663864561f147e7b740f16c73f1b27265c6ec1a88d44c097a92983841: Status 404 returned error can't find the container with id 6ce3aa4663864561f147e7b740f16c73f1b27265c6ec1a88d44c097a92983841 Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.221840 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.237896 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.250754 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tp2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d1fbcfd-bce9-4f1c-bc64-d48b979a95d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5x55d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tp2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.272512 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5db04412-b62f-417c-91ac-776767d6102f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvwnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.758484 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 04:57:06.355042331 +0000 UTC Jan 31 07:36:31 crc kubenswrapper[4826]: I0131 07:36:31.807865 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:36:31 crc kubenswrapper[4826]: E0131 07:36:31.808007 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.018668 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-6lwnf"] Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.019232 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-6lwnf" Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.021618 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" event={"ID":"ed10f53b-565a-4d14-a1d8-feabc15f08ea","Type":"ContainerStarted","Data":"3d04378db699fb51e197d67da8fbc728904d522724c2a02158c546497905d106"} Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.021651 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" event={"ID":"ed10f53b-565a-4d14-a1d8-feabc15f08ea","Type":"ContainerStarted","Data":"d03515f366b860c28601b042a9a45c022c9a25aa15fab97b3e3698d3d7c1eb37"} Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.023408 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-8tp2c" event={"ID":"0d1fbcfd-bce9-4f1c-bc64-d48b979a95d2","Type":"ContainerStarted","Data":"91dfef0e05500b5bc7d60a1905667d2ddf67a56af60dc72a55d1b29223d6586a"} Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.024614 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.025219 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.025717 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.026324 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.027194 4826 generic.go:334] "Generic (PLEG): container finished" podID="5db04412-b62f-417c-91ac-776767d6102f" containerID="01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0" exitCode=0 Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.027279 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" event={"ID":"5db04412-b62f-417c-91ac-776767d6102f","Type":"ContainerDied","Data":"01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0"} Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.027326 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" event={"ID":"5db04412-b62f-417c-91ac-776767d6102f","Type":"ContainerStarted","Data":"6ce3aa4663864561f147e7b740f16c73f1b27265c6ec1a88d44c097a92983841"} Jan 31 
07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.031373 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"71d2b49940601c069c38a05027bd5e801dd377d4aed46a502c192ca9941468a9"} Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.035874 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wtbb9" event={"ID":"b672fd90-a70c-4f27-b711-e58f269efccd","Type":"ContainerStarted","Data":"3dc6a9e1063a189fc3331ef1bc5bd05ac611295089d1da241ca07c529e63ab7e"} Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.037215 4826 generic.go:334] "Generic (PLEG): container finished" podID="7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92" containerID="c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db" exitCode=0 Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.037761 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5fm7w" event={"ID":"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92","Type":"ContainerDied","Data":"c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db"} Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.045417 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:32Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.065265 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5fm7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5fm7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:32Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.085962 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0580356-3fa0-4446-b20b-afc4164435c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e817cf54f4d9940c02491ebbe29ded4c0e8b3832c315b04eba99a9d36d267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b99d4cba5d569a977b7d05a7190498a4e34fefbf7dc44db8148ba398de0f5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://adf31702fe035db3a963262acd0c20802f8fd7230cf5c01cb578fa88179bd129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://161a31c4d5a9b58bef37d7177019e6eec0b53c9
56877c212337834672f4ac5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1796ef962408ccd32578b856d0339f68286f421ffbb915e0ae46c7a5989d1958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:32Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.095416 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:32Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.105472 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:32Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.115642 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:32Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.124557 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tp2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d1fbcfd-bce9-4f1c-bc64-d48b979a95d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5x55d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tp2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:32Z is 
after 2025-08-24T17:21:41Z" Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.141521 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5db04412-b62f-417c-91ac-776767d6102f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\
\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}
],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvwnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:32Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.150873 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6lwnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b53fa37-2f8b-49d4-bd96-2bfe008beba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm9x7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:32Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-6lwnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:32Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.159289 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4b53fa37-2f8b-49d4-bd96-2bfe008beba7-host\") pod \"node-ca-6lwnf\" (UID: \"4b53fa37-2f8b-49d4-bd96-2bfe008beba7\") " pod="openshift-image-registry/node-ca-6lwnf" Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.159911 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4b53fa37-2f8b-49d4-bd96-2bfe008beba7-serviceca\") pod \"node-ca-6lwnf\" (UID: \"4b53fa37-2f8b-49d4-bd96-2bfe008beba7\") " pod="openshift-image-registry/node-ca-6lwnf" Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.163792 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm9x7\" (UniqueName: \"kubernetes.io/projected/4b53fa37-2f8b-49d4-bd96-2bfe008beba7-kube-api-access-wm9x7\") pod \"node-ca-6lwnf\" (UID: \"4b53fa37-2f8b-49d4-bd96-2bfe008beba7\") " pod="openshift-image-registry/node-ca-6lwnf" Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.164011 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63a72bdc-ae2c-4ce3-bad4-877f01e2b370\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b561e0c482987fdc1c08682d00bf4c933f12313569408446d6c8a603c3556ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3caf57a9650c133c55f50d69726a83d40040bc7f173d536444d89f99854942ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30b86aff0d4af7948ead36a9ed2f4fa0fb5888de8ecdfa6eb6ffc1867030babe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf2c3303222b7cdaf3b5aa2c1847ff9e097579edeb4f7333079c340cc95ef46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975a5a5c09f940e9a57b6c6df02fdf725740fb21d70046ad5bbe3f24dec7a0bd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"iserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 07:36:22.700868 1 builder.go:304] 
check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:36:22.705249 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2348209780/tls.crt::/tmp/serving-cert-2348209780/tls.key\\\\\\\"\\\\nI0131 07:36:28.522043 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 07:36:28.528635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 07:36:28.528685 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 07:36:28.528725 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 07:36:28.528736 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 07:36:28.539744 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 07:36:28.539766 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539773 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 07:36:28.539783 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 07:36:28.539787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 07:36:28.539791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 07:36:28.539944 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 07:36:28.545260 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0131 07:36:28.545358 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd3634cc3552a315ee1ef8114db096d456b7a97a92e68ab406ceccdf1eb57f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:32Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.175623 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d1814f450832909bd5bf1c4749feef46b0919e997cb1d241c988f4ec81051d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e440dc1f7846af93bbe78f17e92d44d2f72ee0fc28c4c3b746ee3f67f0cd899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:32Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.186713 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed10f53b-565a-4d14-a1d8-feabc15f08ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8v6ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:32Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.198121 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wtbb9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b672fd90-a70c-4f27-b711-e58f269efccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wtbb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:32Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.209656 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37289ae623d633b8d9c41697e0022bac0ef40df6da836d43df3bb36e83a7a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:32Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.223553 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63a72bdc-ae2c-4ce3-bad4-877f01e2b370\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b561e0c482987fdc1c08682d00bf4c933f12313569408446d6c8a603c3556ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3caf57a9650c133c55f50d69726a83d40040bc7f173d536444d89f99854942ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30b86aff0d4af7948ead36a9ed2f4fa0fb5888de8ecdfa6eb6ffc1867030babe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf2c3303222b7cdaf3b5aa2c1847ff9e097579edeb4f7333079c340cc95ef46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://975a5a5c09f940e9a57b6c6df02fdf725740fb21d70046ad5bbe3f24dec7a0bd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"iserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 07:36:22.700868 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:36:22.705249 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2348209780/tls.crt::/tmp/serving-cert-2348209780/tls.key\\\\\\\"\\\\nI0131 07:36:28.522043 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 07:36:28.528635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 07:36:28.528685 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 07:36:28.528725 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 07:36:28.528736 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 07:36:28.539744 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 07:36:28.539766 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539773 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 07:36:28.539783 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 07:36:28.539787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 07:36:28.539791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 07:36:28.539944 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 07:36:28.545260 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0131 07:36:28.545358 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd3634cc3552a315ee1ef8114db096d456b7a97a92e68ab406ceccdf1eb57f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:32Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.236474 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:32Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.248904 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:32Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.261185 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tp2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d1fbcfd-bce9-4f1c-bc64-d48b979a95d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91dfef0e05500b5bc7d60a1905667d2ddf67a56af60dc72a55d1b29223d6586a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5x55d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tp2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T07:36:32Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.264329 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm9x7\" (UniqueName: \"kubernetes.io/projected/4b53fa37-2f8b-49d4-bd96-2bfe008beba7-kube-api-access-wm9x7\") pod \"node-ca-6lwnf\" (UID: \"4b53fa37-2f8b-49d4-bd96-2bfe008beba7\") " pod="openshift-image-registry/node-ca-6lwnf" Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.264384 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4b53fa37-2f8b-49d4-bd96-2bfe008beba7-host\") pod \"node-ca-6lwnf\" (UID: \"4b53fa37-2f8b-49d4-bd96-2bfe008beba7\") " pod="openshift-image-registry/node-ca-6lwnf" Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.264400 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4b53fa37-2f8b-49d4-bd96-2bfe008beba7-serviceca\") pod \"node-ca-6lwnf\" (UID: \"4b53fa37-2f8b-49d4-bd96-2bfe008beba7\") " pod="openshift-image-registry/node-ca-6lwnf" Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.264480 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4b53fa37-2f8b-49d4-bd96-2bfe008beba7-host\") pod \"node-ca-6lwnf\" (UID: \"4b53fa37-2f8b-49d4-bd96-2bfe008beba7\") " pod="openshift-image-registry/node-ca-6lwnf" Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.265623 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4b53fa37-2f8b-49d4-bd96-2bfe008beba7-serviceca\") pod \"node-ca-6lwnf\" (UID: \"4b53fa37-2f8b-49d4-bd96-2bfe008beba7\") " pod="openshift-image-registry/node-ca-6lwnf" Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.277592 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5db04412-b62f-417c-91ac-776767d6102f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvwnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:32Z 
is after 2025-08-24T17:21:41Z" Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.286121 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6lwnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b53fa37-2f8b-49d4-bd96-2bfe008beba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm9x7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6lwnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:32Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.287306 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm9x7\" (UniqueName: \"kubernetes.io/projected/4b53fa37-2f8b-49d4-bd96-2bfe008beba7-kube-api-access-wm9x7\") pod \"node-ca-6lwnf\" (UID: \"4b53fa37-2f8b-49d4-bd96-2bfe008beba7\") " pod="openshift-image-registry/node-ca-6lwnf" Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.298060 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37289ae623d633b8d9c41697e0022bac0ef40df6da836d43df3bb36e83a7a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:32Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.310165 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d1814f450832909bd5bf1c4749feef46b0919e997cb1d241c988f4ec81051d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e440dc1f7846af93bbe78f17e92d44d2f72ee0fc28c4c3b746ee3f67f0cd899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:32Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.320985 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed10f53b-565a-4d14-a1d8-feabc15f08ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d04378db699fb51e197d67da8fbc728904d522724c2a02158c546497905d106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03515f366b860c28601b042a9a45c022c9a25aa15fab97b3e3698d3d7c1eb37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8v6ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:32Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.332471 4826 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-wtbb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b672fd90-a70c-4f27-b711-e58f269efccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dc6a9e1063a189fc3331ef1bc5bd05ac611295089d1da241ca07c529e63ab7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-wtbb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:32Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.356150 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0580356-3fa0-4446-b20b-afc4164435c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e817cf54f4d9940c02491ebbe29ded4c0e8b3832c315b04eba99a9d36d267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b99d4cba5d569a977b7d05a7190498a4e34fefbf7dc44db8148ba398de0f5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://adf31702fe035db3a963262acd0c20802f8fd7230cf5c01cb578fa88179bd129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://161a31c4d5a9b58bef37d7177019e6eec0b53c956877c212337834672f4ac5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1796ef962408ccd32578b856d0339f68286f421ffbb915e0ae46c7a5989d1958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49
117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:32Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.367610 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d2b49940601c069c38a05027bd5e801dd377d4aed46a502c192ca9941468a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:32Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.402682 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5fm7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5fm7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:32Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:32 crc 
kubenswrapper[4826]: I0131 07:36:32.433610 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-6lwnf" Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.446504 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:32Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:32 crc kubenswrapper[4826]: W0131 07:36:32.450913 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b53fa37_2f8b_49d4_bd96_2bfe008beba7.slice/crio-ac61d826f1a576bc8f37aa0718b47010021cfd73ed5d597c3ada3a9f46099264 WatchSource:0}: Error finding container ac61d826f1a576bc8f37aa0718b47010021cfd73ed5d597c3ada3a9f46099264: Status 404 returned error can't find the container with id ac61d826f1a576bc8f37aa0718b47010021cfd73ed5d597c3ada3a9f46099264 Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.466008 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:36:32 crc kubenswrapper[4826]: E0131 07:36:32.466352 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:36:36.466336426 +0000 UTC m=+28.320222785 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.567445 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.567507 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:36:32 crc kubenswrapper[4826]: E0131 07:36:32.567605 4826 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 07:36:32 crc kubenswrapper[4826]: E0131 07:36:32.567657 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 07:36:36.567638617 +0000 UTC m=+28.421524976 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 07:36:32 crc kubenswrapper[4826]: E0131 07:36:32.567769 4826 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 07:36:32 crc kubenswrapper[4826]: E0131 07:36:32.567809 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 07:36:36.567797501 +0000 UTC m=+28.421683860 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 07:36:32 crc kubenswrapper[4826]: E0131 07:36:32.569270 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 07:36:32 crc kubenswrapper[4826]: E0131 07:36:32.569295 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 07:36:32 crc kubenswrapper[4826]: E0131 07:36:32.569308 4826 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.569335 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:36:32 crc kubenswrapper[4826]: E0131 07:36:32.569352 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 07:36:36.569339065 +0000 UTC m=+28.423225424 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.569392 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:36:32 crc kubenswrapper[4826]: E0131 07:36:32.569474 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 07:36:32 crc kubenswrapper[4826]: E0131 07:36:32.569490 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 07:36:32 crc kubenswrapper[4826]: E0131 07:36:32.569500 4826 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:36:32 crc kubenswrapper[4826]: E0131 07:36:32.569539 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 07:36:36.569529471 +0000 UTC m=+28.423415830 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.758745 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 01:11:31.84468312 +0000 UTC Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.808298 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:36:32 crc kubenswrapper[4826]: E0131 07:36:32.808412 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:36:32 crc kubenswrapper[4826]: I0131 07:36:32.808529 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:36:32 crc kubenswrapper[4826]: E0131 07:36:32.808586 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:36:33 crc kubenswrapper[4826]: I0131 07:36:33.044213 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" event={"ID":"5db04412-b62f-417c-91ac-776767d6102f","Type":"ContainerStarted","Data":"14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5"} Jan 31 07:36:33 crc kubenswrapper[4826]: I0131 07:36:33.044673 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" event={"ID":"5db04412-b62f-417c-91ac-776767d6102f","Type":"ContainerStarted","Data":"0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d"} Jan 31 07:36:33 crc kubenswrapper[4826]: I0131 07:36:33.044691 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" event={"ID":"5db04412-b62f-417c-91ac-776767d6102f","Type":"ContainerStarted","Data":"7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e"} Jan 31 07:36:33 crc kubenswrapper[4826]: I0131 07:36:33.044704 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" event={"ID":"5db04412-b62f-417c-91ac-776767d6102f","Type":"ContainerStarted","Data":"bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b"} Jan 31 07:36:33 crc kubenswrapper[4826]: I0131 07:36:33.044715 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" event={"ID":"5db04412-b62f-417c-91ac-776767d6102f","Type":"ContainerStarted","Data":"25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0"} Jan 31 07:36:33 crc kubenswrapper[4826]: I0131 07:36:33.044732 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" event={"ID":"5db04412-b62f-417c-91ac-776767d6102f","Type":"ContainerStarted","Data":"0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2"} Jan 31 07:36:33 crc kubenswrapper[4826]: I0131 07:36:33.045325 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-6lwnf" event={"ID":"4b53fa37-2f8b-49d4-bd96-2bfe008beba7","Type":"ContainerStarted","Data":"4800822b9eb7af18af4eaf5699b582b98e900e6a645e17ec2a5a09757777ad75"} Jan 31 07:36:33 crc kubenswrapper[4826]: I0131 07:36:33.045376 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-6lwnf" event={"ID":"4b53fa37-2f8b-49d4-bd96-2bfe008beba7","Type":"ContainerStarted","Data":"ac61d826f1a576bc8f37aa0718b47010021cfd73ed5d597c3ada3a9f46099264"} Jan 31 07:36:33 crc kubenswrapper[4826]: I0131 07:36:33.047737 4826 generic.go:334] "Generic (PLEG): container finished" podID="7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92" containerID="58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca" exitCode=0 Jan 31 07:36:33 crc kubenswrapper[4826]: I0131 07:36:33.047872 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-additional-cni-plugins-5fm7w" event={"ID":"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92","Type":"ContainerDied","Data":"58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca"} Jan 31 07:36:33 crc kubenswrapper[4826]: I0131 07:36:33.058370 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:33Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:33 crc kubenswrapper[4826]: I0131 07:36:33.071660 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:33Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:33 crc kubenswrapper[4826]: I0131 07:36:33.082037 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tp2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d1fbcfd-bce9-4f1c-bc64-d48b979a95d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91dfef0e05500b5bc7d60a1905667d2ddf67a56af60dc72a55d1b29223d6586a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5x55d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tp2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:33Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:33 crc kubenswrapper[4826]: I0131 07:36:33.097686 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5db04412-b62f-417c-91ac-776767d6102f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvwnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:33Z 
is after 2025-08-24T17:21:41Z" Jan 31 07:36:33 crc kubenswrapper[4826]: I0131 07:36:33.107212 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6lwnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b53fa37-2f8b-49d4-bd96-2bfe008beba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4800822b9eb7af18af4eaf5699b582b98e900e6a645e17ec2a5a09757777ad75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm9x7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6lwnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:33Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:33 crc kubenswrapper[4826]: I0131 07:36:33.119317 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63a72bdc-ae2c-4ce3-bad4-877f01e2b370\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b561e0c482987fdc1c08682d00bf4c933f12313569408446d6c8a603c3556ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3caf57a9650c133c55f50d69726a83d40040bc7f173d536444d89f99854942ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30b86aff0d4af7948ead36a9ed2f4fa0fb5888de8ecdfa6eb6ffc1867030babe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf2c3303222b7cdaf3b5aa2c1847ff9e097579edeb4f7333079c340cc95ef46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://975a5a5c09f940e9a57b6c6df02fdf725740fb21d70046ad5bbe3f24dec7a0bd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"iserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 07:36:22.700868 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:36:22.705249 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2348209780/tls.crt::/tmp/serving-cert-2348209780/tls.key\\\\\\\"\\\\nI0131 07:36:28.522043 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 07:36:28.528635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 07:36:28.528685 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 07:36:28.528725 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 07:36:28.528736 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 07:36:28.539744 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 07:36:28.539766 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539773 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 07:36:28.539783 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 07:36:28.539787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 07:36:28.539791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 07:36:28.539944 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 07:36:28.545260 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0131 07:36:28.545358 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd3634cc3552a315ee1ef8114db096d456b7a97a92e68ab406ceccdf1eb57f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:33Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:33 crc kubenswrapper[4826]: I0131 07:36:33.133267 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed10f53b-565a-4d14-a1d8-feabc15f08ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d04378db699fb51e197d67da8fbc728904d522724c2a02158c546497905d106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03515f366b860c28601b042a9a45c022c9a25aa15fab97b3e3698d3d7c1eb37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8v6ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:33Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:33 crc kubenswrapper[4826]: I0131 07:36:33.148834 4826 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-wtbb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b672fd90-a70c-4f27-b711-e58f269efccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dc6a9e1063a189fc3331ef1bc5bd05ac611295089d1da241ca07c529e63ab7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-wtbb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:33Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:33 crc kubenswrapper[4826]: I0131 07:36:33.161108 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37289ae623d633b8d9c41697e0022bac0ef40df6da836d43df3bb36e83a7a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:33Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:33 crc kubenswrapper[4826]: I0131 07:36:33.173668 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d1814f450832909bd5bf1c4749feef46b0919e997cb1d241c988f4ec81051d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e440dc1f7846af93bbe78f17e92d44d2f72ee0fc28c4c3b746ee3f67f0cd899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:33Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:33 crc kubenswrapper[4826]: I0131 07:36:33.185310 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d2b49940601c069c38a05027bd5e801dd377d4aed46a502c192ca9941468a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:33Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:33 crc kubenswrapper[4826]: I0131 07:36:33.197770 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5fm7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5fm7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:33Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:33 crc 
kubenswrapper[4826]: I0131 07:36:33.215064 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0580356-3fa0-4446-b20b-afc4164435c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e817cf54f4d9940c02491ebbe29ded4c0e8b3832c315b04eba99a9d36d267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b99d4cba5d569a977b7d05a7190498a4e34fefbf7dc44db8148ba398de0f5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://adf31702fe035db3a963262acd0c20802f8fd7230cf5c01cb578fa88179bd129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://161a31c4d5a9b58bef37d7177019e6eec0b53c956877c212337834672f4ac5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1796ef962408ccd32578b856d0339f68286f421ffbb915e0ae46c7a5989d1958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:11Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:33Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:33 crc kubenswrapper[4826]: I0131 07:36:33.226085 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:33Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:33 crc kubenswrapper[4826]: I0131 07:36:33.237076 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:33Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:33 crc kubenswrapper[4826]: I0131 07:36:33.254808 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5db04412-b62f-417c-91ac-776767d6102f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvwnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:33Z 
is after 2025-08-24T17:21:41Z" Jan 31 07:36:33 crc kubenswrapper[4826]: I0131 07:36:33.264064 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6lwnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b53fa37-2f8b-49d4-bd96-2bfe008beba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4800822b9eb7af18af4eaf5699b582b98e900e6a645e17ec2a5a09757777ad75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm9x7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6lwnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:33Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:33 crc kubenswrapper[4826]: I0131 07:36:33.280214 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63a72bdc-ae2c-4ce3-bad4-877f01e2b370\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b561e0c482987fdc1c08682d00bf4c933f12313569408446d6c8a603c3556ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3caf57a9650c133c55f50d69726a83d40040bc7f173d536444d89f99854942ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30b86aff0d4af7948ead36a9ed2f4fa0fb5888de8ecdfa6eb6ffc1867030babe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf2c3303222b7cdaf3b5aa2c1847ff9e097579edeb4f7333079c340cc95ef46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://975a5a5c09f940e9a57b6c6df02fdf725740fb21d70046ad5bbe3f24dec7a0bd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"iserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 07:36:22.700868 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:36:22.705249 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2348209780/tls.crt::/tmp/serving-cert-2348209780/tls.key\\\\\\\"\\\\nI0131 07:36:28.522043 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 07:36:28.528635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 07:36:28.528685 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 07:36:28.528725 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 07:36:28.528736 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 07:36:28.539744 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 07:36:28.539766 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539773 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 07:36:28.539783 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 07:36:28.539787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 07:36:28.539791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 07:36:28.539944 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 07:36:28.545260 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0131 07:36:28.545358 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd3634cc3552a315ee1ef8114db096d456b7a97a92e68ab406ceccdf1eb57f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:33Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:33 crc kubenswrapper[4826]: I0131 07:36:33.291992 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:33Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:33 crc kubenswrapper[4826]: I0131 07:36:33.303371 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:33Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:33 crc kubenswrapper[4826]: I0131 07:36:33.314236 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tp2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d1fbcfd-bce9-4f1c-bc64-d48b979a95d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91dfef0e05500b5bc7d60a1905667d2ddf67a56af60dc72a55d1b29223d6586a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5x55d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tp2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T07:36:33Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:33 crc kubenswrapper[4826]: I0131 07:36:33.327845 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37289ae623d633b8d9c41697e0022bac0ef40df6da836d43df3bb36e83a7a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:33Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:33 crc kubenswrapper[4826]: I0131 07:36:33.363647 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d1814f450832909bd5bf1c4749feef46b0919e997cb1d241c988f4ec81051d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e440dc1f7846af93bbe78f17e92d44d2f72ee0fc28c4c3b746ee3f67f0cd899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:33Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:33 crc kubenswrapper[4826]: I0131 07:36:33.400203 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed10f53b-565a-4d14-a1d8-feabc15f08ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d04378db699fb51e197d67da8fbc728904d522724c2a02158c546497905d106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03515f366b860c28601b042a9a45c022c9a25aa15fab97b3e3698d3d7c1eb37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8v6ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:33Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:33 crc kubenswrapper[4826]: I0131 07:36:33.436490 4826 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-wtbb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b672fd90-a70c-4f27-b711-e58f269efccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dc6a9e1063a189fc3331ef1bc5bd05ac611295089d1da241ca07c529e63ab7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-wtbb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:33Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:33 crc kubenswrapper[4826]: I0131 07:36:33.484240 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0580356-3fa0-4446-b20b-afc4164435c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e817cf54f4d9940c02491ebbe29ded4c0e8b3832c315b04eba99a9d36d267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b99d4cba5d569a977b7d05a7190498a4e34fefbf7dc44db8148ba398de0f5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://adf31702fe035db3a963262acd0c20802f8fd7230cf5c01cb578fa88179bd129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://161a31c4d5a9b58bef37d7177019e6eec0b53c956877c212337834672f4ac5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1796ef962408ccd32578b856d0339f68286f421ffbb915e0ae46c7a5989d1958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49
117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:33Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:33 crc kubenswrapper[4826]: I0131 07:36:33.517840 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d2b49940601c069c38a05027bd5e801dd377d4aed46a502c192ca9941468a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:33Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:33 crc kubenswrapper[4826]: I0131 07:36:33.562993 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5fm7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5fm7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:33Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:33 crc kubenswrapper[4826]: I0131 07:36:33.759937 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 18:05:37.172275083 +0000 UTC Jan 31 07:36:33 crc kubenswrapper[4826]: I0131 07:36:33.808539 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:36:33 crc kubenswrapper[4826]: E0131 07:36:33.808654 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.015133 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.021850 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.024719 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.032567 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.045345 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.053441 4826 generic.go:334] "Generic (PLEG): container finished" podID="7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92" containerID="85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f" exitCode=0 Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.053497 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5fm7w" event={"ID":"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92","Type":"ContainerDied","Data":"85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f"} Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.059506 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tp2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d1fbcfd-bce9-4f1c-bc64-d48b979a95d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91dfef0e05500b5bc7d60a1905667d2ddf67a56af60dc72a55d1b29223d6586a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5x55d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hos
tIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tp2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:34 crc kubenswrapper[4826]: E0131 07:36:34.060179 4826 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.087366 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5db04412-b62f-417c-91ac-776767d6102f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvwnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:34Z 
is after 2025-08-24T17:21:41Z" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.099019 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6lwnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b53fa37-2f8b-49d4-bd96-2bfe008beba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4800822b9eb7af18af4eaf5699b582b98e900e6a645e17ec2a5a09757777ad75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm9x7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6lwnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.113283 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63a72bdc-ae2c-4ce3-bad4-877f01e2b370\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b561e0c482987fdc1c08682d00bf4c933f12313569408446d6c8a603c3556ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3caf57a9650c133c55f50d69726a83d40040bc7f173d536444d89f99854942ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30b86aff0d4af7948ead36a9ed2f4fa0fb5888de8ecdfa6eb6ffc1867030babe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf2c3303222b7cdaf3b5aa2c1847ff9e097579edeb4f7333079c340cc95ef46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://975a5a5c09f940e9a57b6c6df02fdf725740fb21d70046ad5bbe3f24dec7a0bd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"iserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 07:36:22.700868 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:36:22.705249 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2348209780/tls.crt::/tmp/serving-cert-2348209780/tls.key\\\\\\\"\\\\nI0131 07:36:28.522043 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 07:36:28.528635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 07:36:28.528685 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 07:36:28.528725 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 07:36:28.528736 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 07:36:28.539744 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 07:36:28.539766 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539773 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 07:36:28.539783 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 07:36:28.539787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 07:36:28.539791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 07:36:28.539944 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 07:36:28.545260 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0131 07:36:28.545358 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd3634cc3552a315ee1ef8114db096d456b7a97a92e68ab406ceccdf1eb57f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.123887 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed10f53b-565a-4d14-a1d8-feabc15f08ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d04378db699fb51e197d67da8fbc728904d522724c2a02158c546497905d106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03515f366b860c28601b042a9a45c022c9a25aa15fab97b3e3698d3d7c1eb37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8v6ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.138528 4826 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-wtbb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b672fd90-a70c-4f27-b711-e58f269efccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dc6a9e1063a189fc3331ef1bc5bd05ac611295089d1da241ca07c529e63ab7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-wtbb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.151668 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37289ae623d633b8d9c41697e0022bac0ef40df6da836d43df3bb36e83a7a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.165350 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d1814f450832909bd5bf1c4749feef46b0919e997cb1d241c988f4ec81051d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e440dc1f7846af93bbe78f17e92d44d2f72ee0fc28c4c3b746ee3f67f0cd899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.176548 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d2b49940601c069c38a05027bd5e801dd377d4aed46a502c192ca9941468a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.189950 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5fm7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5fm7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.210040 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0580356-3fa0-4446-b20b-afc4164435c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e817cf54f4d9940c02491ebbe29ded4c0e8b3832c315b04eba99a9d36d267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b99d4cba5d569a977b7d05a7190498a4e34fefbf7dc44db8148ba398de0f5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://adf31702fe035db3a963262acd0c20802f8fd7230cf5c01cb578fa88179bd129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageI
D\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://161a31c4d5a9b58bef37d7177019e6eec0b53c956877c212337834672f4ac5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1796ef962408ccd32578b856d0339f68286f421ffbb915e0ae46c7a5989d1958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.222485 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.231851 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6lwnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b53fa37-2f8b-49d4-bd96-2bfe008beba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4800822b9eb7af18af4eaf5699b582b98e900e6a645e17ec2a5a09757777ad75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm9x7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6lwnf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.245665 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63a72bdc-ae2c-4ce3-bad4-877f01e2b370\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b561e0c482987fdc1c08682d00bf4c933f12313569408446d6c8a603c3556ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3caf57a9650c133c55f50d69726a83d40040bc7f173d536444d89f99854942ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30b86aff0d4af7948ead36a9ed2f4fa0fb5888de8ecdfa6eb6ffc1867030babe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf2c3303222b7cdaf3b5aa2c1847ff9e097579edeb4f7333079c340cc95ef46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975a5a5c09f940e9a57b6c6df02fdf725740fb21d70046ad5bbe3f24dec7a0bd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"iserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 07:36:22.700868 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:36:22.705249 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2348209780/tls.crt::/tmp/serving-cert-2348209780/tls.key\\\\\\\"\\\\nI0131 07:36:28.522043 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 07:36:28.528635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 07:36:28.528685 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 07:36:28.528725 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 07:36:28.528736 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 07:36:28.539744 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 07:36:28.539766 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539773 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 07:36:28.539783 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 07:36:28.539787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 07:36:28.539791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 07:36:28.539944 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 07:36:28.545260 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0131 07:36:28.545358 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd3634cc3552a315ee1ef8114db096d456b7a97a92e68ab406ceccdf1eb57f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.275135 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.316352 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.355320 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tp2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d1fbcfd-bce9-4f1c-bc64-d48b979a95d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91dfef0e05500b5bc7d60a1905667d2ddf67a56af60dc72a55d1b29223d6586a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5x55d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tp2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T07:36:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.406103 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5db04412-b62f-417c-91ac-776767d6102f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\
"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc82
9ad81652fcc61d6763136684af2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvwnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.481870 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37289ae623d633b8d9c41697e0022bac0ef40df6da836d43df3bb36e83a7a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.498480 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d1814f450832909bd5bf1c4749feef46b0919e997cb1d241c988f4ec81051d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e440dc1f7846af93bbe78f17e92d44d2f72ee0fc28c4c3b746ee3f67f0cd899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T07:36:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.516550 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed10f53b-565a-4d14-a1d8-feabc15f08ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d04378db699fb51e197d67da8fbc728904d522724c2a02158c546497905d106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03515f366b860c28601b042a9a45c022c9a25aa15fab97b3e3698d3d7c1eb37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8v6ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.561138 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wtbb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b672fd90-a70c-4f27-b711-e58f269efccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dc6a9e1063a189fc3331ef1bc5bd05ac611295089d1da241ca07c529e63ab7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wtbb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.606003 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0580356-3fa0-4446-b20b-afc4164435c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e817cf54f4d9940c02491ebbe29ded4c0e8b3832c315b04eba99a9d36d267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b99d4cba5d569a977b7d05a7190498a4e34fefbf7dc44db8148ba398de0f5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://adf31702fe035db3a963262acd0c20802f8fd7230cf5c01cb578fa88179bd129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://161a31c4d5a9b58bef37d7177019e6eec0b53c956877c212337834672f4ac5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1796ef962408ccd32578b856d0339f68286f421ffbb915e0ae46c7a5989d1958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b
4901be7071a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.640744 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"661bc240-bc34-44a0-bd2f-72708c4a5dc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc26e2f01400aeeb0956ae2aef52df0251350f193805892e534081b4cac248e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa95fe2bb0adc0053648f26282e011bad0ea1327773688c954b8ffbf222d6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4124eac7bdc757ef7c1f68b28fb1f54a56cda06e9b781a026277efe04172f8e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e986869735ddfd85e88828d0123e35bbc507c1d31efba8c420632539ff4a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.676177 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d2b49940601c069c38a05027bd5e801dd377d4aed46a502c192ca9941468a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-31T07:36:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.721719 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5fm7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5fm7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.758323 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.761044 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 01:32:32.662422889 +0000 UTC Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.808805 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:36:34 crc kubenswrapper[4826]: E0131 07:36:34.809016 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.809222 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:36:34 crc kubenswrapper[4826]: E0131 07:36:34.810939 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.903369 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.905782 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.905826 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.905844 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.906002 4826 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.912647 4826 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.912941 4826 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.914346 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.914391 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.914402 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.914420 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.914433 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:34Z","lastTransitionTime":"2026-01-31T07:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:34 crc kubenswrapper[4826]: E0131 07:36:34.937659 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"410cfc83-ff74-4210-b833-727c4d6db644\\\",\\\"systemUUID\\\":\\\"5dc4a50b-5ade-4352-ba95-1ca9483f1f64\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.942227 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.942282 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.942302 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.942327 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.942345 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:34Z","lastTransitionTime":"2026-01-31T07:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:34 crc kubenswrapper[4826]: E0131 07:36:34.963178 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"410cfc83-ff74-4210-b833-727c4d6db644\\\",\\\"systemUUID\\\":\\\"5dc4a50b-5ade-4352-ba95-1ca9483f1f64\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.968892 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.968956 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.969062 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.969095 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.969117 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:34Z","lastTransitionTime":"2026-01-31T07:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:34 crc kubenswrapper[4826]: E0131 07:36:34.986398 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"410cfc83-ff74-4210-b833-727c4d6db644\\\",\\\"systemUUID\\\":\\\"5dc4a50b-5ade-4352-ba95-1ca9483f1f64\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:34Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.993890 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.993955 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.993994 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.994020 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:34 crc kubenswrapper[4826]: I0131 07:36:34.994034 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:34Z","lastTransitionTime":"2026-01-31T07:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:35 crc kubenswrapper[4826]: E0131 07:36:35.012801 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"410cfc83-ff74-4210-b833-727c4d6db644\\\",\\\"systemUUID\\\":\\\"5dc4a50b-5ade-4352-ba95-1ca9483f1f64\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:35Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.018182 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.018240 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.018265 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.018298 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.018328 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:35Z","lastTransitionTime":"2026-01-31T07:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:35 crc kubenswrapper[4826]: E0131 07:36:35.037180 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"410cfc83-ff74-4210-b833-727c4d6db644\\\",\\\"systemUUID\\\":\\\"5dc4a50b-5ade-4352-ba95-1ca9483f1f64\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:35Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:35 crc kubenswrapper[4826]: E0131 07:36:35.037345 4826 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.039355 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.039500 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.039597 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.039693 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.039812 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:35Z","lastTransitionTime":"2026-01-31T07:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.059086 4826 generic.go:334] "Generic (PLEG): container finished" podID="7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92" containerID="aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7" exitCode=0 Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.059177 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5fm7w" event={"ID":"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92","Type":"ContainerDied","Data":"aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7"} Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.076201 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tp2c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d1fbcfd-bce9-4f1c-bc64-d48b979a95d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91dfef0e05500b5bc7d60a1905667d2ddf67a56af60dc72a55d1b29223d6586a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5x55d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tp2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:35Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.104027 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5db04412-b62f-417c-91ac-776767d6102f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvwnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:35Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.118038 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6lwnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b53fa37-2f8b-49d4-bd96-2bfe008beba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4800822b9eb7af18af4eaf5699b582b98e900e6a645e17ec2a5a09757777ad75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm9x7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6lwnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:35Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.135728 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63a72bdc-ae2c-4ce3-bad4-877f01e2b370\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b561e0c482987fdc1c08682d00bf4c933f12313569408446d6c8a603c3556ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3caf57a9650c133c55f50d69726a83d40040bc7f173d536444d89f99854942ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30b86aff0d4af7948ead36a9ed2f4fa0fb5888de8ecdfa6eb6ffc1867030babe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf2c3303222b7cdaf3b5aa2c1847ff9e097579edeb4f7333079c340cc95ef46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975a5a5c09f940e9a57b6c6df02fdf725740fb21d70046ad5bbe3f24dec7a0bd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"iserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 07:36:22.700868 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:36:22.705249 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2348209780/tls.crt::/tmp/serving-cert-2348209780/tls.key\\\\\\\"\\\\nI0131 07:36:28.522043 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 07:36:28.528635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 07:36:28.528685 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 07:36:28.528725 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 07:36:28.528736 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 07:36:28.539744 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 07:36:28.539766 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539773 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 07:36:28.539783 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 07:36:28.539787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 07:36:28.539791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 07:36:28.539944 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 07:36:28.545260 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0131 07:36:28.545358 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd3634cc3552a315ee1ef8114db096d456b7a97a92e68ab406ceccdf1eb57f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:35Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.142931 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.143002 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.143019 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.143037 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.143049 4826 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:35Z","lastTransitionTime":"2026-01-31T07:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.153670 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:35Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.166281 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:35Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.182325 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37289ae623d633b8d9c41697e0022bac0ef40df6da836d43df3bb36e83a7a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:35Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.193946 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d1814f450832909bd5bf1c4749feef46b0919e997cb1d241c988f4ec81051d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e440dc1f7846af93bbe78f17e92d44d2f72ee0fc28c4c3b746ee3f67f0cd899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:35Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.204949 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed10f53b-565a-4d14-a1d8-feabc15f08ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d04378db699fb51e197d67da8fbc728904d522724c2a02158c546497905d106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03515f366b860c28601b042a9a45c022c9a25aa15fab97b3e3698d3d7c1eb37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8v6ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:35Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.218620 4826 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-wtbb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b672fd90-a70c-4f27-b711-e58f269efccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dc6a9e1063a189fc3331ef1bc5bd05ac611295089d1da241ca07c529e63ab7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-wtbb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:35Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.242103 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0580356-3fa0-4446-b20b-afc4164435c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e817cf54f4d9940c02491ebbe29ded4c0e8b3832c315b04eba99a9d36d267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b99d4cba5d569a977b7d05a7190498a4e34fefbf7dc44db8148ba398de0f5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://adf31702fe035db3a963262acd0c20802f8fd7230cf5c01cb578fa88179bd129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://161a31c4d5a9b58bef37d7177019e6eec0b53c956877c212337834672f4ac5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1796ef962408ccd32578b856d0339f68286f421ffbb915e0ae46c7a5989d1958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49
117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:35Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.248114 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.248146 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.248155 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.248191 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.248202 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:35Z","lastTransitionTime":"2026-01-31T07:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.274094 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"661bc240-bc34-44a0-bd2f-72708c4a5dc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc26e2f01400aeeb0956ae2aef52df0251350f193805892e534081b4cac248e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa95fe2bb0adc0053648f26282e011bad0ea1327773688c954b8ffbf222d6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4124eac7bdc757ef7c1f68b28fb1f54a56cda06e9b781a026277efe04172f8e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e986869735ddfd85e88828d0123e35bbc507c1d31efba8c420632539ff4a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:35Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.315293 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d2b49940601c069c38a05027bd5e801dd377d4aed46a502c192ca9941468a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:35Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.350498 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.350526 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.350535 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.350547 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.350557 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:35Z","lastTransitionTime":"2026-01-31T07:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.359020 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5fm7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5fm7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:35Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.396098 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:35Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.453251 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.453284 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.453292 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.453304 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.453313 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:35Z","lastTransitionTime":"2026-01-31T07:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.557205 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.557286 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.557326 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.557360 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.557387 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:35Z","lastTransitionTime":"2026-01-31T07:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.660907 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.660962 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.661017 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.661046 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.661064 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:35Z","lastTransitionTime":"2026-01-31T07:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.762353 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 10:26:20.495572839 +0000 UTC Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.765193 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.765261 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.765289 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.765321 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.765345 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:35Z","lastTransitionTime":"2026-01-31T07:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.809691 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:36:35 crc kubenswrapper[4826]: E0131 07:36:35.810122 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.869713 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.871244 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.871740 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.872060 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.873197 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:35Z","lastTransitionTime":"2026-01-31T07:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.976655 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.976719 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.976743 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.976777 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:35 crc kubenswrapper[4826]: I0131 07:36:35.976799 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:35Z","lastTransitionTime":"2026-01-31T07:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.069875 4826 generic.go:334] "Generic (PLEG): container finished" podID="7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92" containerID="b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e" exitCode=0 Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.069998 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5fm7w" event={"ID":"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92","Type":"ContainerDied","Data":"b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e"} Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.080046 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.080105 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.080126 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.080155 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.080183 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:36Z","lastTransitionTime":"2026-01-31T07:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.081746 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" event={"ID":"5db04412-b62f-417c-91ac-776767d6102f","Type":"ContainerStarted","Data":"7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6"} Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.094079 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:36Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.107288 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tp2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d1fbcfd-bce9-4f1c-bc64-d48b979a95d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91dfef0e05500b5bc7d60a1905667d2ddf67a56af60dc72a55d1b29223d6586a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5x55d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tp2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-01-31T07:36:36Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.128815 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5db04412-b62f-417c-91ac-776767d6102f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\
\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ac5245471
8149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvwnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:36Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.146282 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6lwnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b53fa37-2f8b-49d4-bd96-2bfe008beba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4800822b9eb7af18af4eaf5699b582b98e900e6a645e17ec2a5a09757777ad75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-
api-access-wm9x7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6lwnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:36Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.169681 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63a72bdc-ae2c-4ce3-bad4-877f01e2b370\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b561e0c482987fdc1c08682d00bf4c933f12313569408446d6c8a603c3556ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3caf57a9650c133c55f50d69726a83d40040bc7f173d536444d89f99854942ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\"
:{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30b86aff0d4af7948ead36a9ed2f4fa0fb5888de8ecdfa6eb6ffc1867030babe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf2c3303222b7cdaf3b5aa2c1847ff9e097579edeb4f7333079c340cc95ef46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975a5a5c09f940e9a57b6c6df02fdf725740fb21d70046ad5bbe3f24dec7a0bd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"iserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 07:36:22.700868 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:36:22.705249 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2348209780/tls.crt::/tmp/serving-cert-2348209780/tls.key\\\\\\\"\\\\nI0131 07:36:28.522043 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 07:36:28.528635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 07:36:28.528685 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 07:36:28.528725 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 07:36:28.528736 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 07:36:28.539744 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 07:36:28.539766 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539773 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 07:36:28.539783 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 07:36:28.539787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 07:36:28.539791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 07:36:28.539944 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and 
discovery information is complete\\\\nI0131 07:36:28.545260 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0131 07:36:28.545358 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd3634cc3552a315ee1ef8114db096d456b7a97a92e68ab406ceccdf1eb57f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:36Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.183758 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.183824 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.183840 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.183864 4826 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.183880 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:36Z","lastTransitionTime":"2026-01-31T07:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.184726 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:36Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.198514 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wtbb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b672fd90-a70c-4f27-b711-e58f269efccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dc6a9e1063a189fc3331ef1bc5bd05ac611295089d1da241ca07c529e63ab7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wtbb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:36Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.214039 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37289ae623d633b8d9c41697e0022bac0ef40df6da836d43df3bb36e83a7a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:36Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.228958 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d1814f450832909bd5bf1c4749feef46b0919e997cb1d241c988f4ec81051d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e440dc1f7846af93bbe78f17e92d44d2f72ee0fc28c4c3b746ee3f67f0cd899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:36Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.241700 4826 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed10f53b-565a-4d14-a1d8-feabc15f08ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d04378db699fb51e197d67da8fbc728904d522724c2a02158c546497905d106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03515f366b860c28601b042a9a45c022c9a25aa15fab97b3e3698d3d7c1eb37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8v6ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:36Z is after 
2025-08-24T17:21:41Z" Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.259315 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5fm7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5fm7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:36Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.288291 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.288335 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.288347 4826 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.288367 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.288379 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:36Z","lastTransitionTime":"2026-01-31T07:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.293355 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0580356-3fa0-4446-b20b-afc4164435c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e817cf54f4d9940c02491ebbe29ded4c0e8b3832c315b04eba99a9d36d267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b99d4cba5d569a977b7d05a7190498a4e34fefbf7dc44db8148ba398de0f5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\
\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://adf31702fe035db3a963262acd0c20802f8fd7230cf5c01cb578fa88179bd129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://161a31c4d5a9b58bef37d7177019e6eec0b53c956877c212337834672f4ac5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1796ef962408ccd32578b856d0339f68286f421ffbb915e0ae46c7a5989d1958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:36Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.314810 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"661bc240-bc34-44a0-bd2f-72708c4a5dc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc26e2f01400aeeb0956ae2aef52df0251350f193805892e534081b4cac248e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa95fe2bb0adc0053648f26282e011bad0ea1327773688c954b8ffbf222d6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4124eac7bdc757ef7c1f68b28fb1f54a56cda06e9b781a026277efe04172f8e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e986869735ddfd85e88828d0123e35bbc507c1d31efba8c420632539ff4a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:36Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.330387 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d2b49940601c069c38a05027bd5e801dd377d4aed46a502c192ca9941468a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-31T07:36:36Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.348679 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:36Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.391334 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.391366 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.391379 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.391393 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.391402 4826 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:36Z","lastTransitionTime":"2026-01-31T07:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.494336 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.494370 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.494379 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.494394 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.494405 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:36Z","lastTransitionTime":"2026-01-31T07:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.510099 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:36:36 crc kubenswrapper[4826]: E0131 07:36:36.510274 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:36:44.510261379 +0000 UTC m=+36.364147738 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.600219 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.600309 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.600326 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.600346 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.600365 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:36Z","lastTransitionTime":"2026-01-31T07:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.610697 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:36:36 crc kubenswrapper[4826]: E0131 07:36:36.610848 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 07:36:36 crc kubenswrapper[4826]: E0131 07:36:36.611098 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 07:36:36 crc kubenswrapper[4826]: E0131 07:36:36.611120 4826 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:36:36 crc kubenswrapper[4826]: E0131 07:36:36.611193 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 07:36:44.611168218 +0000 UTC m=+36.465054617 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.611050 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.611268 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.611332 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:36:36 crc kubenswrapper[4826]: E0131 07:36:36.611438 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 07:36:36 crc kubenswrapper[4826]: E0131 07:36:36.611467 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 07:36:36 crc kubenswrapper[4826]: E0131 07:36:36.611482 4826 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:36:36 crc kubenswrapper[4826]: E0131 07:36:36.611525 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 07:36:44.611510148 +0000 UTC m=+36.465396537 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:36:36 crc kubenswrapper[4826]: E0131 07:36:36.611631 4826 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 07:36:36 crc kubenswrapper[4826]: E0131 07:36:36.611725 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 07:36:44.611707174 +0000 UTC m=+36.465593533 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 07:36:36 crc kubenswrapper[4826]: E0131 07:36:36.611870 4826 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 07:36:36 crc kubenswrapper[4826]: E0131 07:36:36.612117 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 07:36:44.612096025 +0000 UTC m=+36.465982394 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.702569 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.702615 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.702628 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.702659 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.702674 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:36Z","lastTransitionTime":"2026-01-31T07:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.763209 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 19:54:24.228713066 +0000 UTC Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.804633 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.804676 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.804688 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.804704 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.804716 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:36Z","lastTransitionTime":"2026-01-31T07:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.807892 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.807920 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:36:36 crc kubenswrapper[4826]: E0131 07:36:36.808025 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:36:36 crc kubenswrapper[4826]: E0131 07:36:36.808104 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.907495 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.907552 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.907568 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.907593 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:36 crc kubenswrapper[4826]: I0131 07:36:36.907611 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:36Z","lastTransitionTime":"2026-01-31T07:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.009847 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.009898 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.009911 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.009929 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.009941 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:37Z","lastTransitionTime":"2026-01-31T07:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.090119 4826 generic.go:334] "Generic (PLEG): container finished" podID="7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92" containerID="bf9c0eb924928bb2a7752b55e6ccb0fccc514359751bdf295f4f88820ac0bddc" exitCode=0 Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.090203 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5fm7w" event={"ID":"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92","Type":"ContainerDied","Data":"bf9c0eb924928bb2a7752b55e6ccb0fccc514359751bdf295f4f88820ac0bddc"} Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.113429 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:37Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.114386 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.114457 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.114479 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.114508 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.114531 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:37Z","lastTransitionTime":"2026-01-31T07:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.137961 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:37Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.187543 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:37Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.204688 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tp2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d1fbcfd-bce9-4f1c-bc64-d48b979a95d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91dfef0e05500b5bc7d60a1905667d2ddf67a56af60dc72a55d1b29223d6586a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5x55d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tp2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:37Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.217138 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.217179 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.217194 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.217213 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.217227 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:37Z","lastTransitionTime":"2026-01-31T07:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.233700 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5db04412-b62f-417c-91ac-776767d6102f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvwnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:37Z 
is after 2025-08-24T17:21:41Z" Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.246214 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6lwnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b53fa37-2f8b-49d4-bd96-2bfe008beba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4800822b9eb7af18af4eaf5699b582b98e900e6a645e17ec2a5a09757777ad75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm9x7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6lwnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:37Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.263253 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63a72bdc-ae2c-4ce3-bad4-877f01e2b370\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b561e0c482987fdc1c08682d00bf4c933f12313569408446d6c8a603c3556ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3caf57a9650c133c55f50d69726a83d40040bc7f173d536444d89f99854942ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30b86aff0d4af7948ead36a9ed2f4fa0fb5888de8ecdfa6eb6ffc1867030babe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf2c3303222b7cdaf3b5aa2c1847ff9e097579edeb4f7333079c340cc95ef46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://975a5a5c09f940e9a57b6c6df02fdf725740fb21d70046ad5bbe3f24dec7a0bd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"iserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 07:36:22.700868 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:36:22.705249 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2348209780/tls.crt::/tmp/serving-cert-2348209780/tls.key\\\\\\\"\\\\nI0131 07:36:28.522043 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 07:36:28.528635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 07:36:28.528685 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 07:36:28.528725 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 07:36:28.528736 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 07:36:28.539744 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 07:36:28.539766 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539773 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 07:36:28.539783 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 07:36:28.539787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 07:36:28.539791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 07:36:28.539944 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 07:36:28.545260 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0131 07:36:28.545358 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd3634cc3552a315ee1ef8114db096d456b7a97a92e68ab406ceccdf1eb57f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:37Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.277070 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d1814f450832909bd5bf1c4749feef46b0919e997cb1d241c988f4ec81051d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e440dc1f7846af93bbe78f17e92d44d2f72ee0fc28c4c3b746ee3f67f0cd899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:37Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.285928 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed10f53b-565a-4d14-a1d8-feabc15f08ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d04378db699fb51e197d67da8fbc728904d522724c2a02158c546497905d106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03515f366b860c28601b042a9a45c022c9a25aa15fab97b3e3698d3d7c1eb37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8v6ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:37Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.302473 4826 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-wtbb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b672fd90-a70c-4f27-b711-e58f269efccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dc6a9e1063a189fc3331ef1bc5bd05ac611295089d1da241ca07c529e63ab7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-wtbb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:37Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.315846 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37289ae623d633b8d9c41697e0022bac0ef40df6da836d43df3bb36e83a7a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:37Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.322297 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.322341 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.322353 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.322371 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.322385 4826 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:37Z","lastTransitionTime":"2026-01-31T07:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.326381 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"661bc240-bc34-44a0-bd2f-72708c4a5dc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc26e2f01400aeeb0956ae2aef52df0251350f193805892e534081b4cac248e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa95fe2bb0adc0053648f26282e011bad0ea1327773688c954b8ffbf222d6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4124eac7bdc757ef7c1f68b28fb1f54a56cda06e9b781a026277efe04172f8e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e986869735ddfd85e88828d0123e35bbc507c1d31efba8c420632539ff4a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:37Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.340615 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d2b49940601c069c38a05027bd5e801dd377d4aed46a502c192ca9941468a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:37Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.353645 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5fm7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf9c0eb924928bb2a7752b55e6ccb0fccc514359751bdf295f4f88820ac0bddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf9c0eb924928bb2a7752b55e6ccb0fccc514359751bdf295f4f88820ac0bddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5fm7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:37Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.370959 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0580356-3fa0-4446-b20b-afc4164435c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e817cf54f4d9940c02491ebbe29ded4c0e8b3832c315b04eba99a9d36d267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b99d4cba5d569a977b7d05a7190498a4e34fefbf7dc44db8148ba398de0f5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://adf31702fe035db3a963262acd0c20802f8fd7230cf5c01cb578fa88179bd129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://161a31c4d5a9b58bef37d7177019e6eec0b53c9
56877c212337834672f4ac5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1796ef962408ccd32578b856d0339f68286f421ffbb915e0ae46c7a5989d1958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:37Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.424590 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.424659 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.424680 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.424704 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.424722 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:37Z","lastTransitionTime":"2026-01-31T07:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.527909 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.528033 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.528066 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.528105 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.528130 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:37Z","lastTransitionTime":"2026-01-31T07:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.630998 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.631058 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.631083 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.631114 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.631137 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:37Z","lastTransitionTime":"2026-01-31T07:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.734669 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.734708 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.734720 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.734737 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.734748 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:37Z","lastTransitionTime":"2026-01-31T07:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.764282 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 06:11:20.264205624 +0000 UTC Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.808726 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:36:37 crc kubenswrapper[4826]: E0131 07:36:37.808909 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.837932 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.838102 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.838161 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.838195 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.838217 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:37Z","lastTransitionTime":"2026-01-31T07:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.943104 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.943185 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.943203 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.943230 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:37 crc kubenswrapper[4826]: I0131 07:36:37.943249 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:37Z","lastTransitionTime":"2026-01-31T07:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.046922 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.047049 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.047076 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.047104 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.047123 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:38Z","lastTransitionTime":"2026-01-31T07:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.100079 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5fm7w" event={"ID":"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92","Type":"ContainerStarted","Data":"baebdb6a236b3414ff5aac56d2f2c04503b7d59c18aa78ef295484087a866b82"} Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.107124 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" event={"ID":"5db04412-b62f-417c-91ac-776767d6102f","Type":"ContainerStarted","Data":"3a52441f1fec1f477e0c704812d8f56cf91a8b76eff95fdc22d42b1b20453e14"} Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.107574 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.107616 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.124264 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wtbb9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b672fd90-a70c-4f27-b711-e58f269efccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dc6a9e1063a189fc3331ef1bc5bd05ac611295089d1da241ca07c529e63ab7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wtbb9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.143793 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.149191 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37289ae623d633b8d9c41697e0022bac0ef40df6da836d43df3bb36e83a7a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.150185 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.150221 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.150236 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.150255 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.150271 4826 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:38Z","lastTransitionTime":"2026-01-31T07:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.151478 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.165444 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d1814f450832909bd5bf1c4749feef46b0919e997cb1d241c988f4ec81051d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e440dc1f7846af93bbe78f17e92d44d2f72ee0fc28c4c3b746ee3f67f0cd899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.181577 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed10f53b-565a-4d14-a1d8-feabc15f08ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d04378db699fb51e197d67da8fbc728904d522724c2a02158c546497905d106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03515f366b860c28601b042a9a45c022c9a25aa15fab97b3e3698d3d7c1eb37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8v6ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.200230 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5fm7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baebdb6a236b3414ff5aac56d2f2c04503b7d59c18aa78ef295484087a866b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverri
de-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf9c0eb924928bb2a7752b55e6ccb0fccc514359751bdf295f4f88820ac0bddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf9c0eb924928bb2a7752b55e6ccb0fccc514359751bdf295f4f88820ac0bddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-5fm7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.224621 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0580356-3fa0-4446-b20b-afc4164435c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e817cf54f4d9940c02491ebbe29ded4c0e8b3832c315b04eba99a9d36d267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b99d4cba5d569a977b7d05a7190498a4e34fefbf7dc44db8148ba398de0f5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://adf31702fe035db3a963262acd0c20802f8fd7230cf5c01cb578fa88179bd129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://161a31c4d5a9b58bef37d7177019e6eec0b53c956877c212337834672f4ac5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1796ef962408ccd32578b856d0339f68286f421ffbb915e0ae46c7a5989d1958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646f
b68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.241314 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"661bc240-bc34-44a0-bd2f-72708c4a5dc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc26e2f01400aeeb0956ae2aef52df0251350f193805892e534081b4cac248e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa95fe2bb0adc0053648f26282e011bad0ea1327773688c954b8ffbf222d6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4124eac7bdc757ef7c1f68b28fb1f54a56cda06e9b781a026277efe04172f8e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e986869735ddfd85e88828d0123e35bbc507c1d31efba8c420632539ff4a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.252329 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.252431 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.252466 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.252543 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.252583 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:38Z","lastTransitionTime":"2026-01-31T07:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.258047 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d2b49940601c069c38a05027bd5e801dd377d4aed46a502c192ca9941468a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.275748 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.292898 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.307420 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tp2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d1fbcfd-bce9-4f1c-bc64-d48b979a95d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91dfef0e05500b5bc7d60a1905667d2ddf67a56af60dc72a55d1b29223d6586a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5x55d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tp2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T07:36:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.333684 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5db04412-b62f-417c-91ac-776767d6102f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\
"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc82
9ad81652fcc61d6763136684af2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvwnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.346798 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6lwnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b53fa37-2f8b-49d4-bd96-2bfe008beba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4800822b9eb7af18af4eaf5699b582b98e900e6a645e17ec2a5a09757777ad75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm9x7\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6lwnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.354486 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.354536 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.354555 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.354578 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.354596 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:38Z","lastTransitionTime":"2026-01-31T07:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.364652 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63a72bdc-ae2c-4ce3-bad4-877f01e2b370\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b561e0c482987fdc1c08682d00bf4c933f12313569408446d6c8a603c3556ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3caf57a9650c133c55f50d69726a83d40040bc7f173d536444d89f99854942ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30b86aff0d4af7948ead36a9ed2f4fa0fb5888de8ecdfa6eb6ffc1867030babe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf2c3303222b7cdaf3b5aa2c1847ff9e097579edeb4f7333079c340cc95ef46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975a5a5c09f940e9a57b6c6df02fdf725740fb21d70046ad5bbe3f24dec7a0bd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"iserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 07:36:22.700868 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:36:22.705249 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2348209780/tls.crt::/tmp/serving-cert-2348209780/tls.key\\\\\\\"\\\\nI0131 07:36:28.522043 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 07:36:28.528635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 07:36:28.528685 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 07:36:28.528725 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 07:36:28.528736 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 07:36:28.539744 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 07:36:28.539766 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539773 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 07:36:28.539783 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 07:36:28.539787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 07:36:28.539791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 07:36:28.539944 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 07:36:28.545260 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0131 07:36:28.545358 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd3634cc3552a315ee1ef8114db096d456b7a97a92e68ab406ceccdf1eb57f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.380816 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.393622 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37289ae623d633b8d9c41697e0022bac0ef40df6da836d43df3bb36e83a7a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.408661 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d1814f450832909bd5bf1c4749feef46b0919e997cb1d241c988f4ec81051d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e440dc1f7846af93bbe78f17e92d44d2f72ee0fc28c4c3b746ee3f67f0cd899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.422031 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed10f53b-565a-4d14-a1d8-feabc15f08ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d04378db699fb51e197d67da8fbc728904d522724c2a02158c546497905d106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03515f366b860c28601b042a9a45c022c9a25aa15fab97b3e3698d3d7c1eb37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8v6ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.434765 4826 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-wtbb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b672fd90-a70c-4f27-b711-e58f269efccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dc6a9e1063a189fc3331ef1bc5bd05ac611295089d1da241ca07c529e63ab7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-wtbb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.456600 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.456667 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.456690 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.456714 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.456729 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:38Z","lastTransitionTime":"2026-01-31T07:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.465180 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0580356-3fa0-4446-b20b-afc4164435c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e817cf54f4d9940c02491ebbe29ded4c0e8b3832c315b04eba99a9d36d267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\
\\"}]},{\\\"containerID\\\":\\\"cri-o://0b99d4cba5d569a977b7d05a7190498a4e34fefbf7dc44db8148ba398de0f5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://adf31702fe035db3a963262acd0c20802f8fd7230cf5c01cb578fa88179bd129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://161a31c4d5a9b58bef37d7177019e6eec0b53c956877c212337834672f4ac5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1796ef962408ccd32578b856d0339f68286f421ffbb915e0ae46c7a5989d1958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061
e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.478928 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"661bc240-bc34-44a0-bd2f-72708c4a5dc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc26e2f01400aeeb0956ae2aef52df0251350f193805892e534081b4cac248e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa95fe2bb0adc0053648f26282e011bad0ea1327773688c954b8ffbf222d6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4124eac7bdc757ef7c1f68b28fb1f54a56cda06e9b781a026277efe04172f8e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e986869735ddfd85e88828d0123e35bbc507c1d31efba8c420632539ff4a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.498415 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d2b49940601c069c38a05027bd5e801dd377d4aed46a502c192ca9941468a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-31T07:36:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.513370 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5fm7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baebdb6a236b3414ff5aac56d2f2c04503b7d59c18aa78ef295484087a866b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e9
dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf9c0eb924928bb2a7752b55e6ccb0fccc514359751bdf295f4f88820ac0bddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf9c0eb924928bb2a7752b55e6ccb0fccc514359751bdf295f4f88820ac0bddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5fm7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.525998 4826 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.552919 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5db04412-b62f-417c-91ac-776767d6102f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a52441f1fec1f477e0c704812d8f56cf91a8b76eff95fdc22d42b1b20453e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvwnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.558925 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.559015 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.559038 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.559074 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.559092 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:38Z","lastTransitionTime":"2026-01-31T07:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.563437 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6lwnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b53fa37-2f8b-49d4-bd96-2bfe008beba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4800822b9eb7af18af4eaf5699b582b98e900e6a645e17ec2a5a09757777ad75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm9x7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6lwnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.580038 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63a72bdc-ae2c-4ce3-bad4-877f01e2b370\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b561e0c482987fdc1c08682d00bf4c933f12313569408446d6c8a603c3556ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3caf57a9650c133c55f50d69726a83d40040bc7f173d536444d89f99854942ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30b86aff0d4af7948ead36a9ed2f4fa0fb5888de8ecdfa6eb6ffc1867030babe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf2c3303222b7cdaf3b5aa2c1847ff9e097579edeb4f7333079c340cc95ef46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://975a5a5c09f940e9a57b6c6df02fdf725740fb21d70046ad5bbe3f24dec7a0bd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"iserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 07:36:22.700868 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:36:22.705249 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2348209780/tls.crt::/tmp/serving-cert-2348209780/tls.key\\\\\\\"\\\\nI0131 07:36:28.522043 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 07:36:28.528635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 07:36:28.528685 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 07:36:28.528725 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 07:36:28.528736 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 07:36:28.539744 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 07:36:28.539766 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539773 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 07:36:28.539783 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 07:36:28.539787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 07:36:28.539791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 07:36:28.539944 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 07:36:28.545260 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0131 07:36:28.545358 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd3634cc3552a315ee1ef8114db096d456b7a97a92e68ab406ceccdf1eb57f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.593058 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.604191 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.614659 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tp2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d1fbcfd-bce9-4f1c-bc64-d48b979a95d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91dfef0e05500b5bc7d60a1905667d2ddf67a56af60dc72a55d1b29223d6586a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5x55d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tp2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T07:36:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.661438 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.661489 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.661502 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.661520 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.661531 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:38Z","lastTransitionTime":"2026-01-31T07:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.764421 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.764478 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.764407 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 06:54:01.356380122 +0000 UTC Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.764496 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.764575 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.764600 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:38Z","lastTransitionTime":"2026-01-31T07:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.807952 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.808065 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:36:38 crc kubenswrapper[4826]: E0131 07:36:38.808109 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:36:38 crc kubenswrapper[4826]: E0131 07:36:38.808225 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.821421 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.834637 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.848296 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.862126 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tp2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d1fbcfd-bce9-4f1c-bc64-d48b979a95d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91dfef0e05500b5bc7d60a1905667d2ddf67a56af60dc72a55d1b29223d6586a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5x55d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tp2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T07:36:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.866350 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.866392 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.866402 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.866417 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.866429 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:38Z","lastTransitionTime":"2026-01-31T07:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.888602 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5db04412-b62f-417c-91ac-776767d6102f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a52441f1fec1f477e0c704812d8f56cf91a8b76
eff95fdc22d42b1b20453e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvwnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.904862 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6lwnf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b53fa37-2f8b-49d4-bd96-2bfe008beba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4800822b9eb7af18af4eaf5699b582b98e900e6a645e17ec2a5a09757777ad75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm9x7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6lwnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.923932 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63a72bdc-ae2c-4ce3-bad4-877f01e2b370\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b561e0c482987fdc1c08682d00bf4c933f12313569408446d6c8a603c3556ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3caf57a9650c133c55f50d69726a83d40040bc7f173d536444d89f99854942ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30b86aff0d4af7948ead36a9ed2f4fa0fb5888de8ecdfa6eb6ffc1867030babe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf2c3303222b7cdaf3b5aa2c1847ff9e097579edeb4f7333079c340cc95ef46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975a5a5c09f940e9a57b6c6df02fdf725740fb21d70046ad5bbe3f24dec7a0bd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"iserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 07:36:22.700868 1 
builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:36:22.705249 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2348209780/tls.crt::/tmp/serving-cert-2348209780/tls.key\\\\\\\"\\\\nI0131 07:36:28.522043 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 07:36:28.528635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 07:36:28.528685 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 07:36:28.528725 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 07:36:28.528736 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 07:36:28.539744 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 07:36:28.539766 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539773 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 07:36:28.539783 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 07:36:28.539787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 07:36:28.539791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 07:36:28.539944 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 07:36:28.545260 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0131 07:36:28.545358 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd3634cc3552a315ee1ef8114db096d456b7a97a92e68ab406ceccdf1eb57f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.938994 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed10f53b-565a-4d14-a1d8-feabc15f08ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d04378db699fb51e197d67da8fbc728904d522724c2a02158c546497905d106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03515f366b860c28601b042a9a45c022c9a25aa15fab97b3e3698d3d7c1eb37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8v6ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.956740 4826 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-wtbb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b672fd90-a70c-4f27-b711-e58f269efccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dc6a9e1063a189fc3331ef1bc5bd05ac611295089d1da241ca07c529e63ab7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-wtbb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.968012 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.968050 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.968060 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.968076 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.968086 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:38Z","lastTransitionTime":"2026-01-31T07:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:38 crc kubenswrapper[4826]: I0131 07:36:38.979178 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37289ae623d633b8d9c41697e0022bac0ef40df6da836d43df3bb36e83a7a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:39 crc kubenswrapper[4826]: I0131 07:36:39.000914 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d1814f450832909bd5bf1c4749feef46b0919e997cb1d241c988f4ec81051d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e440dc1f7846af93bbe78f17e92d44d2f72ee0fc28c4c3b746ee3f67f0cd899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-31T07:36:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:39 crc kubenswrapper[4826]: I0131 07:36:39.016868 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d2b49940601c069c38a05027bd5e801dd377d4aed46a502c192ca9941468a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:39Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:39 crc kubenswrapper[4826]: I0131 07:36:39.037096 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5fm7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baebdb6a236b3414ff5aac56d2f2c04503b7d59c18aa78ef295484087a866b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf9c0eb924928bb2a7752b55e6ccb0fccc514359751bdf295f4f88820ac0bddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf9c0eb924928bb2a7752b55e6ccb0fccc514359751bdf295f4f88820ac0bddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5fm7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:39Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:39 crc kubenswrapper[4826]: I0131 07:36:39.067137 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0580356-3fa0-4446-b20b-afc4164435c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e817cf54f4d9940c02491ebbe29ded4c0e8b3832c315b04eba99a9d36d267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b99d4cba5d569a977b7d05a7190498a4e34fefbf7dc44db8148ba398de0f5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://adf31702fe035db3a963262acd0c20802f8fd7230cf5c01cb578fa88179bd129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://161a31c4d5a9b58bef37d7177019e6eec0b53c9
56877c212337834672f4ac5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1796ef962408ccd32578b856d0339f68286f421ffbb915e0ae46c7a5989d1958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:39Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:39 crc kubenswrapper[4826]: I0131 07:36:39.074373 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:39 crc kubenswrapper[4826]: I0131 07:36:39.074447 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:39 crc kubenswrapper[4826]: I0131 07:36:39.074468 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:39 crc kubenswrapper[4826]: I0131 07:36:39.074492 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:39 crc kubenswrapper[4826]: I0131 07:36:39.074510 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:39Z","lastTransitionTime":"2026-01-31T07:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:39 crc kubenswrapper[4826]: I0131 07:36:39.083400 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"661bc240-bc34-44a0-bd2f-72708c4a5dc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc26e2f01400aeeb0956ae2aef52df0251350f193805892e534081b4cac248e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa95fe2bb0adc0053648f26282e011bad0ea1327773688c954b8ffbf222d6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4124eac7bdc757ef7c1f68b28fb1f54a56cda06e9b781a026277efe04172f8e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e986869735ddfd85e88828d0123e35bbc507c1d31efba8c420632539ff4a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:39Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:39 crc kubenswrapper[4826]: I0131 07:36:39.111807 4826 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 07:36:39 crc kubenswrapper[4826]: I0131 07:36:39.176681 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:39 crc kubenswrapper[4826]: I0131 07:36:39.176735 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:39 crc kubenswrapper[4826]: I0131 07:36:39.176754 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:39 crc kubenswrapper[4826]: I0131 07:36:39.176783 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:39 crc kubenswrapper[4826]: I0131 07:36:39.176807 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:39Z","lastTransitionTime":"2026-01-31T07:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:39 crc kubenswrapper[4826]: I0131 07:36:39.280146 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:39 crc kubenswrapper[4826]: I0131 07:36:39.280222 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:39 crc kubenswrapper[4826]: I0131 07:36:39.280239 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:39 crc kubenswrapper[4826]: I0131 07:36:39.280294 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:39 crc kubenswrapper[4826]: I0131 07:36:39.280312 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:39Z","lastTransitionTime":"2026-01-31T07:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:39 crc kubenswrapper[4826]: I0131 07:36:39.383079 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:39 crc kubenswrapper[4826]: I0131 07:36:39.383128 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:39 crc kubenswrapper[4826]: I0131 07:36:39.383141 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:39 crc kubenswrapper[4826]: I0131 07:36:39.383159 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:39 crc kubenswrapper[4826]: I0131 07:36:39.383171 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:39Z","lastTransitionTime":"2026-01-31T07:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:39 crc kubenswrapper[4826]: I0131 07:36:39.473256 4826 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 31 07:36:39 crc kubenswrapper[4826]: I0131 07:36:39.486425 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:39 crc kubenswrapper[4826]: I0131 07:36:39.486464 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:39 crc kubenswrapper[4826]: I0131 07:36:39.486474 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:39 crc kubenswrapper[4826]: I0131 07:36:39.486488 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:39 crc kubenswrapper[4826]: I0131 07:36:39.486498 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:39Z","lastTransitionTime":"2026-01-31T07:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:39 crc kubenswrapper[4826]: I0131 07:36:39.589202 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:39 crc kubenswrapper[4826]: I0131 07:36:39.589287 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:39 crc kubenswrapper[4826]: I0131 07:36:39.589309 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:39 crc kubenswrapper[4826]: I0131 07:36:39.589342 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:39 crc kubenswrapper[4826]: I0131 07:36:39.589368 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:39Z","lastTransitionTime":"2026-01-31T07:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:39 crc kubenswrapper[4826]: I0131 07:36:39.691936 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:39 crc kubenswrapper[4826]: I0131 07:36:39.691988 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:39 crc kubenswrapper[4826]: I0131 07:36:39.692001 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:39 crc kubenswrapper[4826]: I0131 07:36:39.692016 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:39 crc kubenswrapper[4826]: I0131 07:36:39.692028 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:39Z","lastTransitionTime":"2026-01-31T07:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:39 crc kubenswrapper[4826]: I0131 07:36:39.764570 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 12:14:44.653074627 +0000 UTC Jan 31 07:36:39 crc kubenswrapper[4826]: I0131 07:36:39.794545 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:39 crc kubenswrapper[4826]: I0131 07:36:39.794619 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:39 crc kubenswrapper[4826]: I0131 07:36:39.794634 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:39 crc kubenswrapper[4826]: I0131 07:36:39.794654 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:39 crc kubenswrapper[4826]: I0131 07:36:39.794669 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:39Z","lastTransitionTime":"2026-01-31T07:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:39 crc kubenswrapper[4826]: I0131 07:36:39.808557 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:36:39 crc kubenswrapper[4826]: E0131 07:36:39.808776 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:36:39 crc kubenswrapper[4826]: I0131 07:36:39.897257 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:39 crc kubenswrapper[4826]: I0131 07:36:39.897286 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:39 crc kubenswrapper[4826]: I0131 07:36:39.897295 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:39 crc kubenswrapper[4826]: I0131 07:36:39.897308 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:39 crc kubenswrapper[4826]: I0131 07:36:39.897317 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:39Z","lastTransitionTime":"2026-01-31T07:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:40 crc kubenswrapper[4826]: I0131 07:36:40.000076 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:40 crc kubenswrapper[4826]: I0131 07:36:40.000160 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:40 crc kubenswrapper[4826]: I0131 07:36:40.000190 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:40 crc kubenswrapper[4826]: I0131 07:36:40.000225 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:40 crc kubenswrapper[4826]: I0131 07:36:40.000248 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:40Z","lastTransitionTime":"2026-01-31T07:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:40 crc kubenswrapper[4826]: I0131 07:36:40.102543 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:40 crc kubenswrapper[4826]: I0131 07:36:40.102593 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:40 crc kubenswrapper[4826]: I0131 07:36:40.102604 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:40 crc kubenswrapper[4826]: I0131 07:36:40.102622 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:40 crc kubenswrapper[4826]: I0131 07:36:40.102632 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:40Z","lastTransitionTime":"2026-01-31T07:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:40 crc kubenswrapper[4826]: I0131 07:36:40.117173 4826 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 07:36:40 crc kubenswrapper[4826]: I0131 07:36:40.204535 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:40 crc kubenswrapper[4826]: I0131 07:36:40.204586 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:40 crc kubenswrapper[4826]: I0131 07:36:40.204599 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:40 crc kubenswrapper[4826]: I0131 07:36:40.204618 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:40 crc kubenswrapper[4826]: I0131 07:36:40.204630 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:40Z","lastTransitionTime":"2026-01-31T07:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:40 crc kubenswrapper[4826]: I0131 07:36:40.307350 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:40 crc kubenswrapper[4826]: I0131 07:36:40.307390 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:40 crc kubenswrapper[4826]: I0131 07:36:40.307401 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:40 crc kubenswrapper[4826]: I0131 07:36:40.307416 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:40 crc kubenswrapper[4826]: I0131 07:36:40.307428 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:40Z","lastTransitionTime":"2026-01-31T07:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:40 crc kubenswrapper[4826]: I0131 07:36:40.409795 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:40 crc kubenswrapper[4826]: I0131 07:36:40.409828 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:40 crc kubenswrapper[4826]: I0131 07:36:40.409840 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:40 crc kubenswrapper[4826]: I0131 07:36:40.409855 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:40 crc kubenswrapper[4826]: I0131 07:36:40.409866 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:40Z","lastTransitionTime":"2026-01-31T07:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:40 crc kubenswrapper[4826]: I0131 07:36:40.511754 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:40 crc kubenswrapper[4826]: I0131 07:36:40.512012 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:40 crc kubenswrapper[4826]: I0131 07:36:40.512095 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:40 crc kubenswrapper[4826]: I0131 07:36:40.512159 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:40 crc kubenswrapper[4826]: I0131 07:36:40.512211 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:40Z","lastTransitionTime":"2026-01-31T07:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:40 crc kubenswrapper[4826]: I0131 07:36:40.615876 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:40 crc kubenswrapper[4826]: I0131 07:36:40.615955 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:40 crc kubenswrapper[4826]: I0131 07:36:40.616021 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:40 crc kubenswrapper[4826]: I0131 07:36:40.616051 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:40 crc kubenswrapper[4826]: I0131 07:36:40.616069 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:40Z","lastTransitionTime":"2026-01-31T07:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:40 crc kubenswrapper[4826]: I0131 07:36:40.718873 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:40 crc kubenswrapper[4826]: I0131 07:36:40.719148 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:40 crc kubenswrapper[4826]: I0131 07:36:40.719262 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:40 crc kubenswrapper[4826]: I0131 07:36:40.719346 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:40 crc kubenswrapper[4826]: I0131 07:36:40.719423 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:40Z","lastTransitionTime":"2026-01-31T07:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:40 crc kubenswrapper[4826]: I0131 07:36:40.764721 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 14:51:20.689269023 +0000 UTC Jan 31 07:36:40 crc kubenswrapper[4826]: I0131 07:36:40.808227 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:36:40 crc kubenswrapper[4826]: E0131 07:36:40.808428 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:36:40 crc kubenswrapper[4826]: I0131 07:36:40.809182 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:36:40 crc kubenswrapper[4826]: E0131 07:36:40.809444 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:36:40 crc kubenswrapper[4826]: I0131 07:36:40.822788 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:40 crc kubenswrapper[4826]: I0131 07:36:40.822843 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:40 crc kubenswrapper[4826]: I0131 07:36:40.822856 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:40 crc kubenswrapper[4826]: I0131 07:36:40.822876 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:40 crc kubenswrapper[4826]: I0131 07:36:40.822888 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:40Z","lastTransitionTime":"2026-01-31T07:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:40 crc kubenswrapper[4826]: I0131 07:36:40.925482 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:40 crc kubenswrapper[4826]: I0131 07:36:40.925551 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:40 crc kubenswrapper[4826]: I0131 07:36:40.925570 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:40 crc kubenswrapper[4826]: I0131 07:36:40.925595 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:40 crc kubenswrapper[4826]: I0131 07:36:40.925612 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:40Z","lastTransitionTime":"2026-01-31T07:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.028949 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.029052 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.029070 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.029093 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.029112 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:41Z","lastTransitionTime":"2026-01-31T07:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.077495 4826 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.123176 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qvwnb_5db04412-b62f-417c-91ac-776767d6102f/ovnkube-controller/0.log" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.127944 4826 generic.go:334] "Generic (PLEG): container finished" podID="5db04412-b62f-417c-91ac-776767d6102f" containerID="3a52441f1fec1f477e0c704812d8f56cf91a8b76eff95fdc22d42b1b20453e14" exitCode=1 Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.128039 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" event={"ID":"5db04412-b62f-417c-91ac-776767d6102f","Type":"ContainerDied","Data":"3a52441f1fec1f477e0c704812d8f56cf91a8b76eff95fdc22d42b1b20453e14"} Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.129564 4826 scope.go:117] "RemoveContainer" containerID="3a52441f1fec1f477e0c704812d8f56cf91a8b76eff95fdc22d42b1b20453e14" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.132062 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.132108 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.132124 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.132145 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.132161 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:41Z","lastTransitionTime":"2026-01-31T07:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.148419 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed10f53b-565a-4d14-a1d8-feabc15f08ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d04378db699fb51e197d67da8fbc728904d522724c2a02158c546497905d106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03515f366b860c28601b042a9a45c022c9a25aa15fab97b3e3698d3d7c1eb37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8v6ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:41Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.163581 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wtbb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b672fd90-a70c-4f27-b711-e58f269efccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dc6a9e1063a189fc3331ef1bc5bd05ac611295089d1da241ca07c529e63ab7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wtbb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:41Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.172114 4826 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.179765 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37289ae623d633b8d9c41697e0022bac0ef40df6da836d43df3bb36e83a7a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:41Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.201465 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d1814f450832909bd5bf1c4749feef46b0919e997cb1d241c988f4ec81051d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e440dc1f7846af93bbe78f17e92d44d2f72ee0fc28c4c3b746ee3f67f0cd899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:41Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.216349 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d2b49940601c069c38a05027bd5e801dd377d4aed46a502c192ca9941468a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:41Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.236920 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5fm7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baebdb6a236b3414ff5aac56d2f2c04503b7d59c18aa78ef295484087a866b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf9c0eb924928bb2a7752b55e6ccb0fccc514359751bdf295f4f88820ac0bddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf9c0eb924928bb2a7752b55e6ccb0fccc514359751bdf295f4f88820ac0bddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5fm7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:41Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.238796 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.238846 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:41 crc 
kubenswrapper[4826]: I0131 07:36:41.238864 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.238889 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.238905 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:41Z","lastTransitionTime":"2026-01-31T07:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.259741 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0580356-3fa0-4446-b20b-afc4164435c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e817cf54f4d9940c02491ebbe29ded4c0e8b3832c315b04eba99a9d36d267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b99d4cba5d569a977b7d05a7190498a4e34fefbf7dc44db8148ba398de0f5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://adf31702fe035db3a963262acd0c20802f8fd7230cf5c01cb578fa88179bd129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://161a31c4d5a9b58bef37d7177019e6eec0b53c956877c212337834672f4ac5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1796ef962408ccd32578b856d0339f68286f421ffbb915e0ae46c7a5989d1958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:41Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.273433 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"661bc240-bc34-44a0-bd2f-72708c4a5dc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc26e2f01400aeeb0956ae2aef52df0251350f193805892e534081b4cac248e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa95fe2bb0adc0053648f26282e011bad0ea1327773688c954b8ffbf222d6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4124eac7bdc757ef7c1f68b28fb1f54a56cda06e9b781a026277efe04172f8e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e986869735ddfd85e88828d0123e35bbc507c1d31efba8c420632539ff4a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:41Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.286197 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:41Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.298479 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:41Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.313190 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:41Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.323465 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tp2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d1fbcfd-bce9-4f1c-bc64-d48b979a95d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91dfef0e05500b5bc7d60a1905667d2ddf67a56af60dc72a55d1b29223d6586a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5x55d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tp2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T07:36:41Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.341042 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.341078 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.341087 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.341101 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.341110 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:41Z","lastTransitionTime":"2026-01-31T07:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.347550 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5db04412-b62f-417c-91ac-776767d6102f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a52441f1fec1f477e0c704812d8f56cf91a8b76
eff95fdc22d42b1b20453e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a52441f1fec1f477e0c704812d8f56cf91a8b76eff95fdc22d42b1b20453e14\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:36:40Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 07:36:40.324549 6108 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 07:36:40.324570 6108 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 07:36:40.324597 6108 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 07:36:40.324605 6108 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 07:36:40.324765 6108 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 07:36:40.324815 6108 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 07:36:40.324830 6108 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 07:36:40.324837 6108 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 07:36:40.324840 6108 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 07:36:40.324855 6108 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 07:36:40.324854 6108 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 07:36:40.324874 6108 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 07:36:40.324896 6108 factory.go:656] Stopping watch factory\\\\nI0131 07:36:40.324899 6108 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 07:36:40.324916 6108 ovnkube.go:599] Stopped ovnkube\\\\nI0131 
07:36:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvwnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:41Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.357202 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6lwnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b53fa37-2f8b-49d4-bd96-2bfe008beba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4800822b9eb7af18af4eaf5699b582b98e900e6a645e17ec2a5a09757777ad75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm9x7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.1
68.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6lwnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:41Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.369543 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63a72bdc-ae2c-4ce3-bad4-877f01e2b370\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b561e0c482987fdc1c08682d00bf4c933f12313569408446d6c8a603c3556ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3caf57a9650c133c55f50d69726a83d40040bc7f173d536444d89f99854942ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/st
atic-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30b86aff0d4af7948ead36a9ed2f4fa0fb5888de8ecdfa6eb6ffc1867030babe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf2c3303222b7cdaf3b5aa2c1847ff9e097579edeb4f7333079c340cc95ef46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975a5a5c09f940e9a57b6c6df02fdf725740fb21d70046ad5bbe3f24dec7a0bd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"iserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 07:36:22.700868 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:36:22.705249 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2348209780/tls.crt::/tmp/serving-cert-2348209780/tls.key\\\\\\\"\\\\nI0131 07:36:28.522043 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 07:36:28.528635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 07:36:28.528685 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 07:36:28.528725 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 07:36:28.528736 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 07:36:28.539744 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 07:36:28.539766 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539773 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 07:36:28.539783 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 07:36:28.539787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 07:36:28.539791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 07:36:28.539944 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 07:36:28.545260 1 requestheader_controller.go:172] Starting 
RequestHeaderAuthRequestController\\\\nF0131 07:36:28.545358 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd3634cc3552a315ee1ef8114db096d456b7a97a92e68ab406ceccdf1eb57f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:41Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.444197 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.444256 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.444272 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.444295 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 
07:36:41.444312 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:41Z","lastTransitionTime":"2026-01-31T07:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.547074 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.547134 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.547153 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.547178 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.547197 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:41Z","lastTransitionTime":"2026-01-31T07:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.650012 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.650062 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.650073 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.650091 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.650106 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:41Z","lastTransitionTime":"2026-01-31T07:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.752646 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.752694 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.752747 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.752774 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.752792 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:41Z","lastTransitionTime":"2026-01-31T07:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.765878 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 21:44:18.681536943 +0000 UTC Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.808656 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:36:41 crc kubenswrapper[4826]: E0131 07:36:41.809109 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.855341 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.855381 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.855389 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.855401 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.855410 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:41Z","lastTransitionTime":"2026-01-31T07:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.957491 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.957533 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.957549 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.957572 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:41 crc kubenswrapper[4826]: I0131 07:36:41.957588 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:41Z","lastTransitionTime":"2026-01-31T07:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.060280 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.060345 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.060359 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.060375 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.060387 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:42Z","lastTransitionTime":"2026-01-31T07:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.132904 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qvwnb_5db04412-b62f-417c-91ac-776767d6102f/ovnkube-controller/0.log" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.135994 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" event={"ID":"5db04412-b62f-417c-91ac-776767d6102f","Type":"ContainerStarted","Data":"e9582c6ad791fee2ee8c393be81fbdbd3d199c4e6b209271220ca44dd3486e32"} Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.136101 4826 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.143327 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7xbj"] Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.144010 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7xbj" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.146479 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.146744 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.162621 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.162671 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.162685 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.162706 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.162721 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:42Z","lastTransitionTime":"2026-01-31T07:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.164205 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37289ae623d633b8d9c41697e0022bac0ef40df6da836d43df3bb36e83a7a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:42Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.178820 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d1814f450832909bd5bf1c4749feef46b0919e997cb1d241c988f4ec81051d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e440dc1f7846af93bbe78f17e92d44d2f72ee0fc28c4c3b746ee3f67f0cd899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:42Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.192361 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed10f53b-565a-4d14-a1d8-feabc15f08ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d04378db699fb51e197d67da8fbc728904d522724c2a02158c546497905d106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03515f366b860c28601b042a9a45c022c9a25aa15fab97b3e3698d3d7c1eb37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8v6ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:42Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.208386 4826 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-wtbb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b672fd90-a70c-4f27-b711-e58f269efccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dc6a9e1063a189fc3331ef1bc5bd05ac611295089d1da241ca07c529e63ab7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-wtbb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:42Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.234573 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0580356-3fa0-4446-b20b-afc4164435c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e817cf54f4d9940c02491ebbe29ded4c0e8b3832c315b04eba99a9d36d267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b99d4cba5d569a977b7d05a7190498a4e34fefbf7dc44db8148ba398de0f5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://adf31702fe035db3a963262acd0c20802f8fd7230cf5c01cb578fa88179bd129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://161a31c4d5a9b58bef37d7177019e6eec0b53c956877c212337834672f4ac5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1796ef962408ccd32578b856d0339f68286f421ffbb915e0ae46c7a5989d1958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49
117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:42Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.251827 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"661bc240-bc34-44a0-bd2f-72708c4a5dc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc26e2f01400aeeb0956ae2aef52df0251350f193805892e534081b4cac248e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa95fe2bb0adc0053648f26282e011bad0ea1327773688c954b8ffbf222d6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4124eac7bdc757ef7c1f68b28fb1f54a56cda06e9b781a026277efe04172f8e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e986869735ddfd85e88828d0123e35bbc507c1d31efba8c420632539ff4a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:42Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.265154 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.265209 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.265225 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.265245 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.265260 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:42Z","lastTransitionTime":"2026-01-31T07:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.273933 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d2b49940601c069c38a05027bd5e801dd377d4aed46a502c192ca9941468a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:42Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.279663 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/32b69655-b895-4001-8f50-17a9d73056f9-env-overrides\") pod \"ovnkube-control-plane-749d76644c-h7xbj\" (UID: \"32b69655-b895-4001-8f50-17a9d73056f9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7xbj" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.279899 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/32b69655-b895-4001-8f50-17a9d73056f9-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-h7xbj\" (UID: \"32b69655-b895-4001-8f50-17a9d73056f9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7xbj" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.280079 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6xcl\" (UniqueName: \"kubernetes.io/projected/32b69655-b895-4001-8f50-17a9d73056f9-kube-api-access-l6xcl\") pod \"ovnkube-control-plane-749d76644c-h7xbj\" 
(UID: \"32b69655-b895-4001-8f50-17a9d73056f9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7xbj" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.280233 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/32b69655-b895-4001-8f50-17a9d73056f9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-h7xbj\" (UID: \"32b69655-b895-4001-8f50-17a9d73056f9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7xbj" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.298623 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5fm7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baebdb6a236b3414ff5aac56d2f2c04503b7d59c18aa78ef295484087a866b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79
f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf9c0eb924928bb2a7752b55e6ccb0fccc514359751bdf295f4f88820ac0bddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf9c0eb924928bb2a7752b55e6ccb0fccc514359751bdf295f4f88820ac0bddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\
\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5fm7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:42Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.311896 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:42Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.323658 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6lwnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b53fa37-2f8b-49d4-bd96-2bfe008beba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4800822b9eb7af18af4eaf5699b582b98e900e6a645e17ec2a5a09757777ad75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm9x7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6lwnf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:42Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.342798 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63a72bdc-ae2c-4ce3-bad4-877f01e2b370\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b561e0c482987fdc1c08682d00bf4c933f12313569408446d6c8a603c3556ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3caf57a9650c133c55f50d69726a83d40040bc7f173d536444d89f99854942ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30b86aff0d4af7948ead36a9ed2f4fa0fb5888de8ecdfa6eb6ffc1867030babe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf2c3303222b7cdaf3b5aa2c1847ff9e097579edeb4f7333079c340cc95ef46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975a5a5c09f940e9a57b6c6df02fdf725740fb21d70046ad5bbe3f24dec7a0bd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"iserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 07:36:22.700868 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:36:22.705249 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2348209780/tls.crt::/tmp/serving-cert-2348209780/tls.key\\\\\\\"\\\\nI0131 07:36:28.522043 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 07:36:28.528635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 07:36:28.528685 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 07:36:28.528725 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 07:36:28.528736 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 07:36:28.539744 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 07:36:28.539766 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539773 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 07:36:28.539783 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 07:36:28.539787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 07:36:28.539791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 07:36:28.539944 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 07:36:28.545260 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0131 07:36:28.545358 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd3634cc3552a315ee1ef8114db096d456b7a97a92e68ab406ceccdf1eb57f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:42Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.363401 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:42Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.372074 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.372113 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.372127 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.372147 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.372161 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:42Z","lastTransitionTime":"2026-01-31T07:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.381489 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/32b69655-b895-4001-8f50-17a9d73056f9-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-h7xbj\" (UID: \"32b69655-b895-4001-8f50-17a9d73056f9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7xbj" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.381559 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6xcl\" (UniqueName: \"kubernetes.io/projected/32b69655-b895-4001-8f50-17a9d73056f9-kube-api-access-l6xcl\") pod \"ovnkube-control-plane-749d76644c-h7xbj\" (UID: \"32b69655-b895-4001-8f50-17a9d73056f9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7xbj" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.381596 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/32b69655-b895-4001-8f50-17a9d73056f9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-h7xbj\" (UID: \"32b69655-b895-4001-8f50-17a9d73056f9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7xbj" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.381662 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/32b69655-b895-4001-8f50-17a9d73056f9-env-overrides\") pod \"ovnkube-control-plane-749d76644c-h7xbj\" (UID: \"32b69655-b895-4001-8f50-17a9d73056f9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7xbj" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.382562 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/32b69655-b895-4001-8f50-17a9d73056f9-env-overrides\") pod \"ovnkube-control-plane-749d76644c-h7xbj\" (UID: \"32b69655-b895-4001-8f50-17a9d73056f9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7xbj" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.382699 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/32b69655-b895-4001-8f50-17a9d73056f9-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-h7xbj\" (UID: \"32b69655-b895-4001-8f50-17a9d73056f9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7xbj" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.389501 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/32b69655-b895-4001-8f50-17a9d73056f9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-h7xbj\" (UID: \"32b69655-b895-4001-8f50-17a9d73056f9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7xbj" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.393870 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:42Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.413659 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6xcl\" (UniqueName: \"kubernetes.io/projected/32b69655-b895-4001-8f50-17a9d73056f9-kube-api-access-l6xcl\") pod \"ovnkube-control-plane-749d76644c-h7xbj\" (UID: \"32b69655-b895-4001-8f50-17a9d73056f9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7xbj" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.427330 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tp2c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d1fbcfd-bce9-4f1c-bc64-d48b979a95d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91dfef0e05500b5bc7d60a1905667d2ddf67a56af60dc72a55d1b29223d6586a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5x55d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tp2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:42Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.452689 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5db04412-b62f-417c-91ac-776767d6102f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9582c6ad791fee2ee8c393be81fbdbd3d199c4e6b209271220ca44dd3486e32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a52441f1fec1f477e0c704812d8f56cf91a8b76eff95fdc22d42b1b20453e14\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:36:40Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 07:36:40.324549 6108 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 07:36:40.324570 6108 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 07:36:40.324597 6108 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 07:36:40.324605 6108 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 07:36:40.324765 6108 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 07:36:40.324815 6108 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 07:36:40.324830 6108 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 07:36:40.324837 6108 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 07:36:40.324840 6108 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 07:36:40.324855 6108 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 07:36:40.324854 6108 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 07:36:40.324874 6108 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 07:36:40.324896 6108 factory.go:656] Stopping watch factory\\\\nI0131 07:36:40.324899 6108 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 07:36:40.324916 6108 ovnkube.go:599] Stopped ovnkube\\\\nI0131 
07:36:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvwnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:42Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.459602 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7xbj" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.470269 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wtbb9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b672fd90-a70c-4f27-b711-e58f269efccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dc6a9e1063a189fc3331ef1bc5bd05ac611295089d1da241ca07c529e63ab7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wtbb9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:42Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.477588 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.477620 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.477629 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.477645 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.477656 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:42Z","lastTransitionTime":"2026-01-31T07:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.494748 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37289ae623d633b8d9c41697e0022bac0ef40df6da836d43df3bb36e83a7a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:42Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.506918 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d1814f450832909bd5bf1c4749feef46b0919e997cb1d241c988f4ec81051d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e440dc1f7846af93bbe78f17e92d44d2f72ee0fc28c4c3b746ee3f67f0cd899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:42Z is after 
2025-08-24T17:21:41Z" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.516887 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed10f53b-565a-4d14-a1d8-feabc15f08ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d04378db699fb51e197d67da8fbc728904d522724c2a02158c546497905d106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03515f366b860c28601b042a9a45c022c9a25aa15fab97b3e3698d3d7c1eb37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8v6ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:42Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.543218 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5fm7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baebdb6a236b3414ff5aac56d2f2c04503b7d59c18aa78ef295484087a866b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:34Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf9c0eb924928bb2a7752b55e6ccb0fccc514359751bdf295f4f88820ac0bddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf9c0eb924928bb2a7752b55e6ccb0fccc514359751bdf295f4f88820ac0bddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5fm7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:42Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:42 crc 
kubenswrapper[4826]: I0131 07:36:42.565401 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0580356-3fa0-4446-b20b-afc4164435c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e817cf54f4d9940c02491ebbe29ded4c0e8b3832c315b04eba99a9d36d267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b99d4cba5d569a977b7d05a7190498a4e34fefbf7dc44db8148ba398de0f5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://adf31702fe035db3a963262acd0c20802f8fd7230cf5c01cb578fa88179bd129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://161a31c4d5a9b58bef37d7177019e6eec0b53c956877c212337834672f4ac5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1796ef962408ccd32578b856d0339f68286f421ffbb915e0ae46c7a5989d1958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:11Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:42Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.578240 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"661bc240-bc34-44a0-bd2f-72708c4a5dc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc26e2f01400aeeb0956ae2aef52df0251350f193805892e534081b4cac248e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa95fe2bb0adc0053648f26282e011bad0ea1327773688c954b8ffbf222d6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4124eac7bdc757ef7c1f68b28fb1f54a56cda06e9b781a026277efe04172f8e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e986869735ddfd85e88828d0123e35bbc507c1d31efba8c420632539ff4a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:42Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.579872 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.579911 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.579927 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.580004 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.580139 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:42Z","lastTransitionTime":"2026-01-31T07:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.604003 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d2b49940601c069c38a05027bd5e801dd377d4aed46a502c192ca9941468a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:42Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.619201 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:42Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.632305 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7xbj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32b69655-b895-4001-8f50-17a9d73056f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6xcl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6xcl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h7xbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:42Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.644595 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:42Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.657627 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tp2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d1fbcfd-bce9-4f1c-bc64-d48b979a95d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91dfef0e05500b5bc7d60a1905667d2ddf67a56af60dc72a55d1b29223d6586a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5x55d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tp2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:42Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.681935 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5db04412-b62f-417c-91ac-776767d6102f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9582c6ad791fee2ee8c393be81fbdbd3d199c4e
6b209271220ca44dd3486e32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a52441f1fec1f477e0c704812d8f56cf91a8b76eff95fdc22d42b1b20453e14\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:36:40Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 07:36:40.324549 6108 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 07:36:40.324570 6108 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 07:36:40.324597 6108 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 07:36:40.324605 6108 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 07:36:40.324765 6108 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 07:36:40.324815 6108 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 07:36:40.324830 6108 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 07:36:40.324837 6108 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 07:36:40.324840 6108 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 07:36:40.324855 6108 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 07:36:40.324854 6108 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 07:36:40.324874 6108 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 07:36:40.324896 6108 factory.go:656] Stopping watch factory\\\\nI0131 07:36:40.324899 6108 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 07:36:40.324916 6108 ovnkube.go:599] Stopped ovnkube\\\\nI0131 
07:36:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvwnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:42Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.682933 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.683020 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.683039 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.683063 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.683080 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:42Z","lastTransitionTime":"2026-01-31T07:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.695070 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6lwnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b53fa37-2f8b-49d4-bd96-2bfe008beba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4800822b9eb7af18af4eaf5699b582b98e900e6a645e17ec2a5a09757777ad75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm9x7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6lwnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:42Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.709112 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63a72bdc-ae2c-4ce3-bad4-877f01e2b370\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b561e0c482987fdc1c08682d00bf4c933f12313569408446d6c8a603c3556ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3caf57a9650c133c55f50d69726a83d40040bc7f173d536444d89f99854942ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30b86aff0d4af7948ead36a9ed2f4fa0fb5888de8ecdfa6eb6ffc1867030babe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf2c3303222b7cdaf3b5aa2c1847ff9e097579edeb4f7333079c340cc95ef46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://975a5a5c09f940e9a57b6c6df02fdf725740fb21d70046ad5bbe3f24dec7a0bd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"iserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 07:36:22.700868 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:36:22.705249 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2348209780/tls.crt::/tmp/serving-cert-2348209780/tls.key\\\\\\\"\\\\nI0131 07:36:28.522043 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 07:36:28.528635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 07:36:28.528685 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 07:36:28.528725 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 07:36:28.528736 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 07:36:28.539744 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 07:36:28.539766 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539773 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 07:36:28.539783 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 07:36:28.539787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 07:36:28.539791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 07:36:28.539944 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 07:36:28.545260 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0131 07:36:28.545358 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd3634cc3552a315ee1ef8114db096d456b7a97a92e68ab406ceccdf1eb57f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:42Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.722426 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:42Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.766843 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 16:51:50.5362265 +0000 UTC Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.785524 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.785554 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.785565 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.785580 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.785592 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:42Z","lastTransitionTime":"2026-01-31T07:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.808437 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:36:42 crc kubenswrapper[4826]: E0131 07:36:42.808564 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.808908 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:36:42 crc kubenswrapper[4826]: E0131 07:36:42.808957 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.888117 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.888162 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.888173 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.888191 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.888202 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:42Z","lastTransitionTime":"2026-01-31T07:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.990378 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.990425 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.990435 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.990450 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:42 crc kubenswrapper[4826]: I0131 07:36:42.990460 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:42Z","lastTransitionTime":"2026-01-31T07:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.093152 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.093190 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.093199 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.093215 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.093223 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:43Z","lastTransitionTime":"2026-01-31T07:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.141655 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7xbj" event={"ID":"32b69655-b895-4001-8f50-17a9d73056f9","Type":"ContainerStarted","Data":"15eef89d03854f4fb45924748ccf1c12133be03efd45374daa2cceafd673136a"} Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.141699 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7xbj" event={"ID":"32b69655-b895-4001-8f50-17a9d73056f9","Type":"ContainerStarted","Data":"5e6e04fe556789c17beb1c110cda356f63730016f0d8f00d968ec9af1a10f658"} Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.141722 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7xbj" event={"ID":"32b69655-b895-4001-8f50-17a9d73056f9","Type":"ContainerStarted","Data":"91af42078225515a6316b3c1e1e6f435c1a8f8500a0ceab74bdd46e39b894389"} Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.143094 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qvwnb_5db04412-b62f-417c-91ac-776767d6102f/ovnkube-controller/1.log" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.143638 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qvwnb_5db04412-b62f-417c-91ac-776767d6102f/ovnkube-controller/0.log" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.146462 4826 generic.go:334] "Generic (PLEG): container finished" podID="5db04412-b62f-417c-91ac-776767d6102f" containerID="e9582c6ad791fee2ee8c393be81fbdbd3d199c4e6b209271220ca44dd3486e32" exitCode=1 Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.146499 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" event={"ID":"5db04412-b62f-417c-91ac-776767d6102f","Type":"ContainerDied","Data":"e9582c6ad791fee2ee8c393be81fbdbd3d199c4e6b209271220ca44dd3486e32"} Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.146546 4826 scope.go:117] "RemoveContainer" containerID="3a52441f1fec1f477e0c704812d8f56cf91a8b76eff95fdc22d42b1b20453e14" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.147507 4826 scope.go:117] "RemoveContainer" containerID="e9582c6ad791fee2ee8c393be81fbdbd3d199c4e6b209271220ca44dd3486e32" Jan 31 07:36:43 crc kubenswrapper[4826]: E0131 07:36:43.147726 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-qvwnb_openshift-ovn-kubernetes(5db04412-b62f-417c-91ac-776767d6102f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" podUID="5db04412-b62f-417c-91ac-776767d6102f" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.156443 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:43Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.169733 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tp2c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d1fbcfd-bce9-4f1c-bc64-d48b979a95d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91dfef0e05500b5bc7d60a1905667d2ddf67a56af60dc72a55d1b29223d6586a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5x55d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tp2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:43Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.194587 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5db04412-b62f-417c-91ac-776767d6102f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9582c6ad791fee2ee8c393be81fbdbd3d199c4e6b209271220ca44dd3486e32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a52441f1fec1f477e0c704812d8f56cf91a8b76eff95fdc22d42b1b20453e14\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:36:40Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 07:36:40.324549 6108 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 07:36:40.324570 6108 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 07:36:40.324597 6108 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 07:36:40.324605 6108 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 07:36:40.324765 6108 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 07:36:40.324815 6108 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 07:36:40.324830 6108 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 07:36:40.324837 6108 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 07:36:40.324840 6108 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 07:36:40.324855 6108 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 07:36:40.324854 6108 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 07:36:40.324874 6108 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 07:36:40.324896 6108 factory.go:656] Stopping watch factory\\\\nI0131 07:36:40.324899 6108 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 07:36:40.324916 6108 ovnkube.go:599] Stopped ovnkube\\\\nI0131 
07:36:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvwnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:43Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.196219 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.196277 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.196301 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.196322 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.196335 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:43Z","lastTransitionTime":"2026-01-31T07:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.207551 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6lwnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b53fa37-2f8b-49d4-bd96-2bfe008beba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4800822b9eb7af18af4eaf5699b582b98e900e6a645e17ec2a5a09757777ad75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm9x7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6lwnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:43Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.222120 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63a72bdc-ae2c-4ce3-bad4-877f01e2b370\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b561e0c482987fdc1c08682d00bf4c933f12313569408446d6c8a603c3556ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3caf57a9650c133c55f50d69726a83d40040bc7f173d536444d89f99854942ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30b86aff0d4af7948ead36a9ed2f4fa0fb5888de8ecdfa6eb6ffc1867030babe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf2c3303222b7cdaf3b5aa2c1847ff9e097579edeb4f7333079c340cc95ef46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://975a5a5c09f940e9a57b6c6df02fdf725740fb21d70046ad5bbe3f24dec7a0bd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"iserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 07:36:22.700868 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:36:22.705249 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2348209780/tls.crt::/tmp/serving-cert-2348209780/tls.key\\\\\\\"\\\\nI0131 07:36:28.522043 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 07:36:28.528635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 07:36:28.528685 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 07:36:28.528725 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 07:36:28.528736 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 07:36:28.539744 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 07:36:28.539766 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539773 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 07:36:28.539783 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 07:36:28.539787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 07:36:28.539791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 07:36:28.539944 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 07:36:28.545260 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0131 07:36:28.545358 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd3634cc3552a315ee1ef8114db096d456b7a97a92e68ab406ceccdf1eb57f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:43Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.233531 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:43Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.247578 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wtbb9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b672fd90-a70c-4f27-b711-e58f269efccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dc6a9e1063a189fc3331ef1bc5bd05ac611295089d1da241ca07c529e63ab7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wtbb9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:43Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.261297 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37289ae623d633b8d9c41697e0022bac0ef40df6da836d43df3bb36e83a7a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:43Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.273882 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d1814f450832909bd5bf1c4749feef46b0919e997cb1d241c988f4ec81051d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e440dc1f7846af93bbe78f17e92d44d2f72ee0fc28c4c3b746ee3f67f0cd899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:43Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.284533 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed10f53b-565a-4d14-a1d8-feabc15f08ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d04378db699fb51e197d67da8fbc728904d522724c2a02158c546497905d106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03515f366b860c28601b042a9a45c022c9a25aa15fab97b3e3698d3d7c1eb37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8v6ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:43Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.298136 4826 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5fm7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baebdb6a236b3414ff5aac56d2f2c04503b7d59c18aa78ef295484087a866b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf9c0eb924928bb2a7752b55e6ccb0fccc514359751bdf295f4f88820ac0bddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf9c0eb924928bb2a7752b55e6ccb0fccc514359751bdf295f4f88820ac0bddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5fm7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:43Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.298646 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.298686 4826 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.298695 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.298710 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.298722 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:43Z","lastTransitionTime":"2026-01-31T07:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.316168 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0580356-3fa0-4446-b20b-afc4164435c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e817cf54f4d9940c02491ebbe29ded4c0e8b3832c315b04eba99a9d36d267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b99d4cba5d569a977b7d05a7190498a4e34fefbf7dc44db8148ba398de0f5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://adf31702fe035db3a963262acd0c20802f8fd7230cf5c01cb578fa88179bd129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://161a31c4d5a9b58bef37d7177019e6eec0b53c956877c212337834672f4ac5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1796ef962408ccd32578b856d0339f68286f421ffbb915e0ae46c7a5989d1958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-
01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:43Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.326841 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"661bc240-bc34-44a0-bd2f-72708c4a5dc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc26e2f01400aeeb0956ae2aef52df0251350f193805892e534081b4cac248e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa95fe2bb0adc0053648f26282e011bad0ea1327773688c954b8ffbf222d6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4124eac7bdc757ef7c1f68b28fb1f54a56cda06e9b781a026277efe04172f8e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e986869735ddfd85e88828d0123e35bbc507c1d31efba8c420632539ff4a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:43Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.338344 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d2b49940601c069c38a05027bd5e801dd377d4aed46a502c192ca9941468a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-31T07:36:43Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.354845 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:43Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.367159 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7xbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32b69655-b895-4001-8f50-17a9d73056f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6e04fe556789c17beb1c110cda356f63730016f0d8f00d968ec9af1a10f658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6xcl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15eef89d03854f4fb45924748ccf1c12133be03efd45374daa2cceafd673136a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6xcl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h7xbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:43Z is after 2025-08-24T17:21:41Z" Jan 31 
07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.379358 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:43Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.396262 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7xbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32b69655-b895-4001-8f50-17a9d73056f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6e04fe556789c17beb1c110cda356f63730016f0d8f00d968ec9af1a10f658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6xcl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15eef89d03854f4fb45924748ccf1c12133be03efd45374daa2cceafd673136a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6xcl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h7xbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:43Z is after 2025-08-24T17:21:41Z" Jan 31 
07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.401630 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.401681 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.401701 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.401721 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.401736 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:43Z","lastTransitionTime":"2026-01-31T07:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.414844 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63a72bdc-ae2c-4ce3-bad4-877f01e2b370\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b561e0c482987fdc1c08682d00bf4c933f12313569408446d6c8a603c3556ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3caf57a9650c133c55f50d69726a83d40040bc7f173d536444d89f99854942ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30b86aff0d4af7948ead36a9ed2f4fa0fb5888de8ecdfa6eb6ffc1867030babe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf2c3303222b7cdaf3b5aa2c1847ff9e097579edeb4f7333079c340cc95ef46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975a5a5c09f940e9a57b6c6df02fdf725740fb21d70046ad5bbe3f24dec7a0bd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"iserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 07:36:22.700868 1 builder.go:304] 
check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:36:22.705249 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2348209780/tls.crt::/tmp/serving-cert-2348209780/tls.key\\\\\\\"\\\\nI0131 07:36:28.522043 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 07:36:28.528635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 07:36:28.528685 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 07:36:28.528725 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 07:36:28.528736 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 07:36:28.539744 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 07:36:28.539766 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539773 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 07:36:28.539783 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 07:36:28.539787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 07:36:28.539791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 07:36:28.539944 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 07:36:28.545260 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0131 07:36:28.545358 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd3634cc3552a315ee1ef8114db096d456b7a97a92e68ab406ceccdf1eb57f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:43Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.432374 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:43Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.449688 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:43Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.463650 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tp2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d1fbcfd-bce9-4f1c-bc64-d48b979a95d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91dfef0e05500b5bc7d60a1905667d2ddf67a56af60dc72a55d1b29223d6586a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5x55d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tp2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T07:36:43Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.501291 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5db04412-b62f-417c-91ac-776767d6102f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9582c6ad791fee2ee8c393be81fbdbd3d199c4e6b209271220ca44dd3486e32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a52441f1fec1f477e0c704812d8f56cf91a8b76eff95fdc22d42b1b20453e14\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:36:40Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 07:36:40.324549 6108 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 07:36:40.324570 6108 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 07:36:40.324597 6108 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 07:36:40.324605 6108 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 07:36:40.324765 6108 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 07:36:40.324815 6108 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 07:36:40.324830 6108 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 07:36:40.324837 6108 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 07:36:40.324840 6108 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 07:36:40.324855 6108 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 07:36:40.324854 6108 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 07:36:40.324874 6108 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 07:36:40.324896 6108 factory.go:656] Stopping watch factory\\\\nI0131 07:36:40.324899 6108 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 07:36:40.324916 6108 ovnkube.go:599] Stopped ovnkube\\\\nI0131 
07:36:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9582c6ad791fee2ee8c393be81fbdbd3d199c4e6b209271220ca44dd3486e32\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:36:42Z\\\",\\\"message\\\":\\\"ransact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-canary/ingress-canary]} name:Service_openshift-ingress-canary/ingress-canary_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.34:8443: 10.217.5.34:8888:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7715118b-bb1b-400a-803e-7ab2cc3eeec0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 07:36:42.033675 6240 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-5fm7w\\\\nI0131 07:36:42.035113 6240 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-5fm7w\\\\nI0131 07:36:42.035129 6240 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-5fm7w in node crc\\\\nI0131 07:36:42.035141 6240 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-5fm7w after 0 failed attempt(s)\\\\nI0131 07:36:42.035152 6240 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-5fm7w\\\\nI0131 
07:36:42.0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47
ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvwnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:43Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.504342 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.504388 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.504405 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.504427 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.504443 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:43Z","lastTransitionTime":"2026-01-31T07:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.516862 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6lwnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b53fa37-2f8b-49d4-bd96-2bfe008beba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4800822b9eb7af18af4eaf5699b582b98e900e6a645e17ec2a5a09757777ad75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm9x7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6lwnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:43Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.538494 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37289ae623d633b8d9c41697e0022bac0ef40df6da836d43df3bb36e83a7a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:43Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.555930 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d1814f450832909bd5bf1c4749feef46b0919e997cb1d241c988f4ec81051d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e440dc1f7846af93bbe78f17e92d44d2f72ee0fc28c4c3b746ee3f67f0cd899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:43Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.572556 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed10f53b-565a-4d14-a1d8-feabc15f08ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d04378db699fb51e197d67da8fbc728904d522724c2a02158c546497905d106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03515f366b860c28601b042a9a45c022c9a25aa15fab97b3e3698d3d7c1eb37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8v6ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:43Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.592415 4826 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-wtbb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b672fd90-a70c-4f27-b711-e58f269efccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dc6a9e1063a189fc3331ef1bc5bd05ac611295089d1da241ca07c529e63ab7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-wtbb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:43Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.607772 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.607836 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.607858 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.607889 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.607911 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:43Z","lastTransitionTime":"2026-01-31T07:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.625837 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0580356-3fa0-4446-b20b-afc4164435c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e817cf54f4d9940c02491ebbe29ded4c0e8b3832c315b04eba99a9d36d267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\
\\"}]},{\\\"containerID\\\":\\\"cri-o://0b99d4cba5d569a977b7d05a7190498a4e34fefbf7dc44db8148ba398de0f5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://adf31702fe035db3a963262acd0c20802f8fd7230cf5c01cb578fa88179bd129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://161a31c4d5a9b58bef37d7177019e6eec0b53c956877c212337834672f4ac5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1796ef962408ccd32578b856d0339f68286f421ffbb915e0ae46c7a5989d1958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061
e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:43Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.642419 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"661bc240-bc34-44a0-bd2f-72708c4a5dc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc26e2f01400aeeb0956ae2aef52df0251350f193805892e534081b4cac248e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa95fe2bb0adc0053648f26282e011bad0ea1327773688c954b8ffbf222d6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4124eac7bdc757ef7c1f68b28fb1f54a56cda06e9b781a026277efe04172f8e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e986869735ddfd85e88828d0123e35bbc507c1d31efba8c420632539ff4a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:43Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.659436 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d2b49940601c069c38a05027bd5e801dd377d4aed46a502c192ca9941468a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-31T07:36:43Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.679617 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5fm7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baebdb6a236b3414ff5aac56d2f2c04503b7d59c18aa78ef295484087a866b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e9
dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf9c0eb924928bb2a7752b55e6ccb0fccc514359751bdf295f4f88820ac0bddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf9c0eb924928bb2a7752b55e6ccb0fccc514359751bdf295f4f88820ac0bddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5fm7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:43Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.711021 4826 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.711059 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.711072 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.711089 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.711103 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:43Z","lastTransitionTime":"2026-01-31T07:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.767745 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 03:32:36.109666809 +0000 UTC Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.808621 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:36:43 crc kubenswrapper[4826]: E0131 07:36:43.808765 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.813499 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.813535 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.813546 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.813561 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.813574 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:43Z","lastTransitionTime":"2026-01-31T07:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.915963 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.916053 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.916072 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.916100 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.916118 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:43Z","lastTransitionTime":"2026-01-31T07:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:43 crc kubenswrapper[4826]: I0131 07:36:43.997962 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-qrw7j"] Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:43.998682 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:36:44 crc kubenswrapper[4826]: E0131 07:36:43.998778 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qrw7j" podUID="251ad51e-c383-4684-bfdb-2b9ce8098cc6" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.014030 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7xbj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32b69655-b895-4001-8f50-17a9d73056f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6e04fe556789c17beb1c110cda356f63730016f0d8f00d968ec9af1a10f658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6xcl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15eef89d03854f4fb45924748ccf1c12133be03efd45374daa2cceafd673136a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6xcl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h7xbj\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:44Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.018831 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.018884 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.018902 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.018929 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.018949 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:44Z","lastTransitionTime":"2026-01-31T07:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.027187 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:44Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.038600 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:44Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.056564 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:44Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.071696 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tp2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d1fbcfd-bce9-4f1c-bc64-d48b979a95d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91dfef0e05500b5bc7d60a1905667d2ddf67a56af60dc72a55d1b29223d6586a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5x55d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tp2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T07:36:44Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.102444 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5db04412-b62f-417c-91ac-776767d6102f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9582c6ad791fee2ee8c393be81fbdbd3d199c4e6b209271220ca44dd3486e32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a52441f1fec1f477e0c704812d8f56cf91a8b76eff95fdc22d42b1b20453e14\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:36:40Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 07:36:40.324549 6108 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 07:36:40.324570 6108 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 07:36:40.324597 6108 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 07:36:40.324605 6108 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 07:36:40.324765 6108 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 07:36:40.324815 6108 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 07:36:40.324830 6108 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 07:36:40.324837 6108 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 07:36:40.324840 6108 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 07:36:40.324855 6108 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 07:36:40.324854 6108 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 07:36:40.324874 6108 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 07:36:40.324896 6108 factory.go:656] Stopping watch factory\\\\nI0131 07:36:40.324899 6108 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 07:36:40.324916 6108 ovnkube.go:599] Stopped ovnkube\\\\nI0131 
07:36:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9582c6ad791fee2ee8c393be81fbdbd3d199c4e6b209271220ca44dd3486e32\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:36:42Z\\\",\\\"message\\\":\\\"ransact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-canary/ingress-canary]} name:Service_openshift-ingress-canary/ingress-canary_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.34:8443: 10.217.5.34:8888:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7715118b-bb1b-400a-803e-7ab2cc3eeec0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 07:36:42.033675 6240 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-5fm7w\\\\nI0131 07:36:42.035113 6240 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-5fm7w\\\\nI0131 07:36:42.035129 6240 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-5fm7w in node crc\\\\nI0131 07:36:42.035141 6240 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-5fm7w after 0 failed attempt(s)\\\\nI0131 07:36:42.035152 6240 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-5fm7w\\\\nI0131 
07:36:42.0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47
ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvwnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:44Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.103517 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv8tx\" (UniqueName: \"kubernetes.io/projected/251ad51e-c383-4684-bfdb-2b9ce8098cc6-kube-api-access-gv8tx\") pod \"network-metrics-daemon-qrw7j\" (UID: \"251ad51e-c383-4684-bfdb-2b9ce8098cc6\") " pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.103644 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/251ad51e-c383-4684-bfdb-2b9ce8098cc6-metrics-certs\") pod \"network-metrics-daemon-qrw7j\" (UID: \"251ad51e-c383-4684-bfdb-2b9ce8098cc6\") " pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.118120 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6lwnf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b53fa37-2f8b-49d4-bd96-2bfe008beba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4800822b9eb7af18af4eaf5699b582b98e900e6a645e17ec2a5a09757777ad75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm9x7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6lwnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:44Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.122332 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.122388 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.122406 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.122431 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.122447 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:44Z","lastTransitionTime":"2026-01-31T07:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.141375 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63a72bdc-ae2c-4ce3-bad4-877f01e2b370\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b561e0c482987fdc1c08682d00bf4c933f12313569408446d6c8a603c3556ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3caf57a9650c133c55f50d69726a83d40040bc7f173d536444d89f99854942ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30b86aff0d4af7948ead36a9ed2f4fa0fb5888de8ecdfa6eb6ffc1867030babe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f
7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf2c3303222b7cdaf3b5aa2c1847ff9e097579edeb4f7333079c340cc95ef46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975a5a5c09f940e9a57b6c6df02fdf725740fb21d70046ad5bbe3f24dec7a0bd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"iserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 07:36:22.700868 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:36:22.705249 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2348209780/tls.crt::/tmp/serving-cert-2348209780/tls.key\\\\\\\"\\\\nI0131 07:36:28.522043 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 07:36:28.528635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 07:36:28.528685 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 07:36:28.528725 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 07:36:28.528736 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 07:36:28.539744 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 07:36:28.539766 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539773 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 07:36:28.539783 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 07:36:28.539787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 07:36:28.539791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 07:36:28.539944 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 07:36:28.545260 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0131 07:36:28.545358 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd3634cc3552a315ee1ef8114db096d456b7a97a92e68ab406ceccdf1eb57f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:44Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.153251 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qvwnb_5db04412-b62f-417c-91ac-776767d6102f/ovnkube-controller/1.log" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.161102 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed10f53b-565a-4d14-a1d8-feabc15f08ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d04378db699fb51e197d67da8fbc728904d522724c2a02158c546497905d106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03515f366b860c28601b042a9a45c022c9a25aa15fab97b3e3698d3d7c1eb37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8v6ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:44Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.179918 4826 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-wtbb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b672fd90-a70c-4f27-b711-e58f269efccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dc6a9e1063a189fc3331ef1bc5bd05ac611295089d1da241ca07c529e63ab7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-wtbb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:44Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.198380 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37289ae623d633b8d9c41697e0022bac0ef40df6da836d43df3bb36e83a7a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:44Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.204522 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv8tx\" (UniqueName: \"kubernetes.io/projected/251ad51e-c383-4684-bfdb-2b9ce8098cc6-kube-api-access-gv8tx\") pod \"network-metrics-daemon-qrw7j\" (UID: \"251ad51e-c383-4684-bfdb-2b9ce8098cc6\") " pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.204626 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/251ad51e-c383-4684-bfdb-2b9ce8098cc6-metrics-certs\") pod \"network-metrics-daemon-qrw7j\" (UID: \"251ad51e-c383-4684-bfdb-2b9ce8098cc6\") " pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:36:44 crc kubenswrapper[4826]: E0131 07:36:44.204790 4826 
secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 07:36:44 crc kubenswrapper[4826]: E0131 07:36:44.204881 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/251ad51e-c383-4684-bfdb-2b9ce8098cc6-metrics-certs podName:251ad51e-c383-4684-bfdb-2b9ce8098cc6 nodeName:}" failed. No retries permitted until 2026-01-31 07:36:44.704857216 +0000 UTC m=+36.558743585 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/251ad51e-c383-4684-bfdb-2b9ce8098cc6-metrics-certs") pod "network-metrics-daemon-qrw7j" (UID: "251ad51e-c383-4684-bfdb-2b9ce8098cc6") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.217531 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d1814f450832909bd5bf1c4749feef46b0919e997cb1d241c988f4ec81051d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e440dc1f7846af93bbe78f17e92d44d2f72ee0fc28c4c3b746ee3f67f0cd899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/s
ecrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:44Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.225110 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.225158 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.225170 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.225189 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.225201 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:44Z","lastTransitionTime":"2026-01-31T07:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.231565 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv8tx\" (UniqueName: \"kubernetes.io/projected/251ad51e-c383-4684-bfdb-2b9ce8098cc6-kube-api-access-gv8tx\") pod \"network-metrics-daemon-qrw7j\" (UID: \"251ad51e-c383-4684-bfdb-2b9ce8098cc6\") " pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.239186 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d2b49940601c069c38a05027bd5e801dd377d4aed46a502c192ca9941468a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:44Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.262062 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5fm7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baebdb6a236b3414ff5aac56d2f2c04503b7d59c18aa78ef295484087a866b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf9c0eb924928bb2a7752b55e6ccb0fccc514359751bdf295f4f88820ac0bddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf9c0eb924928bb2a7752b55e6ccb0fccc514359751bdf295f4f88820ac0bddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5fm7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:44Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.276529 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qrw7j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"251ad51e-c383-4684-bfdb-2b9ce8098cc6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv8tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv8tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qrw7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:44Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.309682 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0580356-3fa0-4446-b20b-afc4164435c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e817cf54f4d9940c02491ebbe29ded4c0e8b3832c315b04eba99a9d36d267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b99d4cba5d569a977b7d05a7190498a4e34fefbf7dc44db8148ba398de0f5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://adf31702fe035db3a963262acd0c20802f8fd7230cf5c01cb578fa88179bd129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://161a31c4d5a9b58bef37d7177019e6eec0b53c9
56877c212337834672f4ac5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1796ef962408ccd32578b856d0339f68286f421ffbb915e0ae46c7a5989d1958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:44Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.327739 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.327794 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.327813 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.327834 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.327850 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:44Z","lastTransitionTime":"2026-01-31T07:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.327826 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"661bc240-bc34-44a0-bd2f-72708c4a5dc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc26e2f01400aeeb0956ae2aef52df0251350f193805892e534081b4cac248e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa95fe2bb0adc0053648f26282e011bad0ea1327773688c954b8ffbf222d6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4124eac7bdc757ef7c1f68b28fb1f54a56cda06e9b781a026277efe04172f8e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e986869735ddfd85e88828d0123e35bbc507c1d31efba8c420632539ff4a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:44Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.430412 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.430469 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.430490 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.430514 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.430532 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:44Z","lastTransitionTime":"2026-01-31T07:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.533138 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.533250 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.533274 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.533305 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.533328 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:44Z","lastTransitionTime":"2026-01-31T07:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.608515 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:36:44 crc kubenswrapper[4826]: E0131 07:36:44.608760 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:37:00.60871955 +0000 UTC m=+52.462605949 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.636116 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.636168 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.636185 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.636208 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.636224 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:44Z","lastTransitionTime":"2026-01-31T07:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.709706 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/251ad51e-c383-4684-bfdb-2b9ce8098cc6-metrics-certs\") pod \"network-metrics-daemon-qrw7j\" (UID: \"251ad51e-c383-4684-bfdb-2b9ce8098cc6\") " pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.709764 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.709817 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.709857 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.709904 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:36:44 crc kubenswrapper[4826]: E0131 07:36:44.709920 4826 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 07:36:44 crc kubenswrapper[4826]: E0131 07:36:44.710117 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 07:36:44 crc kubenswrapper[4826]: E0131 07:36:44.710124 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 07:36:44 crc kubenswrapper[4826]: E0131 07:36:44.710165 4826 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 07:36:44 crc kubenswrapper[4826]: E0131 07:36:44.710202 4826 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 07:36:44 crc kubenswrapper[4826]: E0131 07:36:44.710174 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 07:36:44 crc kubenswrapper[4826]: E0131 07:36:44.710288 4826 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:36:44 crc kubenswrapper[4826]: E0131 07:36:44.710144 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 07:36:44 crc kubenswrapper[4826]: E0131 07:36:44.710352 4826 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:36:44 crc kubenswrapper[4826]: E0131 07:36:44.710656 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/251ad51e-c383-4684-bfdb-2b9ce8098cc6-metrics-certs podName:251ad51e-c383-4684-bfdb-2b9ce8098cc6 nodeName:}" failed. No retries permitted until 2026-01-31 07:36:45.710114534 +0000 UTC m=+37.564000933 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/251ad51e-c383-4684-bfdb-2b9ce8098cc6-metrics-certs") pod "network-metrics-daemon-qrw7j" (UID: "251ad51e-c383-4684-bfdb-2b9ce8098cc6") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 07:36:44 crc kubenswrapper[4826]: E0131 07:36:44.710717 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 07:37:00.71069964 +0000 UTC m=+52.564586039 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 07:36:44 crc kubenswrapper[4826]: E0131 07:36:44.710773 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 07:37:00.710729561 +0000 UTC m=+52.564615960 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 07:36:44 crc kubenswrapper[4826]: E0131 07:36:44.710819 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 07:37:00.710807733 +0000 UTC m=+52.564694132 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:36:44 crc kubenswrapper[4826]: E0131 07:36:44.710842 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 07:37:00.710830264 +0000 UTC m=+52.564716663 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.739217 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.739287 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.739340 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.739371 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.739393 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:44Z","lastTransitionTime":"2026-01-31T07:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.768936 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 13:45:36.822146508 +0000 UTC Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.809072 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.809095 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:36:44 crc kubenswrapper[4826]: E0131 07:36:44.809325 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:36:44 crc kubenswrapper[4826]: E0131 07:36:44.809399 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.843039 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.843096 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.843116 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.843140 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.843160 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:44Z","lastTransitionTime":"2026-01-31T07:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.946439 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.946492 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.946504 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.946523 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:44 crc kubenswrapper[4826]: I0131 07:36:44.946537 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:44Z","lastTransitionTime":"2026-01-31T07:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.048910 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.048985 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.049006 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.049033 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.049050 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:45Z","lastTransitionTime":"2026-01-31T07:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.152085 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.152184 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.152205 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.152233 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.152253 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:45Z","lastTransitionTime":"2026-01-31T07:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.255893 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.256354 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.256366 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.256383 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.256395 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:45Z","lastTransitionTime":"2026-01-31T07:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.271534 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.272589 4826 scope.go:117] "RemoveContainer" containerID="e9582c6ad791fee2ee8c393be81fbdbd3d199c4e6b209271220ca44dd3486e32" Jan 31 07:36:45 crc kubenswrapper[4826]: E0131 07:36:45.272938 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-qvwnb_openshift-ovn-kubernetes(5db04412-b62f-417c-91ac-776767d6102f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" podUID="5db04412-b62f-417c-91ac-776767d6102f" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.291410 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37289ae623d633b8d9c41697e0022bac0ef40df6da836d43df3bb36e83a7a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:45Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.308802 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d1814f450832909bd5bf1c4749feef46b0919e997cb1d241c988f4ec81051d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e440dc1f7846af93bbe78f17e92d44d2f72ee0fc28c4c3b746ee3f67f0cd899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:45Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.323619 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed10f53b-565a-4d14-a1d8-feabc15f08ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d04378db699fb51e197d67da8fbc728904d522724c2a02158c546497905d106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03515f366b860c28601b042a9a45c022c9a25aa15fab97b3e3698d3d7c1eb37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8v6ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:45Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.343659 4826 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-wtbb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b672fd90-a70c-4f27-b711-e58f269efccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dc6a9e1063a189fc3331ef1bc5bd05ac611295089d1da241ca07c529e63ab7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-wtbb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:45Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.359204 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qrw7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"251ad51e-c383-4684-bfdb-2b9ce8098cc6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv8tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv8tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qrw7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-31T07:36:45Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.361071 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.361125 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.361149 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.361179 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.361202 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:45Z","lastTransitionTime":"2026-01-31T07:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.391081 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0580356-3fa0-4446-b20b-afc4164435c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e817cf54f4d9940c02491ebbe29ded4c0e8b3832c315b04eba99a9d36d267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b99d4cba5d569a977b7d05a7190498a4e34fefbf7dc44db8148ba398de0f5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://adf31702fe035db3a963262acd0c20802f8fd7230cf5c01cb578fa88179bd129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://161a31c4d5a9b58bef37d7177019e6eec0b53c956877c212337834672f4ac5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1796ef962408ccd32578b856d0339f68286f421ffbb915e0ae46c7a5989d1958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e
9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:45Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.395528 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.395578 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.395596 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.395617 4826 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.395636 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:45Z","lastTransitionTime":"2026-01-31T07:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.404603 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"661bc240-bc34-44a0-bd2f-72708c4a5dc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc26e2f01400aeeb0956ae2aef52df0251350f193805892e534081b4cac248e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa95fe2bb0adc0053648f26282e011bad0ea1327773688c954b8ffbf222d6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4124eac7bdc757ef7c1f68b28fb1f54a56cda06e9b781a026277efe04172f8e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799
488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e986869735ddfd85e88828d0123e35bbc507c1d31efba8c420632539ff4a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:45Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:45 crc kubenswrapper[4826]: E0131 07:36:45.409211 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"410cfc83-ff74-4210-b833-727c4d6db644\\\",\\\"systemUUID\\\":\\\"5dc4a50b-5ade-4352-ba95-1ca9483f1f64\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-31T07:36:45Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.413179 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.413224 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.413237 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.413255 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.413268 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:45Z","lastTransitionTime":"2026-01-31T07:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.418610 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d2b49940601c069c38a05027bd5e801dd377d4aed46a502c192ca9941468a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:45Z is 
after 2025-08-24T17:21:41Z" Jan 31 07:36:45 crc kubenswrapper[4826]: E0131 07:36:45.426643 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"410cfc83-ff74-4210-b833-727c4d6db644\\\",\\\"systemUUID\\\":\\\"5dc4a50b-5ade-4352-ba95-1ca9483f1f64\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:45Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.431465 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.431503 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.431516 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.431532 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.431544 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:45Z","lastTransitionTime":"2026-01-31T07:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.435191 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5fm7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baebdb6a236b3414ff5aac56d2f2c04503b7d59c18aa78ef295484087a866b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":
\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e
74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf9c0eb924928bb2a7752b55e6ccb0fccc514359751bdf295f4f88820ac0bddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf9c0eb924928bb2a7752b55e6ccb0fccc514359751bdf295f4f88820ac0bddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5fm7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:45Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:45 crc kubenswrapper[4826]: E0131 07:36:45.444947 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"410cfc83-ff74-4210-b833-727c4d6db644\\\",\\\"systemUUID\\\":\\\"5dc4a50b-5ade-4352-ba95-1ca9483f1f64\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:45Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.450107 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.450164 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.450180 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.450206 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.450218 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:45Z","lastTransitionTime":"2026-01-31T07:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.458409 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:45Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:45 crc kubenswrapper[4826]: E0131 07:36:45.467508 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"410cfc83-ff74-4210-b833-727c4d6db644\\\",\\\"systemUUID\\\":\\\"5dc4a50b-5ade-4352-ba95-1ca9483f1f64\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:45Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.471617 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.471667 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.471678 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.471696 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.471710 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:45Z","lastTransitionTime":"2026-01-31T07:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.471931 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7xbj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32b69655-b895-4001-8f50-17a9d73056f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6e04fe556789c17beb1c110cda356f63730016f0d8f00d968ec9af1a10f658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6xcl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15eef89d03854f4fb45924748ccf1c12133be03efd45374daa2cceafd673136a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6xcl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h7xbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:45Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.485419 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tp2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d1fbcfd-bce9-4f1c-bc64-d48b979a95d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91dfef0e05500b5bc7d60a1905667d2ddf67a56af60dc72a55d1b29223d6586a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5x55d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tp2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:45Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:45 crc kubenswrapper[4826]: E0131 07:36:45.489210 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"410cfc83-ff74-4210-b833-727c4d6db644\\\",\\\"systemUUID\\\":\\\"5dc4a50b-5ade-4352-ba95-1ca9483f1f64\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:45Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:45 crc kubenswrapper[4826]: E0131 07:36:45.489425 4826 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.491761 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
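(Aside, not part of the journal output above or below.) Every failed patch in the preceding entries cites the same certificate window, "current time 2026-01-31T07:36:45Z is after 2025-08-24T17:21:41Z", and the payloads are only hard to read because they are quoted twice on their way into the journal: once as the err="..." value and once as the patch embedded inside it as a quoted string. A minimal Python sketch, using only the two timestamps quoted verbatim in the log plus a made-up one-field fragment for the unescaping step, illustrates both points; the fragment and variable names are illustrative, not taken from the log.

    from datetime import datetime, timezone
    import json

    # Timestamps copied verbatim from the webhook errors above.
    not_after = datetime(2025, 8, 24, 17, 21, 41, tzinfo=timezone.utc)  # the "is after" bound in the x509 error
    log_time = datetime(2026, 1, 31, 7, 36, 45, tzinfo=timezone.utc)    # "current time" reported by the kubelet
    print((log_time - not_after).days)  # 159 -> the webhook's serving cert had been expired for roughly five months

    # Hypothetical one-field fragment written in the same double-escaped form as the patches above.
    raw = r'{\\\"phase\\\":\\\"Running\\\"}'
    once = raw.encode().decode("unicode_escape")    # undo the outer layer of quoting (the err="..." value)
    clean = once.encode().decode("unicode_escape")  # undo the inner layer (the patch embedded as a quoted string)
    print(json.loads(clean))                        # {'phase': 'Running'}

The same two-step unescaping applies to the full node and pod patches above; the payloads themselves are well-formed, the kubelet simply cannot deliver them while the network-node-identity webhook presents an expired certificate.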
event="NodeHasSufficientMemory" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.491804 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.491821 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.491844 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.491860 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:45Z","lastTransitionTime":"2026-01-31T07:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.511947 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5db04412-b62f-417c-91ac-776767d6102f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9582c6ad791fee2ee8c393be81fbdbd3d199c4e
6b209271220ca44dd3486e32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9582c6ad791fee2ee8c393be81fbdbd3d199c4e6b209271220ca44dd3486e32\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:36:42Z\\\",\\\"message\\\":\\\"ransact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-canary/ingress-canary]} name:Service_openshift-ingress-canary/ingress-canary_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.34:8443: 10.217.5.34:8888:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7715118b-bb1b-400a-803e-7ab2cc3eeec0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 07:36:42.033675 6240 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-5fm7w\\\\nI0131 07:36:42.035113 6240 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-5fm7w\\\\nI0131 07:36:42.035129 6240 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-5fm7w in node crc\\\\nI0131 07:36:42.035141 6240 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-5fm7w after 0 failed attempt(s)\\\\nI0131 07:36:42.035152 6240 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-5fm7w\\\\nI0131 07:36:42.0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qvwnb_openshift-ovn-kubernetes(5db04412-b62f-417c-91ac-776767d6102f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvwnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:45Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.522287 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6lwnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b53fa37-2f8b-49d4-bd96-2bfe008beba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4800822b9eb7af18af4eaf5699b582b98e900e6a645e17ec2a5a09757777ad75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm9x7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6lwnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:45Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.541363 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63a72bdc-ae2c-4ce3-bad4-877f01e2b370\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b561e0c482987fdc1c08682d00bf4c933f12313569408446d6c8a603c3556ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3caf57a9650c133c55f50d69726a83d40040bc7f173d536444d89f99854942ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30b86aff0d4af7948ead36a9ed2f4fa0fb5888de8ecdfa6eb6ffc1867030babe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf2c3303222b7cdaf3b5aa2c1847ff9e097579edeb4f7333079c340cc95ef46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975a5a5c09f940e9a57b6c6df02fdf725740fb21d70046ad5bbe3f24dec7a0bd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"iserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 07:36:22.700868 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:36:22.705249 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2348209780/tls.crt::/tmp/serving-cert-2348209780/tls.key\\\\\\\"\\\\nI0131 07:36:28.522043 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 07:36:28.528635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 07:36:28.528685 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 07:36:28.528725 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 07:36:28.528736 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 07:36:28.539744 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 07:36:28.539766 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539773 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 07:36:28.539783 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 07:36:28.539787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 07:36:28.539791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 07:36:28.539944 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 07:36:28.545260 1 
requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0131 07:36:28.545358 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd3634cc3552a315ee1ef8114db096d456b7a97a92e68ab406ceccdf1eb57f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:45Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.561100 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:45Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.579158 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:45Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.595068 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.595129 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.595148 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.595171 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.595189 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:45Z","lastTransitionTime":"2026-01-31T07:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.698314 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.698361 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.698369 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.698385 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.698396 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:45Z","lastTransitionTime":"2026-01-31T07:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.721825 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/251ad51e-c383-4684-bfdb-2b9ce8098cc6-metrics-certs\") pod \"network-metrics-daemon-qrw7j\" (UID: \"251ad51e-c383-4684-bfdb-2b9ce8098cc6\") " pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:36:45 crc kubenswrapper[4826]: E0131 07:36:45.721954 4826 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 07:36:45 crc kubenswrapper[4826]: E0131 07:36:45.722103 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/251ad51e-c383-4684-bfdb-2b9ce8098cc6-metrics-certs podName:251ad51e-c383-4684-bfdb-2b9ce8098cc6 nodeName:}" failed. No retries permitted until 2026-01-31 07:36:47.722089209 +0000 UTC m=+39.575975568 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/251ad51e-c383-4684-bfdb-2b9ce8098cc6-metrics-certs") pod "network-metrics-daemon-qrw7j" (UID: "251ad51e-c383-4684-bfdb-2b9ce8098cc6") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.769578 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 18:30:13.996042657 +0000 UTC Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.802015 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.802147 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.802187 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.802228 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.802252 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:45Z","lastTransitionTime":"2026-01-31T07:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.808707 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.808726 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:36:45 crc kubenswrapper[4826]: E0131 07:36:45.808903 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:36:45 crc kubenswrapper[4826]: E0131 07:36:45.809066 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qrw7j" podUID="251ad51e-c383-4684-bfdb-2b9ce8098cc6" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.905822 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.905897 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.905919 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.905951 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:45 crc kubenswrapper[4826]: I0131 07:36:45.906012 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:45Z","lastTransitionTime":"2026-01-31T07:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:46 crc kubenswrapper[4826]: I0131 07:36:46.009348 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:46 crc kubenswrapper[4826]: I0131 07:36:46.009451 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:46 crc kubenswrapper[4826]: I0131 07:36:46.009473 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:46 crc kubenswrapper[4826]: I0131 07:36:46.009531 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:46 crc kubenswrapper[4826]: I0131 07:36:46.009554 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:46Z","lastTransitionTime":"2026-01-31T07:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:46 crc kubenswrapper[4826]: I0131 07:36:46.113163 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:46 crc kubenswrapper[4826]: I0131 07:36:46.113231 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:46 crc kubenswrapper[4826]: I0131 07:36:46.113248 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:46 crc kubenswrapper[4826]: I0131 07:36:46.113274 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:46 crc kubenswrapper[4826]: I0131 07:36:46.113292 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:46Z","lastTransitionTime":"2026-01-31T07:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:46 crc kubenswrapper[4826]: I0131 07:36:46.215687 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:46 crc kubenswrapper[4826]: I0131 07:36:46.215775 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:46 crc kubenswrapper[4826]: I0131 07:36:46.216044 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:46 crc kubenswrapper[4826]: I0131 07:36:46.216074 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:46 crc kubenswrapper[4826]: I0131 07:36:46.216392 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:46Z","lastTransitionTime":"2026-01-31T07:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:46 crc kubenswrapper[4826]: I0131 07:36:46.319504 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:46 crc kubenswrapper[4826]: I0131 07:36:46.319568 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:46 crc kubenswrapper[4826]: I0131 07:36:46.319585 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:46 crc kubenswrapper[4826]: I0131 07:36:46.319609 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:46 crc kubenswrapper[4826]: I0131 07:36:46.319627 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:46Z","lastTransitionTime":"2026-01-31T07:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:46 crc kubenswrapper[4826]: I0131 07:36:46.422244 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:46 crc kubenswrapper[4826]: I0131 07:36:46.422286 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:46 crc kubenswrapper[4826]: I0131 07:36:46.422302 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:46 crc kubenswrapper[4826]: I0131 07:36:46.422318 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:46 crc kubenswrapper[4826]: I0131 07:36:46.422331 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:46Z","lastTransitionTime":"2026-01-31T07:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:46 crc kubenswrapper[4826]: I0131 07:36:46.524874 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:46 crc kubenswrapper[4826]: I0131 07:36:46.524910 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:46 crc kubenswrapper[4826]: I0131 07:36:46.524919 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:46 crc kubenswrapper[4826]: I0131 07:36:46.524933 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:46 crc kubenswrapper[4826]: I0131 07:36:46.524940 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:46Z","lastTransitionTime":"2026-01-31T07:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:46 crc kubenswrapper[4826]: I0131 07:36:46.627227 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:46 crc kubenswrapper[4826]: I0131 07:36:46.627274 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:46 crc kubenswrapper[4826]: I0131 07:36:46.627282 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:46 crc kubenswrapper[4826]: I0131 07:36:46.627297 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:46 crc kubenswrapper[4826]: I0131 07:36:46.627308 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:46Z","lastTransitionTime":"2026-01-31T07:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:46 crc kubenswrapper[4826]: I0131 07:36:46.730907 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:46 crc kubenswrapper[4826]: I0131 07:36:46.731038 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:46 crc kubenswrapper[4826]: I0131 07:36:46.731058 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:46 crc kubenswrapper[4826]: I0131 07:36:46.731098 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:46 crc kubenswrapper[4826]: I0131 07:36:46.731118 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:46Z","lastTransitionTime":"2026-01-31T07:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:46 crc kubenswrapper[4826]: I0131 07:36:46.769707 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 04:54:02.787664376 +0000 UTC Jan 31 07:36:46 crc kubenswrapper[4826]: I0131 07:36:46.808142 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:36:46 crc kubenswrapper[4826]: E0131 07:36:46.808348 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:36:46 crc kubenswrapper[4826]: I0131 07:36:46.808401 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:36:46 crc kubenswrapper[4826]: E0131 07:36:46.808709 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:36:46 crc kubenswrapper[4826]: I0131 07:36:46.833915 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:46 crc kubenswrapper[4826]: I0131 07:36:46.834041 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:46 crc kubenswrapper[4826]: I0131 07:36:46.834074 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:46 crc kubenswrapper[4826]: I0131 07:36:46.834106 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:46 crc kubenswrapper[4826]: I0131 07:36:46.834133 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:46Z","lastTransitionTime":"2026-01-31T07:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:46 crc kubenswrapper[4826]: I0131 07:36:46.941179 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:46 crc kubenswrapper[4826]: I0131 07:36:46.941256 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:46 crc kubenswrapper[4826]: I0131 07:36:46.941288 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:46 crc kubenswrapper[4826]: I0131 07:36:46.941317 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:46 crc kubenswrapper[4826]: I0131 07:36:46.941340 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:46Z","lastTransitionTime":"2026-01-31T07:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.044450 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.044511 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.044523 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.044542 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.044559 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:47Z","lastTransitionTime":"2026-01-31T07:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.147661 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.147731 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.147755 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.147783 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.147805 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:47Z","lastTransitionTime":"2026-01-31T07:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.250368 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.250443 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.250466 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.250488 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.250505 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:47Z","lastTransitionTime":"2026-01-31T07:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.353754 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.353809 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.353827 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.353852 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.353870 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:47Z","lastTransitionTime":"2026-01-31T07:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.457321 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.457393 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.457415 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.457506 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.457533 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:47Z","lastTransitionTime":"2026-01-31T07:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.561288 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.561362 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.561400 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.561434 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.561459 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:47Z","lastTransitionTime":"2026-01-31T07:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.664404 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.664453 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.664524 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.664549 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.664562 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:47Z","lastTransitionTime":"2026-01-31T07:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.743752 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/251ad51e-c383-4684-bfdb-2b9ce8098cc6-metrics-certs\") pod \"network-metrics-daemon-qrw7j\" (UID: \"251ad51e-c383-4684-bfdb-2b9ce8098cc6\") " pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:36:47 crc kubenswrapper[4826]: E0131 07:36:47.744035 4826 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 07:36:47 crc kubenswrapper[4826]: E0131 07:36:47.744123 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/251ad51e-c383-4684-bfdb-2b9ce8098cc6-metrics-certs podName:251ad51e-c383-4684-bfdb-2b9ce8098cc6 nodeName:}" failed. No retries permitted until 2026-01-31 07:36:51.744094508 +0000 UTC m=+43.597980897 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/251ad51e-c383-4684-bfdb-2b9ce8098cc6-metrics-certs") pod "network-metrics-daemon-qrw7j" (UID: "251ad51e-c383-4684-bfdb-2b9ce8098cc6") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.767667 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.767765 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.767783 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.767806 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.767823 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:47Z","lastTransitionTime":"2026-01-31T07:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.770839 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 20:25:47.453197875 +0000 UTC Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.808129 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.808200 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:36:47 crc kubenswrapper[4826]: E0131 07:36:47.808250 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:36:47 crc kubenswrapper[4826]: E0131 07:36:47.808329 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qrw7j" podUID="251ad51e-c383-4684-bfdb-2b9ce8098cc6" Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.857400 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.869576 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.869636 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.869649 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.869662 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.869672 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:47Z","lastTransitionTime":"2026-01-31T07:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.874537 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"661bc240-bc34-44a0-bd2f-72708c4a5dc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc26e2f01400aeeb0956ae2aef52df0251350f193805892e534081b4cac248e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa95fe2bb0adc0053648f26282e011bad0ea1327773688c954b8ffbf222d6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4124eac7bdc757ef7c1f68b28fb1f54a56cda06e9b781a026277efe04172f8e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e986869735ddfd85e88828d0123e35bbc507c1d31efba8c420632539ff4a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:47Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.897516 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d2b49940601c069c38a05027bd5e801dd377d4aed46a502c192ca9941468a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:47Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.911534 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5fm7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baebdb6a236b3414ff5aac56d2f2c04503b7d59c18aa78ef295484087a866b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf9c0eb924928bb2a7752b55e6ccb0fccc514359751bdf295f4f88820ac0bddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf9c0eb924928bb2a7752b55e6ccb0fccc514359751bdf295f4f88820ac0bddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5fm7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:47Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.924117 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qrw7j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"251ad51e-c383-4684-bfdb-2b9ce8098cc6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv8tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv8tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qrw7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:47Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.942452 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0580356-3fa0-4446-b20b-afc4164435c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e817cf54f4d9940c02491ebbe29ded4c0e8b3832c315b04eba99a9d36d267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b99d4cba5d569a977b7d05a7190498a4e34fefbf7dc44db8148ba398de0f5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://adf31702fe035db3a963262acd0c20802f8fd7230cf5c01cb578fa88179bd129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://161a31c4d5a9b58bef37d7177019e6eec0b53c9
56877c212337834672f4ac5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1796ef962408ccd32578b856d0339f68286f421ffbb915e0ae46c7a5989d1958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:47Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.962083 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:47Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.972730 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.972797 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.972814 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.972837 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.972855 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:47Z","lastTransitionTime":"2026-01-31T07:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.977587 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7xbj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32b69655-b895-4001-8f50-17a9d73056f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6e04fe556789c17beb1c110cda356f63730016f0d8f00d968ec9af1a10f658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6xcl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15eef89d03854f4fb45924748ccf1c12133be03efd45374daa2cceafd673136a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6xcl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h7xbj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:47Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:47 crc kubenswrapper[4826]: I0131 07:36:47.996317 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:47Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.010505 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:48Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.020787 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tp2c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d1fbcfd-bce9-4f1c-bc64-d48b979a95d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91dfef0e05500b5bc7d60a1905667d2ddf67a56af60dc72a55d1b29223d6586a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5x55d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tp2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:48Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.040584 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5db04412-b62f-417c-91ac-776767d6102f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9582c6ad791fee2ee8c393be81fbdbd3d199c4e6b209271220ca44dd3486e32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9582c6ad791fee2ee8c393be81fbdbd3d199c4e6b209271220ca44dd3486e32\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:36:42Z\\\",\\\"message\\\":\\\"ransact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-canary/ingress-canary]} name:Service_openshift-ingress-canary/ingress-canary_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.34:8443: 10.217.5.34:8888:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7715118b-bb1b-400a-803e-7ab2cc3eeec0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 07:36:42.033675 6240 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-5fm7w\\\\nI0131 07:36:42.035113 6240 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-5fm7w\\\\nI0131 07:36:42.035129 6240 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-5fm7w in node crc\\\\nI0131 07:36:42.035141 6240 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-5fm7w after 0 failed attempt(s)\\\\nI0131 07:36:42.035152 6240 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-5fm7w\\\\nI0131 07:36:42.0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qvwnb_openshift-ovn-kubernetes(5db04412-b62f-417c-91ac-776767d6102f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvwnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:48Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.054925 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6lwnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b53fa37-2f8b-49d4-bd96-2bfe008beba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4800822b9eb7af18af4eaf5699b582b98e900e6a645e17ec2a5a09757777ad75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm9x7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6lwnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:48Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.073549 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63a72bdc-ae2c-4ce3-bad4-877f01e2b370\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b561e0c482987fdc1c08682d00bf4c933f12313569408446d6c8a603c3556ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3caf57a9650c133c55f50d69726a83d40040bc7f173d536444d89f99854942ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30b86aff0d4af7948ead36a9ed2f4fa0fb5888de8ecdfa6eb6ffc1867030babe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-
apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf2c3303222b7cdaf3b5aa2c1847ff9e097579edeb4f7333079c340cc95ef46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975a5a5c09f940e9a57b6c6df02fdf725740fb21d70046ad5bbe3f24dec7a0bd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"iserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 07:36:22.700868 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:36:22.705249 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2348209780/tls.crt::/tmp/serving-cert-2348209780/tls.key\\\\\\\"\\\\nI0131 07:36:28.522043 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 07:36:28.528635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 07:36:28.528685 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 07:36:28.528725 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 07:36:28.528736 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 07:36:28.539744 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 07:36:28.539766 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539773 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 07:36:28.539783 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 07:36:28.539787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 07:36:28.539791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 07:36:28.539944 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 07:36:28.545260 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0131 07:36:28.545358 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd3634cc3552a315ee1ef8114db096d456b7a97a92e68ab406ceccdf1eb57f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:48Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.075702 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.075730 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.075742 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.075758 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.075770 4826 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:48Z","lastTransitionTime":"2026-01-31T07:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.091537 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d1814f450832909bd5bf1c4749feef46b0919e997cb1d241c988f4ec81051d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e440dc1f7846af93bbe78f17e92d44d2f72ee0fc28c4c3b746ee3f67f0cd899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:48Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.107294 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed10f53b-565a-4d14-a1d8-feabc15f08ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d04378db699fb51e197d67da8fbc728904d522724c2a02158c546497905d106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03515f366b860c28601b042a9a45c022c9a25aa15fab97b3e3698d3d7c1eb37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8v6ng\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:48Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.128330 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wtbb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b672fd90-a70c-4f27-b711-e58f269efccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dc6a9e1063a189fc3331ef1bc5bd05ac611295089d1da241ca07c529e63ab7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-qfvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wtbb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:48Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.147128 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37289ae623d633b8d9c41697e0022bac0ef40df6da836d43df3bb36e83a7a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:48Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.178497 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.178538 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.178548 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.178567 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.178584 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:48Z","lastTransitionTime":"2026-01-31T07:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.281142 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.281193 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.281211 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.281233 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.281252 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:48Z","lastTransitionTime":"2026-01-31T07:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.384861 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.384928 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.385001 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.385035 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.385059 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:48Z","lastTransitionTime":"2026-01-31T07:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.487736 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.487773 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.487785 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.487800 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.487812 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:48Z","lastTransitionTime":"2026-01-31T07:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.590240 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.590283 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.590291 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.590304 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.590313 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:48Z","lastTransitionTime":"2026-01-31T07:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.693515 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.693569 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.693598 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.693623 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.693642 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:48Z","lastTransitionTime":"2026-01-31T07:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.771406 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 05:45:36.466250221 +0000 UTC Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.797377 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.797422 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.797433 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.797449 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.797458 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:48Z","lastTransitionTime":"2026-01-31T07:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.807942 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.808039 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:36:48 crc kubenswrapper[4826]: E0131 07:36:48.808116 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:36:48 crc kubenswrapper[4826]: E0131 07:36:48.808316 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.828333 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d1814f450832909bd5bf1c4749feef46b0919e997cb1d241c988f4ec81051d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e440dc1f7846af93bbe78f17e92d44d2f72ee0fc28c4c3b746ee3f67f0cd899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:48Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.843525 4826 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed10f53b-565a-4d14-a1d8-feabc15f08ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d04378db699fb51e197d67da8fbc728904d522724c2a02158c546497905d106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03515f366b860c28601b042a9a45c022c9a25aa15fab97b3e3698d3d7c1eb37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8v6ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:48Z is after 2025-08-24T17:21:41Z" Jan 31 
07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.862733 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wtbb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b672fd90-a70c-4f27-b711-e58f269efccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dc6a9e1063a189fc3331ef1bc5bd05ac611295089d1da241ca07c529e63ab7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wtbb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:48Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.881688 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37289ae623d633b8d9c41697e0022bac0ef40df6da836d43df3bb36e83a7a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:48Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.897959 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"661bc240-bc34-44a0-bd2f-72708c4a5dc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc26e2f01400aeeb0956ae2aef52df0251350f193805892e534081b4cac248e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa95fe2bb0adc0053648f26282e011bad0ea1327773688c954b8ffbf222d6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4124eac7bdc757ef7c1f68b28fb1f54a56cda06e9b781a026277efe04172f8e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e986869735ddfd85e88828d0123e35bbc507c1d31efba8c420632539ff4a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:48Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.899433 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.899467 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.899475 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.899514 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.899530 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:48Z","lastTransitionTime":"2026-01-31T07:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.913917 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d2b49940601c069c38a05027bd5e801dd377d4aed46a502c192ca9941468a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:48Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.935879 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5fm7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baebdb6a236b3414ff5aac56d2f2c04503b7d59c18aa78ef295484087a866b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf9c0eb924928bb2a7752b55e6ccb0fccc514359751bdf295f4f88820ac0bddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf9c0eb924928bb2a7752b55e6ccb0fccc514359751bdf295f4f88820ac0bddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5fm7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:48Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.949566 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qrw7j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"251ad51e-c383-4684-bfdb-2b9ce8098cc6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv8tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv8tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qrw7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:48Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.979352 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0580356-3fa0-4446-b20b-afc4164435c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e817cf54f4d9940c02491ebbe29ded4c0e8b3832c315b04eba99a9d36d267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b99d4cba5d569a977b7d05a7190498a4e34fefbf7dc44db8148ba398de0f5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://adf31702fe035db3a963262acd0c20802f8fd7230cf5c01cb578fa88179bd129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://161a31c4d5a9b58bef37d7177019e6eec0b53c9
56877c212337834672f4ac5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1796ef962408ccd32578b856d0339f68286f421ffbb915e0ae46c7a5989d1958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:48Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:48 crc kubenswrapper[4826]: I0131 07:36:48.995304 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:48Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.002040 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.002085 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.002105 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.002127 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.002140 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:49Z","lastTransitionTime":"2026-01-31T07:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.011331 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7xbj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32b69655-b895-4001-8f50-17a9d73056f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6e04fe556789c17beb1c110cda356f63730016f0d8f00d968ec9af1a10f658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6xcl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15eef89d03854f4fb45924748ccf1c12133be03efd45374daa2cceafd673136a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6xcl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h7xbj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:49Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.029578 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:49Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.047751 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:49Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.062538 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tp2c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d1fbcfd-bce9-4f1c-bc64-d48b979a95d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91dfef0e05500b5bc7d60a1905667d2ddf67a56af60dc72a55d1b29223d6586a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5x55d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tp2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:49Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.091404 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5db04412-b62f-417c-91ac-776767d6102f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9582c6ad791fee2ee8c393be81fbdbd3d199c4e6b209271220ca44dd3486e32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9582c6ad791fee2ee8c393be81fbdbd3d199c4e6b209271220ca44dd3486e32\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:36:42Z\\\",\\\"message\\\":\\\"ransact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-canary/ingress-canary]} name:Service_openshift-ingress-canary/ingress-canary_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.34:8443: 10.217.5.34:8888:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7715118b-bb1b-400a-803e-7ab2cc3eeec0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 07:36:42.033675 6240 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-5fm7w\\\\nI0131 07:36:42.035113 6240 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-5fm7w\\\\nI0131 07:36:42.035129 6240 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-5fm7w in node crc\\\\nI0131 07:36:42.035141 6240 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-5fm7w after 0 failed attempt(s)\\\\nI0131 07:36:42.035152 6240 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-5fm7w\\\\nI0131 07:36:42.0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qvwnb_openshift-ovn-kubernetes(5db04412-b62f-417c-91ac-776767d6102f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvwnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:49Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.105146 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.105235 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.105248 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.105267 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.105279 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:49Z","lastTransitionTime":"2026-01-31T07:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.105441 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6lwnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b53fa37-2f8b-49d4-bd96-2bfe008beba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4800822b9eb7af18af4eaf5699b582b98e900e6a645e17ec2a5a09757777ad75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm9x7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6lwnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:49Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.122686 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63a72bdc-ae2c-4ce3-bad4-877f01e2b370\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b561e0c482987fdc1c08682d00bf4c933f12313569408446d6c8a603c3556ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3caf57a9650c133c55f50d69726a83d40040bc7f173d536444d89f99854942ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30b86aff0d4af7948ead36a9ed2f4fa0fb5888de8ecdfa6eb6ffc1867030babe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf2c3303222b7cdaf3b5aa2c1847ff9e097579edeb4f7333079c340cc95ef46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975a5a5c09f940e9a57b6c6df02fdf725740fb21d70046ad5bbe3f24dec7a0bd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"iserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 07:36:22.700868 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:36:22.705249 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2348209780/tls.crt::/tmp/serving-cert-2348209780/tls.key\\\\\\\"\\\\nI0131 07:36:28.522043 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 07:36:28.528635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 07:36:28.528685 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 07:36:28.528725 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 07:36:28.528736 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 07:36:28.539744 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 07:36:28.539766 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539773 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 07:36:28.539783 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 07:36:28.539787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 07:36:28.539791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 07:36:28.539944 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 07:36:28.545260 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0131 07:36:28.545358 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd3634cc3552a315ee1ef8114db096d456b7a97a92e68ab406ceccdf1eb57f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:49Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.208671 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.208733 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.208750 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.208774 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.208791 4826 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:49Z","lastTransitionTime":"2026-01-31T07:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.312319 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.312399 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.312423 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.312453 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.312474 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:49Z","lastTransitionTime":"2026-01-31T07:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.415586 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.415652 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.415669 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.415692 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.415707 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:49Z","lastTransitionTime":"2026-01-31T07:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.518824 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.518877 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.518889 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.518907 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.518919 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:49Z","lastTransitionTime":"2026-01-31T07:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.622000 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.622081 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.622110 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.622145 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.622169 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:49Z","lastTransitionTime":"2026-01-31T07:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.724952 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.725052 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.725071 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.725096 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.725112 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:49Z","lastTransitionTime":"2026-01-31T07:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.772791 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 07:31:01.398366379 +0000 UTC Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.808724 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.808814 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:36:49 crc kubenswrapper[4826]: E0131 07:36:49.808890 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:36:49 crc kubenswrapper[4826]: E0131 07:36:49.809008 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrw7j" podUID="251ad51e-c383-4684-bfdb-2b9ce8098cc6" Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.828187 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.828299 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.828328 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.828396 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.828414 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:49Z","lastTransitionTime":"2026-01-31T07:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.932478 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.932545 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.932565 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.932590 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:49 crc kubenswrapper[4826]: I0131 07:36:49.932608 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:49Z","lastTransitionTime":"2026-01-31T07:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:50 crc kubenswrapper[4826]: I0131 07:36:50.036217 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:50 crc kubenswrapper[4826]: I0131 07:36:50.036275 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:50 crc kubenswrapper[4826]: I0131 07:36:50.036293 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:50 crc kubenswrapper[4826]: I0131 07:36:50.036319 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:50 crc kubenswrapper[4826]: I0131 07:36:50.036335 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:50Z","lastTransitionTime":"2026-01-31T07:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:50 crc kubenswrapper[4826]: I0131 07:36:50.155927 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:50 crc kubenswrapper[4826]: I0131 07:36:50.155956 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:50 crc kubenswrapper[4826]: I0131 07:36:50.155979 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:50 crc kubenswrapper[4826]: I0131 07:36:50.155991 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:50 crc kubenswrapper[4826]: I0131 07:36:50.156000 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:50Z","lastTransitionTime":"2026-01-31T07:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:50 crc kubenswrapper[4826]: I0131 07:36:50.258184 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:50 crc kubenswrapper[4826]: I0131 07:36:50.258253 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:50 crc kubenswrapper[4826]: I0131 07:36:50.258270 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:50 crc kubenswrapper[4826]: I0131 07:36:50.258293 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:50 crc kubenswrapper[4826]: I0131 07:36:50.258309 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:50Z","lastTransitionTime":"2026-01-31T07:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:50 crc kubenswrapper[4826]: I0131 07:36:50.361099 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:50 crc kubenswrapper[4826]: I0131 07:36:50.361179 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:50 crc kubenswrapper[4826]: I0131 07:36:50.361208 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:50 crc kubenswrapper[4826]: I0131 07:36:50.361239 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:50 crc kubenswrapper[4826]: I0131 07:36:50.361264 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:50Z","lastTransitionTime":"2026-01-31T07:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:50 crc kubenswrapper[4826]: I0131 07:36:50.464121 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:50 crc kubenswrapper[4826]: I0131 07:36:50.464206 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:50 crc kubenswrapper[4826]: I0131 07:36:50.464240 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:50 crc kubenswrapper[4826]: I0131 07:36:50.464271 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:50 crc kubenswrapper[4826]: I0131 07:36:50.464296 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:50Z","lastTransitionTime":"2026-01-31T07:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:50 crc kubenswrapper[4826]: I0131 07:36:50.566837 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:50 crc kubenswrapper[4826]: I0131 07:36:50.566886 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:50 crc kubenswrapper[4826]: I0131 07:36:50.566906 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:50 crc kubenswrapper[4826]: I0131 07:36:50.566929 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:50 crc kubenswrapper[4826]: I0131 07:36:50.566947 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:50Z","lastTransitionTime":"2026-01-31T07:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:50 crc kubenswrapper[4826]: I0131 07:36:50.671043 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:50 crc kubenswrapper[4826]: I0131 07:36:50.671096 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:50 crc kubenswrapper[4826]: I0131 07:36:50.671112 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:50 crc kubenswrapper[4826]: I0131 07:36:50.671133 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:50 crc kubenswrapper[4826]: I0131 07:36:50.671144 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:50Z","lastTransitionTime":"2026-01-31T07:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:50 crc kubenswrapper[4826]: I0131 07:36:50.773247 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 16:24:33.366235761 +0000 UTC Jan 31 07:36:50 crc kubenswrapper[4826]: I0131 07:36:50.774017 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:50 crc kubenswrapper[4826]: I0131 07:36:50.774057 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:50 crc kubenswrapper[4826]: I0131 07:36:50.774071 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:50 crc kubenswrapper[4826]: I0131 07:36:50.774088 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:50 crc kubenswrapper[4826]: I0131 07:36:50.774101 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:50Z","lastTransitionTime":"2026-01-31T07:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:50 crc kubenswrapper[4826]: I0131 07:36:50.809021 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:36:50 crc kubenswrapper[4826]: I0131 07:36:50.809063 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:36:50 crc kubenswrapper[4826]: E0131 07:36:50.809199 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:36:50 crc kubenswrapper[4826]: E0131 07:36:50.809325 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:36:50 crc kubenswrapper[4826]: I0131 07:36:50.877312 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:50 crc kubenswrapper[4826]: I0131 07:36:50.877377 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:50 crc kubenswrapper[4826]: I0131 07:36:50.877395 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:50 crc kubenswrapper[4826]: I0131 07:36:50.877418 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:50 crc kubenswrapper[4826]: I0131 07:36:50.877435 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:50Z","lastTransitionTime":"2026-01-31T07:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:50 crc kubenswrapper[4826]: I0131 07:36:50.981681 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:50 crc kubenswrapper[4826]: I0131 07:36:50.981748 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:50 crc kubenswrapper[4826]: I0131 07:36:50.981766 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:50 crc kubenswrapper[4826]: I0131 07:36:50.981800 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:50 crc kubenswrapper[4826]: I0131 07:36:50.981818 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:50Z","lastTransitionTime":"2026-01-31T07:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:51 crc kubenswrapper[4826]: I0131 07:36:51.084464 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:51 crc kubenswrapper[4826]: I0131 07:36:51.084517 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:51 crc kubenswrapper[4826]: I0131 07:36:51.084532 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:51 crc kubenswrapper[4826]: I0131 07:36:51.084551 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:51 crc kubenswrapper[4826]: I0131 07:36:51.084565 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:51Z","lastTransitionTime":"2026-01-31T07:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:51 crc kubenswrapper[4826]: I0131 07:36:51.187435 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:51 crc kubenswrapper[4826]: I0131 07:36:51.187501 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:51 crc kubenswrapper[4826]: I0131 07:36:51.187519 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:51 crc kubenswrapper[4826]: I0131 07:36:51.187540 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:51 crc kubenswrapper[4826]: I0131 07:36:51.187557 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:51Z","lastTransitionTime":"2026-01-31T07:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:51 crc kubenswrapper[4826]: I0131 07:36:51.290539 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:51 crc kubenswrapper[4826]: I0131 07:36:51.290601 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:51 crc kubenswrapper[4826]: I0131 07:36:51.290620 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:51 crc kubenswrapper[4826]: I0131 07:36:51.290642 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:51 crc kubenswrapper[4826]: I0131 07:36:51.290661 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:51Z","lastTransitionTime":"2026-01-31T07:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:51 crc kubenswrapper[4826]: I0131 07:36:51.393109 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:51 crc kubenswrapper[4826]: I0131 07:36:51.393217 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:51 crc kubenswrapper[4826]: I0131 07:36:51.393240 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:51 crc kubenswrapper[4826]: I0131 07:36:51.393273 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:51 crc kubenswrapper[4826]: I0131 07:36:51.393296 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:51Z","lastTransitionTime":"2026-01-31T07:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:51 crc kubenswrapper[4826]: I0131 07:36:51.499266 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:51 crc kubenswrapper[4826]: I0131 07:36:51.499497 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:51 crc kubenswrapper[4826]: I0131 07:36:51.499528 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:51 crc kubenswrapper[4826]: I0131 07:36:51.499557 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:51 crc kubenswrapper[4826]: I0131 07:36:51.499582 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:51Z","lastTransitionTime":"2026-01-31T07:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:51 crc kubenswrapper[4826]: I0131 07:36:51.602726 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:51 crc kubenswrapper[4826]: I0131 07:36:51.602783 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:51 crc kubenswrapper[4826]: I0131 07:36:51.602801 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:51 crc kubenswrapper[4826]: I0131 07:36:51.602827 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:51 crc kubenswrapper[4826]: I0131 07:36:51.602850 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:51Z","lastTransitionTime":"2026-01-31T07:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:51 crc kubenswrapper[4826]: I0131 07:36:51.705627 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:51 crc kubenswrapper[4826]: I0131 07:36:51.705690 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:51 crc kubenswrapper[4826]: I0131 07:36:51.705714 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:51 crc kubenswrapper[4826]: I0131 07:36:51.705741 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:51 crc kubenswrapper[4826]: I0131 07:36:51.705760 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:51Z","lastTransitionTime":"2026-01-31T07:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:51 crc kubenswrapper[4826]: I0131 07:36:51.773829 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 13:14:13.768558458 +0000 UTC Jan 31 07:36:51 crc kubenswrapper[4826]: I0131 07:36:51.790529 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/251ad51e-c383-4684-bfdb-2b9ce8098cc6-metrics-certs\") pod \"network-metrics-daemon-qrw7j\" (UID: \"251ad51e-c383-4684-bfdb-2b9ce8098cc6\") " pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:36:51 crc kubenswrapper[4826]: E0131 07:36:51.790694 4826 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 07:36:51 crc kubenswrapper[4826]: E0131 07:36:51.790777 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/251ad51e-c383-4684-bfdb-2b9ce8098cc6-metrics-certs podName:251ad51e-c383-4684-bfdb-2b9ce8098cc6 nodeName:}" failed. No retries permitted until 2026-01-31 07:36:59.79075306 +0000 UTC m=+51.644639449 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/251ad51e-c383-4684-bfdb-2b9ce8098cc6-metrics-certs") pod "network-metrics-daemon-qrw7j" (UID: "251ad51e-c383-4684-bfdb-2b9ce8098cc6") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 07:36:51 crc kubenswrapper[4826]: I0131 07:36:51.808302 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:36:51 crc kubenswrapper[4826]: I0131 07:36:51.808321 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:36:51 crc kubenswrapper[4826]: E0131 07:36:51.808466 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:36:51 crc kubenswrapper[4826]: E0131 07:36:51.808616 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrw7j" podUID="251ad51e-c383-4684-bfdb-2b9ce8098cc6" Jan 31 07:36:51 crc kubenswrapper[4826]: I0131 07:36:51.808650 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:51 crc kubenswrapper[4826]: I0131 07:36:51.808674 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:51 crc kubenswrapper[4826]: I0131 07:36:51.808685 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:51 crc kubenswrapper[4826]: I0131 07:36:51.808700 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:51 crc kubenswrapper[4826]: I0131 07:36:51.808711 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:51Z","lastTransitionTime":"2026-01-31T07:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:51 crc kubenswrapper[4826]: I0131 07:36:51.910949 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:51 crc kubenswrapper[4826]: I0131 07:36:51.910994 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:51 crc kubenswrapper[4826]: I0131 07:36:51.911003 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:51 crc kubenswrapper[4826]: I0131 07:36:51.911015 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:51 crc kubenswrapper[4826]: I0131 07:36:51.911025 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:51Z","lastTransitionTime":"2026-01-31T07:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:52 crc kubenswrapper[4826]: I0131 07:36:52.013891 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:52 crc kubenswrapper[4826]: I0131 07:36:52.013952 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:52 crc kubenswrapper[4826]: I0131 07:36:52.013991 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:52 crc kubenswrapper[4826]: I0131 07:36:52.014010 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:52 crc kubenswrapper[4826]: I0131 07:36:52.014023 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:52Z","lastTransitionTime":"2026-01-31T07:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:52 crc kubenswrapper[4826]: I0131 07:36:52.117289 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:52 crc kubenswrapper[4826]: I0131 07:36:52.117350 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:52 crc kubenswrapper[4826]: I0131 07:36:52.117362 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:52 crc kubenswrapper[4826]: I0131 07:36:52.117383 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:52 crc kubenswrapper[4826]: I0131 07:36:52.117398 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:52Z","lastTransitionTime":"2026-01-31T07:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:52 crc kubenswrapper[4826]: I0131 07:36:52.220142 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:52 crc kubenswrapper[4826]: I0131 07:36:52.220213 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:52 crc kubenswrapper[4826]: I0131 07:36:52.220236 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:52 crc kubenswrapper[4826]: I0131 07:36:52.220267 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:52 crc kubenswrapper[4826]: I0131 07:36:52.220295 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:52Z","lastTransitionTime":"2026-01-31T07:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:52 crc kubenswrapper[4826]: I0131 07:36:52.324057 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:52 crc kubenswrapper[4826]: I0131 07:36:52.324112 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:52 crc kubenswrapper[4826]: I0131 07:36:52.324125 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:52 crc kubenswrapper[4826]: I0131 07:36:52.324143 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:52 crc kubenswrapper[4826]: I0131 07:36:52.324155 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:52Z","lastTransitionTime":"2026-01-31T07:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:52 crc kubenswrapper[4826]: I0131 07:36:52.427048 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:52 crc kubenswrapper[4826]: I0131 07:36:52.427106 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:52 crc kubenswrapper[4826]: I0131 07:36:52.427125 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:52 crc kubenswrapper[4826]: I0131 07:36:52.427149 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:52 crc kubenswrapper[4826]: I0131 07:36:52.427169 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:52Z","lastTransitionTime":"2026-01-31T07:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:52 crc kubenswrapper[4826]: I0131 07:36:52.533078 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:52 crc kubenswrapper[4826]: I0131 07:36:52.533141 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:52 crc kubenswrapper[4826]: I0131 07:36:52.533158 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:52 crc kubenswrapper[4826]: I0131 07:36:52.533181 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:52 crc kubenswrapper[4826]: I0131 07:36:52.533197 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:52Z","lastTransitionTime":"2026-01-31T07:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:52 crc kubenswrapper[4826]: I0131 07:36:52.636612 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:52 crc kubenswrapper[4826]: I0131 07:36:52.636658 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:52 crc kubenswrapper[4826]: I0131 07:36:52.636683 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:52 crc kubenswrapper[4826]: I0131 07:36:52.636708 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:52 crc kubenswrapper[4826]: I0131 07:36:52.636727 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:52Z","lastTransitionTime":"2026-01-31T07:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:52 crc kubenswrapper[4826]: I0131 07:36:52.740592 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:52 crc kubenswrapper[4826]: I0131 07:36:52.740647 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:52 crc kubenswrapper[4826]: I0131 07:36:52.740666 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:52 crc kubenswrapper[4826]: I0131 07:36:52.740690 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:52 crc kubenswrapper[4826]: I0131 07:36:52.740710 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:52Z","lastTransitionTime":"2026-01-31T07:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:52 crc kubenswrapper[4826]: I0131 07:36:52.774206 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 14:29:33.508269387 +0000 UTC Jan 31 07:36:52 crc kubenswrapper[4826]: I0131 07:36:52.809273 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:36:52 crc kubenswrapper[4826]: I0131 07:36:52.809337 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:36:52 crc kubenswrapper[4826]: E0131 07:36:52.809499 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:36:52 crc kubenswrapper[4826]: E0131 07:36:52.809602 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:36:52 crc kubenswrapper[4826]: I0131 07:36:52.842725 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:52 crc kubenswrapper[4826]: I0131 07:36:52.842781 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:52 crc kubenswrapper[4826]: I0131 07:36:52.842803 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:52 crc kubenswrapper[4826]: I0131 07:36:52.842831 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:52 crc kubenswrapper[4826]: I0131 07:36:52.842852 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:52Z","lastTransitionTime":"2026-01-31T07:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:52 crc kubenswrapper[4826]: I0131 07:36:52.945351 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:52 crc kubenswrapper[4826]: I0131 07:36:52.945392 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:52 crc kubenswrapper[4826]: I0131 07:36:52.945404 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:52 crc kubenswrapper[4826]: I0131 07:36:52.945420 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:52 crc kubenswrapper[4826]: I0131 07:36:52.945432 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:52Z","lastTransitionTime":"2026-01-31T07:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:53 crc kubenswrapper[4826]: I0131 07:36:53.049222 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:53 crc kubenswrapper[4826]: I0131 07:36:53.049295 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:53 crc kubenswrapper[4826]: I0131 07:36:53.049310 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:53 crc kubenswrapper[4826]: I0131 07:36:53.049336 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:53 crc kubenswrapper[4826]: I0131 07:36:53.049353 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:53Z","lastTransitionTime":"2026-01-31T07:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:53 crc kubenswrapper[4826]: I0131 07:36:53.152540 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:53 crc kubenswrapper[4826]: I0131 07:36:53.152619 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:53 crc kubenswrapper[4826]: I0131 07:36:53.152641 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:53 crc kubenswrapper[4826]: I0131 07:36:53.152672 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:53 crc kubenswrapper[4826]: I0131 07:36:53.152693 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:53Z","lastTransitionTime":"2026-01-31T07:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:53 crc kubenswrapper[4826]: I0131 07:36:53.256323 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:53 crc kubenswrapper[4826]: I0131 07:36:53.256380 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:53 crc kubenswrapper[4826]: I0131 07:36:53.256394 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:53 crc kubenswrapper[4826]: I0131 07:36:53.256417 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:53 crc kubenswrapper[4826]: I0131 07:36:53.256432 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:53Z","lastTransitionTime":"2026-01-31T07:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:53 crc kubenswrapper[4826]: I0131 07:36:53.360445 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:53 crc kubenswrapper[4826]: I0131 07:36:53.360496 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:53 crc kubenswrapper[4826]: I0131 07:36:53.360508 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:53 crc kubenswrapper[4826]: I0131 07:36:53.360527 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:53 crc kubenswrapper[4826]: I0131 07:36:53.360541 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:53Z","lastTransitionTime":"2026-01-31T07:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:53 crc kubenswrapper[4826]: I0131 07:36:53.463587 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:53 crc kubenswrapper[4826]: I0131 07:36:53.463638 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:53 crc kubenswrapper[4826]: I0131 07:36:53.463652 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:53 crc kubenswrapper[4826]: I0131 07:36:53.463671 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:53 crc kubenswrapper[4826]: I0131 07:36:53.463683 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:53Z","lastTransitionTime":"2026-01-31T07:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:53 crc kubenswrapper[4826]: I0131 07:36:53.566959 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:53 crc kubenswrapper[4826]: I0131 07:36:53.567072 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:53 crc kubenswrapper[4826]: I0131 07:36:53.567097 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:53 crc kubenswrapper[4826]: I0131 07:36:53.567129 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:53 crc kubenswrapper[4826]: I0131 07:36:53.567151 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:53Z","lastTransitionTime":"2026-01-31T07:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:53 crc kubenswrapper[4826]: I0131 07:36:53.670434 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:53 crc kubenswrapper[4826]: I0131 07:36:53.670480 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:53 crc kubenswrapper[4826]: I0131 07:36:53.670491 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:53 crc kubenswrapper[4826]: I0131 07:36:53.670507 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:53 crc kubenswrapper[4826]: I0131 07:36:53.670519 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:53Z","lastTransitionTime":"2026-01-31T07:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:53 crc kubenswrapper[4826]: I0131 07:36:53.773720 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:53 crc kubenswrapper[4826]: I0131 07:36:53.773791 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:53 crc kubenswrapper[4826]: I0131 07:36:53.773803 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:53 crc kubenswrapper[4826]: I0131 07:36:53.773822 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:53 crc kubenswrapper[4826]: I0131 07:36:53.773834 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:53Z","lastTransitionTime":"2026-01-31T07:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:53 crc kubenswrapper[4826]: I0131 07:36:53.774318 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 05:59:58.798805966 +0000 UTC Jan 31 07:36:53 crc kubenswrapper[4826]: I0131 07:36:53.807941 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:36:53 crc kubenswrapper[4826]: E0131 07:36:53.808145 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:36:53 crc kubenswrapper[4826]: I0131 07:36:53.807941 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:36:53 crc kubenswrapper[4826]: E0131 07:36:53.808271 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrw7j" podUID="251ad51e-c383-4684-bfdb-2b9ce8098cc6" Jan 31 07:36:53 crc kubenswrapper[4826]: I0131 07:36:53.877450 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:53 crc kubenswrapper[4826]: I0131 07:36:53.877520 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:53 crc kubenswrapper[4826]: I0131 07:36:53.877539 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:53 crc kubenswrapper[4826]: I0131 07:36:53.877563 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:53 crc kubenswrapper[4826]: I0131 07:36:53.877583 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:53Z","lastTransitionTime":"2026-01-31T07:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:53 crc kubenswrapper[4826]: I0131 07:36:53.981063 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:53 crc kubenswrapper[4826]: I0131 07:36:53.981126 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:53 crc kubenswrapper[4826]: I0131 07:36:53.981146 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:53 crc kubenswrapper[4826]: I0131 07:36:53.981172 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:53 crc kubenswrapper[4826]: I0131 07:36:53.981191 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:53Z","lastTransitionTime":"2026-01-31T07:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:54 crc kubenswrapper[4826]: I0131 07:36:54.084708 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:54 crc kubenswrapper[4826]: I0131 07:36:54.084784 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:54 crc kubenswrapper[4826]: I0131 07:36:54.084802 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:54 crc kubenswrapper[4826]: I0131 07:36:54.084833 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:54 crc kubenswrapper[4826]: I0131 07:36:54.084853 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:54Z","lastTransitionTime":"2026-01-31T07:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:54 crc kubenswrapper[4826]: I0131 07:36:54.188504 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:54 crc kubenswrapper[4826]: I0131 07:36:54.188569 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:54 crc kubenswrapper[4826]: I0131 07:36:54.188591 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:54 crc kubenswrapper[4826]: I0131 07:36:54.188621 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:54 crc kubenswrapper[4826]: I0131 07:36:54.188644 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:54Z","lastTransitionTime":"2026-01-31T07:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:54 crc kubenswrapper[4826]: I0131 07:36:54.290854 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:54 crc kubenswrapper[4826]: I0131 07:36:54.290904 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:54 crc kubenswrapper[4826]: I0131 07:36:54.290916 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:54 crc kubenswrapper[4826]: I0131 07:36:54.290933 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:54 crc kubenswrapper[4826]: I0131 07:36:54.290949 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:54Z","lastTransitionTime":"2026-01-31T07:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:54 crc kubenswrapper[4826]: I0131 07:36:54.393937 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:54 crc kubenswrapper[4826]: I0131 07:36:54.394025 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:54 crc kubenswrapper[4826]: I0131 07:36:54.394037 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:54 crc kubenswrapper[4826]: I0131 07:36:54.394056 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:54 crc kubenswrapper[4826]: I0131 07:36:54.394093 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:54Z","lastTransitionTime":"2026-01-31T07:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:54 crc kubenswrapper[4826]: I0131 07:36:54.497008 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:54 crc kubenswrapper[4826]: I0131 07:36:54.497071 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:54 crc kubenswrapper[4826]: I0131 07:36:54.497081 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:54 crc kubenswrapper[4826]: I0131 07:36:54.497095 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:54 crc kubenswrapper[4826]: I0131 07:36:54.497104 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:54Z","lastTransitionTime":"2026-01-31T07:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:54 crc kubenswrapper[4826]: I0131 07:36:54.599396 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:54 crc kubenswrapper[4826]: I0131 07:36:54.599465 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:54 crc kubenswrapper[4826]: I0131 07:36:54.599479 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:54 crc kubenswrapper[4826]: I0131 07:36:54.599497 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:54 crc kubenswrapper[4826]: I0131 07:36:54.599509 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:54Z","lastTransitionTime":"2026-01-31T07:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:54 crc kubenswrapper[4826]: I0131 07:36:54.702654 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:54 crc kubenswrapper[4826]: I0131 07:36:54.702719 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:54 crc kubenswrapper[4826]: I0131 07:36:54.702733 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:54 crc kubenswrapper[4826]: I0131 07:36:54.702769 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:54 crc kubenswrapper[4826]: I0131 07:36:54.702785 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:54Z","lastTransitionTime":"2026-01-31T07:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:54 crc kubenswrapper[4826]: I0131 07:36:54.775133 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 14:04:28.063225772 +0000 UTC Jan 31 07:36:54 crc kubenswrapper[4826]: I0131 07:36:54.805561 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:54 crc kubenswrapper[4826]: I0131 07:36:54.805624 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:54 crc kubenswrapper[4826]: I0131 07:36:54.805642 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:54 crc kubenswrapper[4826]: I0131 07:36:54.805666 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:54 crc kubenswrapper[4826]: I0131 07:36:54.805686 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:54Z","lastTransitionTime":"2026-01-31T07:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:54 crc kubenswrapper[4826]: I0131 07:36:54.808880 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:36:54 crc kubenswrapper[4826]: I0131 07:36:54.808924 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:36:54 crc kubenswrapper[4826]: E0131 07:36:54.809234 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:36:54 crc kubenswrapper[4826]: E0131 07:36:54.809379 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:36:54 crc kubenswrapper[4826]: I0131 07:36:54.908476 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:54 crc kubenswrapper[4826]: I0131 07:36:54.908533 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:54 crc kubenswrapper[4826]: I0131 07:36:54.908546 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:54 crc kubenswrapper[4826]: I0131 07:36:54.908563 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:54 crc kubenswrapper[4826]: I0131 07:36:54.908575 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:54Z","lastTransitionTime":"2026-01-31T07:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.012918 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.013030 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.013055 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.013086 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.013118 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:55Z","lastTransitionTime":"2026-01-31T07:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.124763 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.124839 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.124862 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.124894 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.124916 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:55Z","lastTransitionTime":"2026-01-31T07:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.227685 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.227733 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.227750 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.227773 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.227790 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:55Z","lastTransitionTime":"2026-01-31T07:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.330398 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.330459 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.330490 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.330513 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.330529 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:55Z","lastTransitionTime":"2026-01-31T07:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.433170 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.433237 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.433257 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.433285 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.433307 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:55Z","lastTransitionTime":"2026-01-31T07:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.536176 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.536211 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.536221 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.536236 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.536246 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:55Z","lastTransitionTime":"2026-01-31T07:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.638799 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.638865 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.638876 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.638893 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.638906 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:55Z","lastTransitionTime":"2026-01-31T07:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.676402 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.676516 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.676543 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.676570 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.676591 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:55Z","lastTransitionTime":"2026-01-31T07:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:55 crc kubenswrapper[4826]: E0131 07:36:55.697353 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"410cfc83-ff74-4210-b833-727c4d6db644\\\",\\\"systemUUID\\\":\\\"5dc4a50b-5ade-4352-ba95-1ca9483f1f64\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:55Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.701962 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.702055 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.702077 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.702109 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.702136 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:55Z","lastTransitionTime":"2026-01-31T07:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:55 crc kubenswrapper[4826]: E0131 07:36:55.723515 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"410cfc83-ff74-4210-b833-727c4d6db644\\\",\\\"systemUUID\\\":\\\"5dc4a50b-5ade-4352-ba95-1ca9483f1f64\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:55Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.727645 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.727691 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.727703 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.727721 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.727734 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:55Z","lastTransitionTime":"2026-01-31T07:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:55 crc kubenswrapper[4826]: E0131 07:36:55.746124 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"410cfc83-ff74-4210-b833-727c4d6db644\\\",\\\"systemUUID\\\":\\\"5dc4a50b-5ade-4352-ba95-1ca9483f1f64\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:55Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.751694 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.751727 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.751738 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.751773 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.751787 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:55Z","lastTransitionTime":"2026-01-31T07:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:55 crc kubenswrapper[4826]: E0131 07:36:55.769991 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"410cfc83-ff74-4210-b833-727c4d6db644\\\",\\\"systemUUID\\\":\\\"5dc4a50b-5ade-4352-ba95-1ca9483f1f64\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:55Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.774878 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.774929 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.774941 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.774960 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.774992 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:55Z","lastTransitionTime":"2026-01-31T07:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.775607 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 12:51:18.449088815 +0000 UTC Jan 31 07:36:55 crc kubenswrapper[4826]: E0131 07:36:55.787118 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:36:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"410cfc83-ff74-4210-b833-727c4d6db644\\\",\\\"systemUUID\\\":\\\"5dc4a50b-5ade-4352-ba95-1ca9483f1f64\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:55Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:55 crc kubenswrapper[4826]: E0131 07:36:55.787229 4826 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.788806 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.788869 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.788880 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.788893 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.788923 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:55Z","lastTransitionTime":"2026-01-31T07:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.808274 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.808332 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:36:55 crc kubenswrapper[4826]: E0131 07:36:55.808377 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:36:55 crc kubenswrapper[4826]: E0131 07:36:55.808442 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrw7j" podUID="251ad51e-c383-4684-bfdb-2b9ce8098cc6" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.891246 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.891307 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.891324 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.891351 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.891369 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:55Z","lastTransitionTime":"2026-01-31T07:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.995220 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.995293 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.995319 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.995353 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:55 crc kubenswrapper[4826]: I0131 07:36:55.995371 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:55Z","lastTransitionTime":"2026-01-31T07:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:56 crc kubenswrapper[4826]: I0131 07:36:56.098893 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:56 crc kubenswrapper[4826]: I0131 07:36:56.098953 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:56 crc kubenswrapper[4826]: I0131 07:36:56.099026 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:56 crc kubenswrapper[4826]: I0131 07:36:56.099066 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:56 crc kubenswrapper[4826]: I0131 07:36:56.099086 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:56Z","lastTransitionTime":"2026-01-31T07:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:56 crc kubenswrapper[4826]: I0131 07:36:56.201594 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:56 crc kubenswrapper[4826]: I0131 07:36:56.201667 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:56 crc kubenswrapper[4826]: I0131 07:36:56.201688 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:56 crc kubenswrapper[4826]: I0131 07:36:56.201715 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:56 crc kubenswrapper[4826]: I0131 07:36:56.201733 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:56Z","lastTransitionTime":"2026-01-31T07:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:56 crc kubenswrapper[4826]: I0131 07:36:56.304764 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:56 crc kubenswrapper[4826]: I0131 07:36:56.304840 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:56 crc kubenswrapper[4826]: I0131 07:36:56.304862 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:56 crc kubenswrapper[4826]: I0131 07:36:56.304885 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:56 crc kubenswrapper[4826]: I0131 07:36:56.304903 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:56Z","lastTransitionTime":"2026-01-31T07:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:56 crc kubenswrapper[4826]: I0131 07:36:56.408032 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:56 crc kubenswrapper[4826]: I0131 07:36:56.408085 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:56 crc kubenswrapper[4826]: I0131 07:36:56.408101 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:56 crc kubenswrapper[4826]: I0131 07:36:56.408128 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:56 crc kubenswrapper[4826]: I0131 07:36:56.408142 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:56Z","lastTransitionTime":"2026-01-31T07:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:56 crc kubenswrapper[4826]: I0131 07:36:56.510630 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:56 crc kubenswrapper[4826]: I0131 07:36:56.510701 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:56 crc kubenswrapper[4826]: I0131 07:36:56.510725 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:56 crc kubenswrapper[4826]: I0131 07:36:56.510753 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:56 crc kubenswrapper[4826]: I0131 07:36:56.510778 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:56Z","lastTransitionTime":"2026-01-31T07:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:56 crc kubenswrapper[4826]: I0131 07:36:56.615862 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:56 crc kubenswrapper[4826]: I0131 07:36:56.615915 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:56 crc kubenswrapper[4826]: I0131 07:36:56.615925 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:56 crc kubenswrapper[4826]: I0131 07:36:56.615938 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:56 crc kubenswrapper[4826]: I0131 07:36:56.615951 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:56Z","lastTransitionTime":"2026-01-31T07:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:56 crc kubenswrapper[4826]: I0131 07:36:56.718582 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:56 crc kubenswrapper[4826]: I0131 07:36:56.718659 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:56 crc kubenswrapper[4826]: I0131 07:36:56.718683 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:56 crc kubenswrapper[4826]: I0131 07:36:56.718712 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:56 crc kubenswrapper[4826]: I0131 07:36:56.718734 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:56Z","lastTransitionTime":"2026-01-31T07:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:56 crc kubenswrapper[4826]: I0131 07:36:56.776126 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 18:34:29.780056796 +0000 UTC Jan 31 07:36:56 crc kubenswrapper[4826]: I0131 07:36:56.808531 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:36:56 crc kubenswrapper[4826]: E0131 07:36:56.808701 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:36:56 crc kubenswrapper[4826]: I0131 07:36:56.809219 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:36:56 crc kubenswrapper[4826]: E0131 07:36:56.809299 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:36:56 crc kubenswrapper[4826]: I0131 07:36:56.821024 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:56 crc kubenswrapper[4826]: I0131 07:36:56.821073 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:56 crc kubenswrapper[4826]: I0131 07:36:56.821088 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:56 crc kubenswrapper[4826]: I0131 07:36:56.821104 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:56 crc kubenswrapper[4826]: I0131 07:36:56.821115 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:56Z","lastTransitionTime":"2026-01-31T07:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:56 crc kubenswrapper[4826]: I0131 07:36:56.922906 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:56 crc kubenswrapper[4826]: I0131 07:36:56.923015 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:56 crc kubenswrapper[4826]: I0131 07:36:56.923041 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:56 crc kubenswrapper[4826]: I0131 07:36:56.923069 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:56 crc kubenswrapper[4826]: I0131 07:36:56.923091 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:56Z","lastTransitionTime":"2026-01-31T07:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:57 crc kubenswrapper[4826]: I0131 07:36:57.025865 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:57 crc kubenswrapper[4826]: I0131 07:36:57.025924 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:57 crc kubenswrapper[4826]: I0131 07:36:57.025944 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:57 crc kubenswrapper[4826]: I0131 07:36:57.025994 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:57 crc kubenswrapper[4826]: I0131 07:36:57.026011 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:57Z","lastTransitionTime":"2026-01-31T07:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:57 crc kubenswrapper[4826]: I0131 07:36:57.128322 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:57 crc kubenswrapper[4826]: I0131 07:36:57.128390 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:57 crc kubenswrapper[4826]: I0131 07:36:57.128428 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:57 crc kubenswrapper[4826]: I0131 07:36:57.128456 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:57 crc kubenswrapper[4826]: I0131 07:36:57.128477 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:57Z","lastTransitionTime":"2026-01-31T07:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:57 crc kubenswrapper[4826]: I0131 07:36:57.231112 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:57 crc kubenswrapper[4826]: I0131 07:36:57.231190 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:57 crc kubenswrapper[4826]: I0131 07:36:57.231215 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:57 crc kubenswrapper[4826]: I0131 07:36:57.231245 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:57 crc kubenswrapper[4826]: I0131 07:36:57.231266 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:57Z","lastTransitionTime":"2026-01-31T07:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:57 crc kubenswrapper[4826]: I0131 07:36:57.338621 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:57 crc kubenswrapper[4826]: I0131 07:36:57.338712 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:57 crc kubenswrapper[4826]: I0131 07:36:57.338744 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:57 crc kubenswrapper[4826]: I0131 07:36:57.338771 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:57 crc kubenswrapper[4826]: I0131 07:36:57.338792 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:57Z","lastTransitionTime":"2026-01-31T07:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:57 crc kubenswrapper[4826]: I0131 07:36:57.441727 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:57 crc kubenswrapper[4826]: I0131 07:36:57.441819 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:57 crc kubenswrapper[4826]: I0131 07:36:57.441843 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:57 crc kubenswrapper[4826]: I0131 07:36:57.441873 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:57 crc kubenswrapper[4826]: I0131 07:36:57.441894 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:57Z","lastTransitionTime":"2026-01-31T07:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:57 crc kubenswrapper[4826]: I0131 07:36:57.545218 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:57 crc kubenswrapper[4826]: I0131 07:36:57.545302 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:57 crc kubenswrapper[4826]: I0131 07:36:57.545327 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:57 crc kubenswrapper[4826]: I0131 07:36:57.545356 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:57 crc kubenswrapper[4826]: I0131 07:36:57.545373 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:57Z","lastTransitionTime":"2026-01-31T07:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:57 crc kubenswrapper[4826]: I0131 07:36:57.648110 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:57 crc kubenswrapper[4826]: I0131 07:36:57.648191 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:57 crc kubenswrapper[4826]: I0131 07:36:57.648213 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:57 crc kubenswrapper[4826]: I0131 07:36:57.648237 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:57 crc kubenswrapper[4826]: I0131 07:36:57.648254 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:57Z","lastTransitionTime":"2026-01-31T07:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:57 crc kubenswrapper[4826]: I0131 07:36:57.751227 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:57 crc kubenswrapper[4826]: I0131 07:36:57.751289 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:57 crc kubenswrapper[4826]: I0131 07:36:57.751306 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:57 crc kubenswrapper[4826]: I0131 07:36:57.751331 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:57 crc kubenswrapper[4826]: I0131 07:36:57.751350 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:57Z","lastTransitionTime":"2026-01-31T07:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:57 crc kubenswrapper[4826]: I0131 07:36:57.776275 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 11:26:43.889864319 +0000 UTC Jan 31 07:36:57 crc kubenswrapper[4826]: I0131 07:36:57.808633 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:36:57 crc kubenswrapper[4826]: I0131 07:36:57.808760 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:36:57 crc kubenswrapper[4826]: E0131 07:36:57.808799 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:36:57 crc kubenswrapper[4826]: E0131 07:36:57.809034 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrw7j" podUID="251ad51e-c383-4684-bfdb-2b9ce8098cc6" Jan 31 07:36:57 crc kubenswrapper[4826]: I0131 07:36:57.854344 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:57 crc kubenswrapper[4826]: I0131 07:36:57.854398 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:57 crc kubenswrapper[4826]: I0131 07:36:57.854415 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:57 crc kubenswrapper[4826]: I0131 07:36:57.854439 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:57 crc kubenswrapper[4826]: I0131 07:36:57.854461 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:57Z","lastTransitionTime":"2026-01-31T07:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:57 crc kubenswrapper[4826]: I0131 07:36:57.957335 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:57 crc kubenswrapper[4826]: I0131 07:36:57.957381 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:57 crc kubenswrapper[4826]: I0131 07:36:57.957395 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:57 crc kubenswrapper[4826]: I0131 07:36:57.957411 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:57 crc kubenswrapper[4826]: I0131 07:36:57.957422 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:57Z","lastTransitionTime":"2026-01-31T07:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.060473 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.060538 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.060555 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.060577 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.060595 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:58Z","lastTransitionTime":"2026-01-31T07:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.163158 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.163225 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.163244 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.163267 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.163286 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:58Z","lastTransitionTime":"2026-01-31T07:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.266125 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.266177 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.266190 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.266206 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.266218 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:58Z","lastTransitionTime":"2026-01-31T07:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.369548 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.369714 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.369732 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.369757 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.369775 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:58Z","lastTransitionTime":"2026-01-31T07:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.472721 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.472854 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.472919 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.472953 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.473029 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:58Z","lastTransitionTime":"2026-01-31T07:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.574772 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.574823 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.574834 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.574852 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.574865 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:58Z","lastTransitionTime":"2026-01-31T07:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.678676 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.678745 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.678763 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.678786 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.678806 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:58Z","lastTransitionTime":"2026-01-31T07:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.776566 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 04:28:21.667790395 +0000 UTC Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.781844 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.781886 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.781897 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.781914 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.781926 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:58Z","lastTransitionTime":"2026-01-31T07:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.808609 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:36:58 crc kubenswrapper[4826]: E0131 07:36:58.809249 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.809415 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.811259 4826 scope.go:117] "RemoveContainer" containerID="e9582c6ad791fee2ee8c393be81fbdbd3d199c4e6b209271220ca44dd3486e32" Jan 31 07:36:58 crc kubenswrapper[4826]: E0131 07:36:58.811479 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.831222 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:58Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.831617 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.843726 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.849463 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7xbj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32b69655-b895-4001-8f50-17a9d73056f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6e04fe556789c17beb1c110cda356f63730016f0d8f00d968ec9af1a10f658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6xcl\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15eef89d03854f4fb45924748ccf1c12133be03efd45374daa2cceafd673136a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6xcl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h7xbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:58Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.868537 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63a72bdc-ae2c-4ce3-bad4-877f01e2b370\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b561e0c482987fdc1c08682d00bf4c933f12313569408446d6c8a603c3556ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3caf57a9650c133c55f50d69726a83d40040bc7f173d536444d89f99854942ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30b86aff0d4af7948ead36a9ed2f4fa0fb5888de8ecdfa6eb6ffc1867030babe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf2c3303222b7cdaf3b5aa2c1847ff9e097579edeb4f7333079c340cc95ef46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975a5a5c09f940e9a57b6c6df02fdf725740fb21d70046ad5bbe3f24dec7a0bd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"iserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 07:36:22.700868 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:36:22.705249 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2348209780/tls.crt::/tmp/serving-cert-2348209780/tls.key\\\\\\\"\\\\nI0131 07:36:28.522043 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 07:36:28.528635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 07:36:28.528685 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 07:36:28.528725 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 07:36:28.528736 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 07:36:28.539744 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 07:36:28.539766 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539773 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 07:36:28.539783 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 07:36:28.539787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 07:36:28.539791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 07:36:28.539944 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 07:36:28.545260 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0131 07:36:28.545358 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd3634cc3552a315ee1ef8114db096d456b7a97a92e68ab406ceccdf1eb57f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:58Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.886055 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:58Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.886399 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.886452 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.886472 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.886493 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.886510 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:58Z","lastTransitionTime":"2026-01-31T07:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.901453 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:58Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.915199 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tp2c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d1fbcfd-bce9-4f1c-bc64-d48b979a95d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91dfef0e05500b5bc7d60a1905667d2ddf67a56af60dc72a55d1b29223d6586a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5x55d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tp2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:58Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.937805 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5db04412-b62f-417c-91ac-776767d6102f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9582c6ad791fee2ee8c393be81fbdbd3d199c4e6b209271220ca44dd3486e32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9582c6ad791fee2ee8c393be81fbdbd3d199c4e6b209271220ca44dd3486e32\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:36:42Z\\\",\\\"message\\\":\\\"ransact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-canary/ingress-canary]} name:Service_openshift-ingress-canary/ingress-canary_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.34:8443: 10.217.5.34:8888:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7715118b-bb1b-400a-803e-7ab2cc3eeec0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 07:36:42.033675 6240 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-5fm7w\\\\nI0131 07:36:42.035113 6240 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-5fm7w\\\\nI0131 07:36:42.035129 6240 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-5fm7w in node crc\\\\nI0131 07:36:42.035141 6240 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-5fm7w after 0 failed attempt(s)\\\\nI0131 07:36:42.035152 6240 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-5fm7w\\\\nI0131 07:36:42.0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qvwnb_openshift-ovn-kubernetes(5db04412-b62f-417c-91ac-776767d6102f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvwnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:58Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.953156 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6lwnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b53fa37-2f8b-49d4-bd96-2bfe008beba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4800822b9eb7af18af4eaf5699b582b98e900e6a645e17ec2a5a09757777ad75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm9x7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6lwnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:58Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.970938 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37289ae623d633b8d9c41697e0022bac0ef40df6da836d43df3bb36e83a7a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:58Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.986455 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d1814f450832909bd5bf1c4749feef46b0919e997cb1d241c988f4ec81051d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e440dc1f7846af93bbe78f17e92d44d2f72ee0fc28c4c3b746ee3f67f0cd899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:58Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.989906 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.990165 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.990206 4826 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.990225 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:58 crc kubenswrapper[4826]: I0131 07:36:58.990240 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:58Z","lastTransitionTime":"2026-01-31T07:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.000483 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed10f53b-565a-4d14-a1d8-feabc15f08ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d04378db699fb51e197d67da8fbc728904d522724c2a02158c546497905d106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03515f366b860c28601b042a9a45c022c9a25aa15fab97b3e3698d3d7c1eb37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8v6ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:58Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.017607 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wtbb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b672fd90-a70c-4f27-b711-e58f269efccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dc6a9e1063a189fc3331ef1bc5bd05ac611295089d1da241ca07c529e63ab7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",
\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wtbb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:59Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.041839 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0580356-3fa0-4446-b20b-afc4164435c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e817cf54f4d9940c02491ebbe29ded4c0e8b3832c315b04eba99a9d36d267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b99d4cba5d569a977b7d05a7190498a4e34fefbf7dc44db8148ba398de0f5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://adf31702fe035db3a963262acd0c20802f8fd7230cf5c01cb578fa88179bd129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://161a31c4d5a9b58bef37d7177019e6eec0b53c956877c212337834672f4ac5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1796ef962408ccd32578b856d0339f68286f421ffbb915e0ae46c7a5989d1958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a
5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:59Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.058837 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"661bc240-bc34-44a0-bd2f-72708c4a5dc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc26e2f01400aeeb0956ae2aef52df0251350f193805892e534081b4cac248e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa95fe2bb0adc0053648f26282e011bad0ea1327773688c954b8ffbf222d6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4124eac7bdc757ef7c1f68b28fb1f54a56cda06e9b781a026277efe04172f8e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e986869735ddfd85e88828d0123e35bbc507c1d31efba8c420632539ff4a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:59Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.075085 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d2b49940601c069c38a05027bd5e801dd377d4aed46a502c192ca9941468a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-31T07:36:59Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.093823 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.093863 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.093878 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.093893 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.093905 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:59Z","lastTransitionTime":"2026-01-31T07:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.093905 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5fm7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baebdb6a236b3414ff5aac56d2f2c04503b7d59c18aa78ef295484087a866b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":
\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf9c0eb924928bb2a7752b55e6ccb0fccc514359751bdf295f4f88820ac0bddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf9c0eb924928bb2a7752b55e6ccb0fc
cc514359751bdf295f4f88820ac0bddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5fm7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:59Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.108770 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qrw7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"251ad51e-c383-4684-bfdb-2b9ce8098cc6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv8tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv8tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qrw7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:59Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.124881 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37289ae623d633b8d9c41697e0022bac0ef40df6da836d43df3bb36e83a7a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:59Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.143912 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d1814f450832909bd5bf1c4749feef46b0919e997cb1d241c988f4ec81051d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e440dc1f7846af93bbe78f17e92d44d2f72ee0fc28c4c3b746ee3f67f0cd899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:59Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.164419 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed10f53b-565a-4d14-a1d8-feabc15f08ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d04378db699fb51e197d67da8fbc728904d522724c2a02158c546497905d106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03515f366b860c28601b042a9a45c022c9a25aa15fab97b3e3698d3d7c1eb37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8v6ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:59Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.187595 4826 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-wtbb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b672fd90-a70c-4f27-b711-e58f269efccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dc6a9e1063a189fc3331ef1bc5bd05ac611295089d1da241ca07c529e63ab7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-wtbb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:59Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.196031 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.196067 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.196077 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.196090 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.196098 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:59Z","lastTransitionTime":"2026-01-31T07:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.209400 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qvwnb_5db04412-b62f-417c-91ac-776767d6102f/ovnkube-controller/1.log" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.211722 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" event={"ID":"5db04412-b62f-417c-91ac-776767d6102f","Type":"ContainerStarted","Data":"ccf6d3312839b88263ec4c59e54aa012ea0ae8e154421339e6afa05bbdfda85f"} Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.212275 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.212854 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0580356-3fa0-4446-b20b-afc4164435c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e817cf54f4d9940c02491ebbe29ded4c0e8b3832c315b04eba99a9d36d267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b99d4cba5d569a977b7d05a7190498a4e34fefbf7dc44db8148ba398de0f5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://adf31702fe035db3a963262acd0c20802f8fd7230cf5c01cb578fa88179bd129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://161a31c4d5a9b58bef37d7177019e6eec0b53c9
56877c212337834672f4ac5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1796ef962408ccd32578b856d0339f68286f421ffbb915e0ae46c7a5989d1958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:59Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.231227 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"661bc240-bc34-44a0-bd2f-72708c4a5dc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc26e2f01400aeeb0956ae2aef52df0251350f193805892e534081b4cac248e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa95fe2bb0adc0053648f26282e011bad0ea1327773688c954b8ffbf222d6e5\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4124eac7bdc757ef7c1f68b28fb1f54a56cda06e9b781a026277efe04172f8e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e986869735ddfd85e88828d0123e35bbc507c1d31efba8c420632539ff4a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:59Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.255692 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d2b49940601c069c38a05027bd5e801dd377d4aed46a502c192ca9941468a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:59Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.285009 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5fm7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baebdb6a236b3414ff5aac56d2f2c04503b7d59c18aa78ef295484087a866b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf9c0eb924928bb2a7752b55e6ccb0fccc514359751bdf295f4f88820ac0bddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf9c0eb924928bb2a7752b55e6ccb0fccc514359751bdf295f4f88820ac0bddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5fm7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:59Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.298744 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.298791 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:59 crc 
kubenswrapper[4826]: I0131 07:36:59.298803 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.298817 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.298827 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:59Z","lastTransitionTime":"2026-01-31T07:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.299695 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qrw7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"251ad51e-c383-4684-bfdb-2b9ce8098cc6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv8tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv8tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qrw7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:59Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.317529 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:59Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.328451 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7xbj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32b69655-b895-4001-8f50-17a9d73056f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6e04fe556789c17beb1c110cda356f63730016f0d8f00d968ec9af1a10f658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6xcl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15eef89d03854f4fb45924748ccf1c12133be03efd45374daa2cceafd673136a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6xcl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h7xbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:59Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.345149 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5db04412-b62f-417c-91ac-776767d6102f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9582c6ad791fee2ee8c393be81fbdbd3d199c4e
6b209271220ca44dd3486e32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9582c6ad791fee2ee8c393be81fbdbd3d199c4e6b209271220ca44dd3486e32\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:36:42Z\\\",\\\"message\\\":\\\"ransact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-canary/ingress-canary]} name:Service_openshift-ingress-canary/ingress-canary_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.34:8443: 10.217.5.34:8888:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7715118b-bb1b-400a-803e-7ab2cc3eeec0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 07:36:42.033675 6240 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-5fm7w\\\\nI0131 07:36:42.035113 6240 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-5fm7w\\\\nI0131 07:36:42.035129 6240 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-5fm7w in node crc\\\\nI0131 07:36:42.035141 6240 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-5fm7w after 0 failed attempt(s)\\\\nI0131 07:36:42.035152 6240 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-5fm7w\\\\nI0131 07:36:42.0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qvwnb_openshift-ovn-kubernetes(5db04412-b62f-417c-91ac-776767d6102f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvwnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:59Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.354173 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6lwnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b53fa37-2f8b-49d4-bd96-2bfe008beba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4800822b9eb7af18af4eaf5699b582b98e900e6a645e17ec2a5a09757777ad75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm9x7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6lwnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:59Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.367128 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63a72bdc-ae2c-4ce3-bad4-877f01e2b370\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b561e0c482987fdc1c08682d00bf4c933f12313569408446d6c8a603c3556ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3caf57a9650c133c55f50d69726a83d40040bc7f173d536444d89f99854942ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30b86aff0d4af7948ead36a9ed2f4fa0fb5888de8ecdfa6eb6ffc1867030babe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-
apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf2c3303222b7cdaf3b5aa2c1847ff9e097579edeb4f7333079c340cc95ef46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975a5a5c09f940e9a57b6c6df02fdf725740fb21d70046ad5bbe3f24dec7a0bd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"iserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 07:36:22.700868 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:36:22.705249 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2348209780/tls.crt::/tmp/serving-cert-2348209780/tls.key\\\\\\\"\\\\nI0131 07:36:28.522043 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 07:36:28.528635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 07:36:28.528685 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 07:36:28.528725 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 07:36:28.528736 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 07:36:28.539744 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 07:36:28.539766 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539773 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 07:36:28.539783 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 07:36:28.539787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 07:36:28.539791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 07:36:28.539944 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 07:36:28.545260 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0131 07:36:28.545358 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd3634cc3552a315ee1ef8114db096d456b7a97a92e68ab406ceccdf1eb57f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:59Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.379115 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0a4082a-cdda-4613-8e8d-bd97c3820759\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edb15aeef1103cdf26c3e72bac394815e4ea864cccc8732f40a601af2fcd34b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d04dbf9901e3bd43ac93338643a45135bb89f2d03a5a3d1236289aee90a6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a09b3371dd08b059fbfd5a105f314ae92bdcc4804cd36d0709b4afae48842b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b6360543a81dea5540a5f02edc908d3b76daf5c4df5f8220215fb75f9504c5b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b6360543a81dea5540a5f02edc908d3b76daf5c4df5f8220215fb75f9504c5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:59Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.391983 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:59Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.401897 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.401947 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.401960 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.401998 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.402011 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:59Z","lastTransitionTime":"2026-01-31T07:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.406104 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:59Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.417360 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tp2c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d1fbcfd-bce9-4f1c-bc64-d48b979a95d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91dfef0e05500b5bc7d60a1905667d2ddf67a56af60dc72a55d1b29223d6586a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5x55d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tp2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:59Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.429404 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tp2c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d1fbcfd-bce9-4f1c-bc64-d48b979a95d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91dfef0e05500b5bc7d60a1905667d2ddf67a56af60dc72a55d1b29223d6586a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5x55d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tp2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:59Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.450206 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5db04412-b62f-417c-91ac-776767d6102f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf6d3312839b88263ec4c59e54aa012ea0ae8e154421339e6afa05bbdfda85f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9582c6ad791fee2ee8c393be81fbdbd3d199c4e6b209271220ca44dd3486e32\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:36:42Z\\\",\\\"message\\\":\\\"ransact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-canary/ingress-canary]} name:Service_openshift-ingress-canary/ingress-canary_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.34:8443: 10.217.5.34:8888:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7715118b-bb1b-400a-803e-7ab2cc3eeec0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 07:36:42.033675 6240 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-5fm7w\\\\nI0131 07:36:42.035113 6240 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-5fm7w\\\\nI0131 07:36:42.035129 6240 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-5fm7w in node crc\\\\nI0131 07:36:42.035141 6240 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-5fm7w after 0 failed attempt(s)\\\\nI0131 07:36:42.035152 6240 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-5fm7w\\\\nI0131 
07:36:42.0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[
{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvwnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:59Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.459529 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6lwnf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b53fa37-2f8b-49d4-bd96-2bfe008beba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4800822b9eb7af18af4eaf5699b582b98e900e6a645e17ec2a5a09757777ad75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm9x7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6lwnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:59Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.475926 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63a72bdc-ae2c-4ce3-bad4-877f01e2b370\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b561e0c482987fdc1c08682d00bf4c933f12313569408446d6c8a603c3556ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3caf57a9650c133c55f50d69726a83d40040bc7f173d536444d89f99854942ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30b86aff0d4af7948ead36a9ed2f4fa0fb5888de8ecdfa6eb6ffc1867030babe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf2c3303222b7cdaf3b5aa2c1847ff9e097579edeb4f7333079c340cc95ef46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975a5a5c09f940e9a57b6c6df02fdf725740fb21d70046ad5bbe3f24dec7a0bd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"iserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 07:36:22.700868 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:36:22.705249 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2348209780/tls.crt::/tmp/serving-cert-2348209780/tls.key\\\\\\\"\\\\nI0131 07:36:28.522043 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 07:36:28.528635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 07:36:28.528685 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 07:36:28.528725 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 07:36:28.528736 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 07:36:28.539744 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 07:36:28.539766 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539773 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 07:36:28.539783 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 07:36:28.539787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 07:36:28.539791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 07:36:28.539944 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 07:36:28.545260 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0131 07:36:28.545358 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd3634cc3552a315ee1ef8114db096d456b7a97a92e68ab406ceccdf1eb57f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:59Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.488898 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0a4082a-cdda-4613-8e8d-bd97c3820759\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edb15aeef1103cdf26c3e72bac394815e4ea864cccc8732f40a601af2fcd34b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d04dbf9901e3bd43ac93338643a45135bb89f2d03a5a3d1236289aee90a6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a09b3371dd08b059fbfd5a105f314ae92bdcc4804cd36d0709b4afae48842b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b6360543a81dea5540a5f02edc908d3b76daf5c4df5f8220215fb75f9504c5b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b6360543a81dea5540a5f02edc908d3b76daf5c4df5f8220215fb75f9504c5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:59Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.500592 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:59Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.504644 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.504737 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.504757 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.504781 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.504799 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:59Z","lastTransitionTime":"2026-01-31T07:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.513127 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:59Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.525427 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37289ae623d633b8d9c41697e0022bac0ef40df6da836d43df3bb36e83a7a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:59Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.542919 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d1814f450832909bd5bf1c4749feef46b0919e997cb1d241c988f4ec81051d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e440dc1f7846af93bbe78f17e92d44d2f72ee0fc28c4c3b746ee3f67f0cd899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:59Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.556156 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed10f53b-565a-4d14-a1d8-feabc15f08ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d04378db699fb51e197d67da8fbc728904d522724c2a02158c546497905d106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03515f366b860c28601b042a9a45c022c9a25aa15fab97b3e3698d3d7c1eb37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8v6ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:59Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.570600 4826 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-wtbb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b672fd90-a70c-4f27-b711-e58f269efccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dc6a9e1063a189fc3331ef1bc5bd05ac611295089d1da241ca07c529e63ab7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-wtbb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:59Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.584166 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qrw7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"251ad51e-c383-4684-bfdb-2b9ce8098cc6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv8tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv8tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qrw7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-31T07:36:59Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.607333 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.607388 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.607426 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.607445 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.607456 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:59Z","lastTransitionTime":"2026-01-31T07:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.610884 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0580356-3fa0-4446-b20b-afc4164435c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e817cf54f4d9940c02491ebbe29ded4c0e8b3832c315b04eba99a9d36d267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b99d4cba5d569a977b7d05a7190498a4e34fefbf7dc44db8148ba398de0f5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://adf31702fe035db3a963262acd0c20802f8fd7230cf5c01cb578fa88179bd129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://161a31c4d5a9b58bef37d7177019e6eec0b53c956877c212337834672f4ac5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1796ef962408ccd32578b856d0339f68286f421ffbb915e0ae46c7a5989d1958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e
9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:59Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.634069 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"661bc240-bc34-44a0-bd2f-72708c4a5dc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc26e2f01400aeeb0956ae2aef52df0251350f193805892e534081b4cac248e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa95fe2bb0adc0053648f26282e011bad0ea1327773688c954b8ffbf222d6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4124eac7bdc757ef7c1f68b28fb1f54a56cda06e9b781a026277efe04172f8e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e986869735ddfd85e88828d0123e35bbc507c1d31efba8c420632539ff4a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:59Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.645955 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d2b49940601c069c38a05027bd5e801dd377d4aed46a502c192ca9941468a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-31T07:36:59Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.669292 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5fm7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baebdb6a236b3414ff5aac56d2f2c04503b7d59c18aa78ef295484087a866b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e9
dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf9c0eb924928bb2a7752b55e6ccb0fccc514359751bdf295f4f88820ac0bddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf9c0eb924928bb2a7752b55e6ccb0fccc514359751bdf295f4f88820ac0bddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5fm7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:59Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.688370 4826 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:59Z is after 2025-08-24T17:21:41Z" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.702420 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7xbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32b69655-b895-4001-8f50-17a9d73056f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6e04fe556789c17beb1c110cda356f63730016f0d8f00d968ec9af1a10f658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6xcl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15eef89d03854f4fb45924748ccf1c12133be03efd45374daa2cceafd673136a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6xcl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h7xbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:36:59Z is after 2025-08-24T17:21:41Z" Jan 31 
07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.710513 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.710578 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.710595 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.710621 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.710638 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:59Z","lastTransitionTime":"2026-01-31T07:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.777177 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 23:12:36.801952392 +0000 UTC Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.808020 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.808146 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:36:59 crc kubenswrapper[4826]: E0131 07:36:59.808201 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrw7j" podUID="251ad51e-c383-4684-bfdb-2b9ce8098cc6" Jan 31 07:36:59 crc kubenswrapper[4826]: E0131 07:36:59.808406 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.815210 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.815281 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.815297 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.815322 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.815338 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:59Z","lastTransitionTime":"2026-01-31T07:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.878155 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/251ad51e-c383-4684-bfdb-2b9ce8098cc6-metrics-certs\") pod \"network-metrics-daemon-qrw7j\" (UID: \"251ad51e-c383-4684-bfdb-2b9ce8098cc6\") " pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:36:59 crc kubenswrapper[4826]: E0131 07:36:59.878399 4826 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 07:36:59 crc kubenswrapper[4826]: E0131 07:36:59.878502 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/251ad51e-c383-4684-bfdb-2b9ce8098cc6-metrics-certs podName:251ad51e-c383-4684-bfdb-2b9ce8098cc6 nodeName:}" failed. No retries permitted until 2026-01-31 07:37:15.878475891 +0000 UTC m=+67.732362290 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/251ad51e-c383-4684-bfdb-2b9ce8098cc6-metrics-certs") pod "network-metrics-daemon-qrw7j" (UID: "251ad51e-c383-4684-bfdb-2b9ce8098cc6") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.918304 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.918351 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.918366 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.918389 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:36:59 crc kubenswrapper[4826]: I0131 07:36:59.918402 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:36:59Z","lastTransitionTime":"2026-01-31T07:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.020519 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.020581 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.020598 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.020625 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.020644 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:00Z","lastTransitionTime":"2026-01-31T07:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.123309 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.123372 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.123388 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.123415 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.123432 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:00Z","lastTransitionTime":"2026-01-31T07:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.218055 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qvwnb_5db04412-b62f-417c-91ac-776767d6102f/ovnkube-controller/2.log" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.219130 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qvwnb_5db04412-b62f-417c-91ac-776767d6102f/ovnkube-controller/1.log" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.223005 4826 generic.go:334] "Generic (PLEG): container finished" podID="5db04412-b62f-417c-91ac-776767d6102f" containerID="ccf6d3312839b88263ec4c59e54aa012ea0ae8e154421339e6afa05bbdfda85f" exitCode=1 Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.223070 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" event={"ID":"5db04412-b62f-417c-91ac-776767d6102f","Type":"ContainerDied","Data":"ccf6d3312839b88263ec4c59e54aa012ea0ae8e154421339e6afa05bbdfda85f"} Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.223127 4826 scope.go:117] "RemoveContainer" containerID="e9582c6ad791fee2ee8c393be81fbdbd3d199c4e6b209271220ca44dd3486e32" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.224245 4826 scope.go:117] "RemoveContainer" containerID="ccf6d3312839b88263ec4c59e54aa012ea0ae8e154421339e6afa05bbdfda85f" Jan 31 07:37:00 crc kubenswrapper[4826]: E0131 07:37:00.224566 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qvwnb_openshift-ovn-kubernetes(5db04412-b62f-417c-91ac-776767d6102f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" podUID="5db04412-b62f-417c-91ac-776767d6102f" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.225416 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.225453 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.225469 4826 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.225518 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.225537 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:00Z","lastTransitionTime":"2026-01-31T07:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.242302 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wtbb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b672fd90-a70c-4f27-b711-e58f269efccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dc6a9e1063a189fc3331ef1bc5bd05ac611295089d1da241ca07c529e63ab7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wtbb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:00Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.259010 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37289ae623d633b8d9c41697e0022bac0ef40df6da836d43df3bb36e83a7a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:00Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.276942 4826 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d1814f450832909bd5bf1c4749feef46b0919e997cb1d241c988f4ec81051d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e440dc1f7846af93bbe78f17e92d44d2f72ee0fc28c4c3b746ee3f67f0cd899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:00Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.287892 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed10f53b-565a-4d14-a1d8-feabc15f08ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d04378db699fb51e197d67da8fbc728904d522724c2a02158c546497905d106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03515f366b860c28601b042a9a45c022c9a25aa15fab97b3e3698d3d7c1eb37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8v6ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:00Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.304115 4826 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5fm7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baebdb6a236b3414ff5aac56d2f2c04503b7d59c18aa78ef295484087a866b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf9c0eb924928bb2a7752b55e6ccb0fccc514359751bdf295f4f88820ac0bddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf9c0eb924928bb2a7752b55e6ccb0fccc514359751bdf295f4f88820ac0bddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5fm7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:00Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.313944 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qrw7j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"251ad51e-c383-4684-bfdb-2b9ce8098cc6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv8tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv8tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qrw7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:00Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.328324 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.328366 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.328375 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.328390 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.328404 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:00Z","lastTransitionTime":"2026-01-31T07:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.335651 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0580356-3fa0-4446-b20b-afc4164435c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e817cf54f4d9940c02491ebbe29ded4c0e8b3832c315b04eba99a9d36d267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b99d4cba5d569a977b7d05a7190498a4e34fefbf7dc44db8148ba398de0f5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\
\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://adf31702fe035db3a963262acd0c20802f8fd7230cf5c01cb578fa88179bd129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://161a31c4d5a9b58bef37d7177019e6eec0b53c956877c212337834672f4ac5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1796ef962408ccd32578b856d0339f68286f421ffbb915e0ae46c7a5989d1958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":
\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:00Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.350235 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"661bc240-bc34-44a0-bd2f-72708c4a5dc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc26e2f01400aeeb0956ae2aef52df0251350f193805892e534081b4cac248e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa95fe2bb0adc0053648f26282e011bad0ea1327773688c954b8ffbf222d6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4124eac7bdc757ef7c1f68b28fb1f54a56cda06e9b781a026277efe04172f8e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e986869735ddfd85e88828d0123e35bbc507c1d31efba8c420632539ff4a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:00Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.365390 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d2b49940601c069c38a05027bd5e801dd377d4aed46a502c192ca9941468a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-31T07:37:00Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.378790 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:00Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.390186 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7xbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32b69655-b895-4001-8f50-17a9d73056f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6e04fe556789c17beb1c110cda356f63730016f0d8f00d968ec9af1a10f658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6xcl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15eef89d03854f4fb45924748ccf1c12133be03efd45374daa2cceafd673136a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6xcl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h7xbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:00Z is after 2025-08-24T17:21:41Z" Jan 31 
07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.402340 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:00Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.412123 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tp2c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d1fbcfd-bce9-4f1c-bc64-d48b979a95d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91dfef0e05500b5bc7d60a1905667d2ddf67a56af60dc72a55d1b29223d6586a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5x55d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tp2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:00Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.430881 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.430922 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.430940 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.430988 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.431005 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:00Z","lastTransitionTime":"2026-01-31T07:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.439452 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5db04412-b62f-417c-91ac-776767d6102f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf6d3312839b88263ec4c59e54aa012ea0ae8e154421339e6afa05bbdfda85f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9582c6ad791fee2ee8c393be81fbdbd3d199c4e6b209271220ca44dd3486e32\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:36:42Z\\\",\\\"message\\\":\\\"ransact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-canary/ingress-canary]} name:Service_openshift-ingress-canary/ingress-canary_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.34:8443: 10.217.5.34:8888:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7715118b-bb1b-400a-803e-7ab2cc3eeec0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 07:36:42.033675 6240 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-5fm7w\\\\nI0131 07:36:42.035113 6240 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-5fm7w\\\\nI0131 07:36:42.035129 6240 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-5fm7w in node crc\\\\nI0131 07:36:42.035141 6240 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-5fm7w after 0 failed attempt(s)\\\\nI0131 07:36:42.035152 6240 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-5fm7w\\\\nI0131 
07:36:42.0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf6d3312839b88263ec4c59e54aa012ea0ae8e154421339e6afa05bbdfda85f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:36:59Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI0131 07:36:59.734419 6450 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 07:36:59.734455 6450 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 07:36:59.734461 6450 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 07:36:59.734474 6450 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 07:36:59.734513 6450 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 07:36:59.734518 6450 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 07:36:59.734548 6450 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 07:36:59.734563 6450 factory.go:656] Stopping watch factory\\\\nI0131 07:36:59.734576 6450 ovnkube.go:599] Stopped ovnkube\\\\nI0131 07:36:59.734618 6450 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 07:36:59.734629 6450 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 07:36:59.734637 6450 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 07:36:59.734644 6450 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 07:36:59.734652 6450 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 07:36:59.734659 6450 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 
0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvwnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:00Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.453178 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6lwnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b53fa37-2f8b-49d4-bd96-2bfe008beba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4800822b9eb7af18af4eaf5699b582b98e900e6a645e17ec2a5a09757777ad75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm9x7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6lwnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:00Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.469381 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63a72bdc-ae2c-4ce3-bad4-877f01e2b370\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b561e0c482987fdc1c08682d00bf4c933f12313569408446d6c8a603c3556ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3caf57a9650c133c55f50d69726a83d40040bc7f173d536444d89f99854942ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30b86aff0d4af7948ead36a9ed2f4fa0fb5888de8ecdfa6eb6ffc1867030babe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc47827
4c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf2c3303222b7cdaf3b5aa2c1847ff9e097579edeb4f7333079c340cc95ef46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975a5a5c09f940e9a57b6c6df02fdf725740fb21d70046ad5bbe3f24dec7a0bd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"iserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 07:36:22.700868 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:36:22.705249 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2348209780/tls.crt::/tmp/serving-cert-2348209780/tls.key\\\\\\\"\\\\nI0131 07:36:28.522043 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 07:36:28.528635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 07:36:28.528685 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 07:36:28.528725 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 07:36:28.528736 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 07:36:28.539744 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 07:36:28.539766 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539773 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 07:36:28.539783 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 07:36:28.539787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 07:36:28.539791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 07:36:28.539944 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 07:36:28.545260 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0131 07:36:28.545358 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd3634cc3552a315ee1ef8114db096d456b7a97a92e68ab406ceccdf1eb57f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:00Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.483766 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0a4082a-cdda-4613-8e8d-bd97c3820759\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edb15aeef1103cdf26c3e72bac394815e4ea864cccc8732f40a601af2fcd34b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d04dbf9901e3bd43ac93338643a45135bb89f2d03a5a3d1236289aee90a6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a09b3371dd08b059fbfd5a105f314ae92bdcc4804cd36d0709b4afae48842b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b6360543a81dea5540a5f02edc908d3b76daf5c4df5f8220215fb75f9504c5b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b6360543a81dea5540a5f02edc908d3b76daf5c4df5f8220215fb75f9504c5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:00Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.502379 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:00Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.533729 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.533774 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.533784 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.533800 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.533814 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:00Z","lastTransitionTime":"2026-01-31T07:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.636271 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.636326 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.636337 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.636353 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.636369 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:00Z","lastTransitionTime":"2026-01-31T07:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.690140 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:37:00 crc kubenswrapper[4826]: E0131 07:37:00.690401 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:37:32.690368219 +0000 UTC m=+84.544254588 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.739088 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.739140 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.739158 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.739182 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.739199 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:00Z","lastTransitionTime":"2026-01-31T07:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.778211 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 14:05:14.952464982 +0000 UTC Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.791153 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.791285 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.791364 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.791400 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:37:00 crc kubenswrapper[4826]: E0131 07:37:00.791510 4826 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 07:37:00 crc kubenswrapper[4826]: E0131 07:37:00.791582 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 07:37:32.791557957 +0000 UTC m=+84.645444346 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 07:37:00 crc kubenswrapper[4826]: E0131 07:37:00.791879 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 07:37:00 crc kubenswrapper[4826]: E0131 07:37:00.791906 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 07:37:00 crc kubenswrapper[4826]: E0131 07:37:00.791924 4826 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:37:00 crc kubenswrapper[4826]: E0131 07:37:00.792020 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 07:37:32.791953908 +0000 UTC m=+84.645840297 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:37:00 crc kubenswrapper[4826]: E0131 07:37:00.792678 4826 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 07:37:00 crc kubenswrapper[4826]: E0131 07:37:00.792705 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 07:37:00 crc kubenswrapper[4826]: E0131 07:37:00.792734 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 07:37:00 crc kubenswrapper[4826]: E0131 07:37:00.792747 4826 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:37:00 crc kubenswrapper[4826]: E0131 07:37:00.792788 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 07:37:32.792759051 +0000 UTC m=+84.646645450 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 07:37:00 crc kubenswrapper[4826]: E0131 07:37:00.792815 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 07:37:32.792803632 +0000 UTC m=+84.646690021 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.808457 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.808497 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:37:00 crc kubenswrapper[4826]: E0131 07:37:00.808616 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:37:00 crc kubenswrapper[4826]: E0131 07:37:00.808780 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.841911 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.842003 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.842025 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.842070 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.842097 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:00Z","lastTransitionTime":"2026-01-31T07:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.944805 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.944862 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.944879 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.944903 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:00 crc kubenswrapper[4826]: I0131 07:37:00.944923 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:00Z","lastTransitionTime":"2026-01-31T07:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.047930 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.048071 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.048095 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.048155 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.048178 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:01Z","lastTransitionTime":"2026-01-31T07:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.151898 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.151952 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.151981 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.152003 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.152015 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:01Z","lastTransitionTime":"2026-01-31T07:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.226983 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qvwnb_5db04412-b62f-417c-91ac-776767d6102f/ovnkube-controller/2.log" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.230480 4826 scope.go:117] "RemoveContainer" containerID="ccf6d3312839b88263ec4c59e54aa012ea0ae8e154421339e6afa05bbdfda85f" Jan 31 07:37:01 crc kubenswrapper[4826]: E0131 07:37:01.230668 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qvwnb_openshift-ovn-kubernetes(5db04412-b62f-417c-91ac-776767d6102f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" podUID="5db04412-b62f-417c-91ac-776767d6102f" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.242004 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:01Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.253862 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7xbj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32b69655-b895-4001-8f50-17a9d73056f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6e04fe556789c17beb1c110cda356f63730016f0d8f00d968ec9af1a10f658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6xcl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15eef89d03854f4fb45924748ccf1c12133be03efd45374daa2cceafd673136a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6xcl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h7xbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:01Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.254794 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.254843 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.254863 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.254886 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.254951 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:01Z","lastTransitionTime":"2026-01-31T07:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.269325 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0a4082a-cdda-4613-8e8d-bd97c3820759\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edb15aeef1103cdf26c3e72bac394815e4ea864cccc8732f40a601af2fcd34b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d04dbf9901e3bd43ac93338643a45135bb89f2d03a5a3d1236289aee90a6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a09b3371dd08b059fbfd5a105f314ae92bdcc4804cd36d0709b4afae48842b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b6360543a81dea5540a5f02edc908d3b76daf5c4df5f8220215fb75f9504c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b6360543a81dea5540a5f02edc908d3b76daf5c4df5f8220215fb75f9504c5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:01Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.281368 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:01Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.295616 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:01Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.308478 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tp2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d1fbcfd-bce9-4f1c-bc64-d48b979a95d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91dfef0e05500b5bc7d60a1905667d2ddf67a56af60dc72a55d1b29223d6586a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5x55d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tp2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T07:37:01Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.336174 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5db04412-b62f-417c-91ac-776767d6102f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf6d3312839b88263ec4c59e54aa012ea0ae8e154421339e6afa05bbdfda85f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf6d3312839b88263ec4c59e54aa012ea0ae8e154421339e6afa05bbdfda85f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:36:59Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI0131 07:36:59.734419 6450 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 07:36:59.734455 6450 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 07:36:59.734461 6450 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 07:36:59.734474 6450 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 07:36:59.734513 6450 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 07:36:59.734518 6450 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 07:36:59.734548 6450 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 07:36:59.734563 6450 factory.go:656] Stopping watch factory\\\\nI0131 07:36:59.734576 6450 ovnkube.go:599] Stopped ovnkube\\\\nI0131 07:36:59.734618 6450 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 07:36:59.734629 6450 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 07:36:59.734637 6450 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 07:36:59.734644 6450 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 07:36:59.734652 6450 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 07:36:59.734659 6450 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qvwnb_openshift-ovn-kubernetes(5db04412-b62f-417c-91ac-776767d6102f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvwnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:01Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.348113 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6lwnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b53fa37-2f8b-49d4-bd96-2bfe008beba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4800822b9eb7af18af4eaf5699b582b98e900e6a645e17ec2a5a09757777ad75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm9x7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6lwnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:01Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.356957 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.357006 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.357015 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.357029 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.357038 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:01Z","lastTransitionTime":"2026-01-31T07:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.365111 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63a72bdc-ae2c-4ce3-bad4-877f01e2b370\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b561e0c482987fdc1c08682d00bf4c933f12313569408446d6c8a603c3556ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3caf57a9650c133c55f50d69726a83d40040bc7f173d536444d89f99854942ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30b86aff0d4af7948ead36a9ed2f4fa0fb5888de8ecdfa6eb6ffc1867030babe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf2c3303222b7cdaf3b5aa2c1847ff9e097579edeb4f7333079c340cc95ef46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975a5a5c09f940e9a57b6c6df02fdf725740fb21d70046ad5bbe3f24dec7a0bd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"iserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 07:36:22.700868 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:36:22.705249 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2348209780/tls.crt::/tmp/serving-cert-2348209780/tls.key\\\\\\\"\\\\nI0131 07:36:28.522043 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 07:36:28.528635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 07:36:28.528685 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 07:36:28.528725 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 07:36:28.528736 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 07:36:28.539744 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 07:36:28.539766 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539773 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 07:36:28.539783 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 07:36:28.539787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 07:36:28.539791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 07:36:28.539944 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 07:36:28.545260 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0131 07:36:28.545358 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd3634cc3552a315ee1ef8114db096d456b7a97a92e68ab406ceccdf1eb57f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:01Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.376855 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d1814f450832909bd5bf1c4749feef46b0919e997cb1d241c988f4ec81051d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e440dc1f7846af93bbe78f17e92d44d2f72ee0fc28c4c3b746ee3f67f0cd899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:01Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.389502 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed10f53b-565a-4d14-a1d8-feabc15f08ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d04378db699fb51e197d67da8fbc728904d522724c2a02158c546497905d106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03515f366b860c28601b042a9a45c022c9a25aa15fab97b3e3698d3d7c1eb37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8v6ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:01Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.401575 4826 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-wtbb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b672fd90-a70c-4f27-b711-e58f269efccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dc6a9e1063a189fc3331ef1bc5bd05ac611295089d1da241ca07c529e63ab7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-wtbb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:01Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.415810 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37289ae623d633b8d9c41697e0022bac0ef40df6da836d43df3bb36e83a7a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:01Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.431636 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"661bc240-bc34-44a0-bd2f-72708c4a5dc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc26e2f01400aeeb0956ae2aef52df0251350f193805892e534081b4cac248e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa95fe2bb0adc0053648f26282e011bad0ea1327773688c954b8ffbf222d6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4124eac7bdc757ef7c1f68b28fb1f54a56cda06e9b781a026277efe04172f8e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e986869735ddfd85e88828d0123e35bbc507c1d31efba8c420632539ff4a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:01Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.444039 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d2b49940601c069c38a05027bd5e801dd377d4aed46a502c192ca9941468a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-31T07:37:01Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.461076 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.461134 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.461146 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.461163 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.461180 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:01Z","lastTransitionTime":"2026-01-31T07:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.464727 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5fm7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baebdb6a236b3414ff5aac56d2f2c04503b7d59c18aa78ef295484087a866b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":
\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf9c0eb924928bb2a7752b55e6ccb0fccc514359751bdf295f4f88820ac0bddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf9c0eb924928bb2a7752b55e6ccb0fc
cc514359751bdf295f4f88820ac0bddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5fm7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:01Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.476715 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qrw7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"251ad51e-c383-4684-bfdb-2b9ce8098cc6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv8tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv8tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qrw7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:01Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.508925 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0580356-3fa0-4446-b20b-afc4164435c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e817cf54f4d9940c02491ebbe29ded4c0e8b3832c315b04eba99a9d36d267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b99d4cba5d569a977b7d05a7190498a4e34fefbf7dc44db8148ba398de0f5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://adf31702fe035db3a963262acd0c20802f8fd7230cf5c01cb578fa88179bd129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://161a31c4d5a9b58bef37d7177019e6eec0b53c9
56877c212337834672f4ac5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1796ef962408ccd32578b856d0339f68286f421ffbb915e0ae46c7a5989d1958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:01Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.564219 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.564274 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.564292 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.564315 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.564367 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:01Z","lastTransitionTime":"2026-01-31T07:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.667511 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.667573 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.667590 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.667616 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.667634 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:01Z","lastTransitionTime":"2026-01-31T07:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.770505 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.770575 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.770593 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.770623 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.770648 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:01Z","lastTransitionTime":"2026-01-31T07:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.778994 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 20:58:11.420638572 +0000 UTC Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.808609 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.808644 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:37:01 crc kubenswrapper[4826]: E0131 07:37:01.808785 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qrw7j" podUID="251ad51e-c383-4684-bfdb-2b9ce8098cc6" Jan 31 07:37:01 crc kubenswrapper[4826]: E0131 07:37:01.808902 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.874295 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.874340 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.874353 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.874369 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.874382 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:01Z","lastTransitionTime":"2026-01-31T07:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.977696 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.977766 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.977782 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.977806 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:01 crc kubenswrapper[4826]: I0131 07:37:01.977824 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:01Z","lastTransitionTime":"2026-01-31T07:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:02 crc kubenswrapper[4826]: I0131 07:37:02.080650 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:02 crc kubenswrapper[4826]: I0131 07:37:02.080702 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:02 crc kubenswrapper[4826]: I0131 07:37:02.080723 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:02 crc kubenswrapper[4826]: I0131 07:37:02.080746 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:02 crc kubenswrapper[4826]: I0131 07:37:02.080764 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:02Z","lastTransitionTime":"2026-01-31T07:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:02 crc kubenswrapper[4826]: I0131 07:37:02.183558 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:02 crc kubenswrapper[4826]: I0131 07:37:02.183643 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:02 crc kubenswrapper[4826]: I0131 07:37:02.183695 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:02 crc kubenswrapper[4826]: I0131 07:37:02.183721 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:02 crc kubenswrapper[4826]: I0131 07:37:02.183743 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:02Z","lastTransitionTime":"2026-01-31T07:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:02 crc kubenswrapper[4826]: I0131 07:37:02.286525 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:02 crc kubenswrapper[4826]: I0131 07:37:02.286587 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:02 crc kubenswrapper[4826]: I0131 07:37:02.286605 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:02 crc kubenswrapper[4826]: I0131 07:37:02.286628 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:02 crc kubenswrapper[4826]: I0131 07:37:02.286646 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:02Z","lastTransitionTime":"2026-01-31T07:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:02 crc kubenswrapper[4826]: I0131 07:37:02.390297 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:02 crc kubenswrapper[4826]: I0131 07:37:02.390399 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:02 crc kubenswrapper[4826]: I0131 07:37:02.390418 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:02 crc kubenswrapper[4826]: I0131 07:37:02.390446 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:02 crc kubenswrapper[4826]: I0131 07:37:02.390462 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:02Z","lastTransitionTime":"2026-01-31T07:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:02 crc kubenswrapper[4826]: I0131 07:37:02.493232 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:02 crc kubenswrapper[4826]: I0131 07:37:02.493291 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:02 crc kubenswrapper[4826]: I0131 07:37:02.493310 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:02 crc kubenswrapper[4826]: I0131 07:37:02.493341 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:02 crc kubenswrapper[4826]: I0131 07:37:02.493365 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:02Z","lastTransitionTime":"2026-01-31T07:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:02 crc kubenswrapper[4826]: I0131 07:37:02.596865 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:02 crc kubenswrapper[4826]: I0131 07:37:02.596922 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:02 crc kubenswrapper[4826]: I0131 07:37:02.596939 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:02 crc kubenswrapper[4826]: I0131 07:37:02.596964 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:02 crc kubenswrapper[4826]: I0131 07:37:02.597013 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:02Z","lastTransitionTime":"2026-01-31T07:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:02 crc kubenswrapper[4826]: I0131 07:37:02.699665 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:02 crc kubenswrapper[4826]: I0131 07:37:02.699731 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:02 crc kubenswrapper[4826]: I0131 07:37:02.699751 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:02 crc kubenswrapper[4826]: I0131 07:37:02.699774 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:02 crc kubenswrapper[4826]: I0131 07:37:02.699805 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:02Z","lastTransitionTime":"2026-01-31T07:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:02 crc kubenswrapper[4826]: I0131 07:37:02.779541 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 23:08:38.106163706 +0000 UTC Jan 31 07:37:02 crc kubenswrapper[4826]: I0131 07:37:02.802681 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:02 crc kubenswrapper[4826]: I0131 07:37:02.802744 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:02 crc kubenswrapper[4826]: I0131 07:37:02.802761 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:02 crc kubenswrapper[4826]: I0131 07:37:02.802785 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:02 crc kubenswrapper[4826]: I0131 07:37:02.802802 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:02Z","lastTransitionTime":"2026-01-31T07:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:02 crc kubenswrapper[4826]: I0131 07:37:02.808179 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:37:02 crc kubenswrapper[4826]: I0131 07:37:02.808199 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:37:02 crc kubenswrapper[4826]: E0131 07:37:02.808365 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:37:02 crc kubenswrapper[4826]: E0131 07:37:02.808474 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:37:02 crc kubenswrapper[4826]: I0131 07:37:02.906843 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:02 crc kubenswrapper[4826]: I0131 07:37:02.906932 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:02 crc kubenswrapper[4826]: I0131 07:37:02.907021 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:02 crc kubenswrapper[4826]: I0131 07:37:02.907085 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:02 crc kubenswrapper[4826]: I0131 07:37:02.907113 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:02Z","lastTransitionTime":"2026-01-31T07:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:03 crc kubenswrapper[4826]: I0131 07:37:03.009809 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:03 crc kubenswrapper[4826]: I0131 07:37:03.009870 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:03 crc kubenswrapper[4826]: I0131 07:37:03.009881 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:03 crc kubenswrapper[4826]: I0131 07:37:03.009895 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:03 crc kubenswrapper[4826]: I0131 07:37:03.009905 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:03Z","lastTransitionTime":"2026-01-31T07:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:03 crc kubenswrapper[4826]: I0131 07:37:03.113326 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:03 crc kubenswrapper[4826]: I0131 07:37:03.113436 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:03 crc kubenswrapper[4826]: I0131 07:37:03.113489 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:03 crc kubenswrapper[4826]: I0131 07:37:03.113515 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:03 crc kubenswrapper[4826]: I0131 07:37:03.113533 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:03Z","lastTransitionTime":"2026-01-31T07:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:03 crc kubenswrapper[4826]: I0131 07:37:03.216728 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:03 crc kubenswrapper[4826]: I0131 07:37:03.216824 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:03 crc kubenswrapper[4826]: I0131 07:37:03.216850 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:03 crc kubenswrapper[4826]: I0131 07:37:03.216883 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:03 crc kubenswrapper[4826]: I0131 07:37:03.216908 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:03Z","lastTransitionTime":"2026-01-31T07:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:03 crc kubenswrapper[4826]: I0131 07:37:03.323520 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:03 crc kubenswrapper[4826]: I0131 07:37:03.323575 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:03 crc kubenswrapper[4826]: I0131 07:37:03.323584 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:03 crc kubenswrapper[4826]: I0131 07:37:03.323599 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:03 crc kubenswrapper[4826]: I0131 07:37:03.323609 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:03Z","lastTransitionTime":"2026-01-31T07:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:03 crc kubenswrapper[4826]: I0131 07:37:03.425940 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:03 crc kubenswrapper[4826]: I0131 07:37:03.426060 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:03 crc kubenswrapper[4826]: I0131 07:37:03.426078 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:03 crc kubenswrapper[4826]: I0131 07:37:03.426101 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:03 crc kubenswrapper[4826]: I0131 07:37:03.426119 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:03Z","lastTransitionTime":"2026-01-31T07:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:03 crc kubenswrapper[4826]: I0131 07:37:03.529687 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:03 crc kubenswrapper[4826]: I0131 07:37:03.529764 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:03 crc kubenswrapper[4826]: I0131 07:37:03.529789 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:03 crc kubenswrapper[4826]: I0131 07:37:03.529819 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:03 crc kubenswrapper[4826]: I0131 07:37:03.529839 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:03Z","lastTransitionTime":"2026-01-31T07:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:03 crc kubenswrapper[4826]: I0131 07:37:03.633002 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:03 crc kubenswrapper[4826]: I0131 07:37:03.633062 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:03 crc kubenswrapper[4826]: I0131 07:37:03.633079 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:03 crc kubenswrapper[4826]: I0131 07:37:03.633102 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:03 crc kubenswrapper[4826]: I0131 07:37:03.633121 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:03Z","lastTransitionTime":"2026-01-31T07:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:03 crc kubenswrapper[4826]: I0131 07:37:03.737042 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:03 crc kubenswrapper[4826]: I0131 07:37:03.737114 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:03 crc kubenswrapper[4826]: I0131 07:37:03.737133 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:03 crc kubenswrapper[4826]: I0131 07:37:03.737156 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:03 crc kubenswrapper[4826]: I0131 07:37:03.737173 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:03Z","lastTransitionTime":"2026-01-31T07:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:03 crc kubenswrapper[4826]: I0131 07:37:03.780041 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 09:15:08.482771683 +0000 UTC Jan 31 07:37:03 crc kubenswrapper[4826]: I0131 07:37:03.808660 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:37:03 crc kubenswrapper[4826]: I0131 07:37:03.808660 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:37:03 crc kubenswrapper[4826]: E0131 07:37:03.808852 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrw7j" podUID="251ad51e-c383-4684-bfdb-2b9ce8098cc6" Jan 31 07:37:03 crc kubenswrapper[4826]: E0131 07:37:03.808912 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:37:03 crc kubenswrapper[4826]: I0131 07:37:03.840284 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:03 crc kubenswrapper[4826]: I0131 07:37:03.840347 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:03 crc kubenswrapper[4826]: I0131 07:37:03.840364 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:03 crc kubenswrapper[4826]: I0131 07:37:03.840390 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:03 crc kubenswrapper[4826]: I0131 07:37:03.840411 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:03Z","lastTransitionTime":"2026-01-31T07:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:03 crc kubenswrapper[4826]: I0131 07:37:03.944116 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:03 crc kubenswrapper[4826]: I0131 07:37:03.944171 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:03 crc kubenswrapper[4826]: I0131 07:37:03.944188 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:03 crc kubenswrapper[4826]: I0131 07:37:03.944211 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:03 crc kubenswrapper[4826]: I0131 07:37:03.944230 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:03Z","lastTransitionTime":"2026-01-31T07:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:04 crc kubenswrapper[4826]: I0131 07:37:04.048112 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:04 crc kubenswrapper[4826]: I0131 07:37:04.048185 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:04 crc kubenswrapper[4826]: I0131 07:37:04.048210 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:04 crc kubenswrapper[4826]: I0131 07:37:04.048240 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:04 crc kubenswrapper[4826]: I0131 07:37:04.048263 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:04Z","lastTransitionTime":"2026-01-31T07:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:04 crc kubenswrapper[4826]: I0131 07:37:04.152136 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:04 crc kubenswrapper[4826]: I0131 07:37:04.152205 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:04 crc kubenswrapper[4826]: I0131 07:37:04.152224 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:04 crc kubenswrapper[4826]: I0131 07:37:04.152248 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:04 crc kubenswrapper[4826]: I0131 07:37:04.152270 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:04Z","lastTransitionTime":"2026-01-31T07:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:04 crc kubenswrapper[4826]: I0131 07:37:04.254740 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:04 crc kubenswrapper[4826]: I0131 07:37:04.254818 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:04 crc kubenswrapper[4826]: I0131 07:37:04.254863 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:04 crc kubenswrapper[4826]: I0131 07:37:04.254892 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:04 crc kubenswrapper[4826]: I0131 07:37:04.254914 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:04Z","lastTransitionTime":"2026-01-31T07:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:04 crc kubenswrapper[4826]: I0131 07:37:04.357602 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:04 crc kubenswrapper[4826]: I0131 07:37:04.357676 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:04 crc kubenswrapper[4826]: I0131 07:37:04.357701 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:04 crc kubenswrapper[4826]: I0131 07:37:04.357736 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:04 crc kubenswrapper[4826]: I0131 07:37:04.357761 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:04Z","lastTransitionTime":"2026-01-31T07:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:04 crc kubenswrapper[4826]: I0131 07:37:04.461636 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:04 crc kubenswrapper[4826]: I0131 07:37:04.461682 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:04 crc kubenswrapper[4826]: I0131 07:37:04.461695 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:04 crc kubenswrapper[4826]: I0131 07:37:04.461713 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:04 crc kubenswrapper[4826]: I0131 07:37:04.461728 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:04Z","lastTransitionTime":"2026-01-31T07:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:04 crc kubenswrapper[4826]: I0131 07:37:04.565137 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:04 crc kubenswrapper[4826]: I0131 07:37:04.565176 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:04 crc kubenswrapper[4826]: I0131 07:37:04.565188 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:04 crc kubenswrapper[4826]: I0131 07:37:04.565206 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:04 crc kubenswrapper[4826]: I0131 07:37:04.565216 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:04Z","lastTransitionTime":"2026-01-31T07:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:04 crc kubenswrapper[4826]: I0131 07:37:04.668353 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:04 crc kubenswrapper[4826]: I0131 07:37:04.668387 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:04 crc kubenswrapper[4826]: I0131 07:37:04.668396 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:04 crc kubenswrapper[4826]: I0131 07:37:04.668409 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:04 crc kubenswrapper[4826]: I0131 07:37:04.668417 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:04Z","lastTransitionTime":"2026-01-31T07:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:04 crc kubenswrapper[4826]: I0131 07:37:04.772238 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:04 crc kubenswrapper[4826]: I0131 07:37:04.772271 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:04 crc kubenswrapper[4826]: I0131 07:37:04.772280 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:04 crc kubenswrapper[4826]: I0131 07:37:04.772294 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:04 crc kubenswrapper[4826]: I0131 07:37:04.772303 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:04Z","lastTransitionTime":"2026-01-31T07:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:04 crc kubenswrapper[4826]: I0131 07:37:04.780730 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 08:44:33.517462525 +0000 UTC Jan 31 07:37:04 crc kubenswrapper[4826]: I0131 07:37:04.808480 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:37:04 crc kubenswrapper[4826]: I0131 07:37:04.808498 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:37:04 crc kubenswrapper[4826]: E0131 07:37:04.808725 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:37:04 crc kubenswrapper[4826]: E0131 07:37:04.808873 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:37:04 crc kubenswrapper[4826]: I0131 07:37:04.875523 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:04 crc kubenswrapper[4826]: I0131 07:37:04.875586 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:04 crc kubenswrapper[4826]: I0131 07:37:04.875606 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:04 crc kubenswrapper[4826]: I0131 07:37:04.875631 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:04 crc kubenswrapper[4826]: I0131 07:37:04.875651 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:04Z","lastTransitionTime":"2026-01-31T07:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:04 crc kubenswrapper[4826]: I0131 07:37:04.979581 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:04 crc kubenswrapper[4826]: I0131 07:37:04.979630 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:04 crc kubenswrapper[4826]: I0131 07:37:04.979642 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:04 crc kubenswrapper[4826]: I0131 07:37:04.979661 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:04 crc kubenswrapper[4826]: I0131 07:37:04.979674 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:04Z","lastTransitionTime":"2026-01-31T07:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.082361 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.082424 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.082442 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.082744 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.082796 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:05Z","lastTransitionTime":"2026-01-31T07:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.185069 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.185121 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.185132 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.185148 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.185164 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:05Z","lastTransitionTime":"2026-01-31T07:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.287423 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.287479 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.287491 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.287512 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.287525 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:05Z","lastTransitionTime":"2026-01-31T07:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.389932 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.390037 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.390058 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.390082 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.390099 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:05Z","lastTransitionTime":"2026-01-31T07:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.492296 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.492359 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.492376 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.492398 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.492415 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:05Z","lastTransitionTime":"2026-01-31T07:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.595525 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.595579 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.595594 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.595615 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.595629 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:05Z","lastTransitionTime":"2026-01-31T07:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.698697 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.698817 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.698834 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.698857 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.698879 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:05Z","lastTransitionTime":"2026-01-31T07:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.781699 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 23:19:43.584867132 +0000 UTC Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.788963 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.789025 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.789033 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.789048 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.789058 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:05Z","lastTransitionTime":"2026-01-31T07:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:05 crc kubenswrapper[4826]: E0131 07:37:05.800959 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"410cfc83-ff74-4210-b833-727c4d6db644\\\",\\\"systemUUID\\\":\\\"5dc4a50b-5ade-4352-ba95-1ca9483f1f64\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:05Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.808022 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:37:05 crc kubenswrapper[4826]: E0131 07:37:05.808165 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrw7j" podUID="251ad51e-c383-4684-bfdb-2b9ce8098cc6" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.808034 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:37:05 crc kubenswrapper[4826]: E0131 07:37:05.808403 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.808827 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.808855 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.808863 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.808876 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.808885 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:05Z","lastTransitionTime":"2026-01-31T07:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:05 crc kubenswrapper[4826]: E0131 07:37:05.825155 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"410cfc83-ff74-4210-b833-727c4d6db644\\\",\\\"systemUUID\\\":\\\"5dc4a50b-5ade-4352-ba95-1ca9483f1f64\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:05Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.829146 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.829196 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.829208 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.829223 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.829232 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:05Z","lastTransitionTime":"2026-01-31T07:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:05 crc kubenswrapper[4826]: E0131 07:37:05.842701 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"410cfc83-ff74-4210-b833-727c4d6db644\\\",\\\"systemUUID\\\":\\\"5dc4a50b-5ade-4352-ba95-1ca9483f1f64\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:05Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.846117 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.846212 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.846227 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.846244 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.846255 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:05Z","lastTransitionTime":"2026-01-31T07:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:05 crc kubenswrapper[4826]: E0131 07:37:05.857318 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"410cfc83-ff74-4210-b833-727c4d6db644\\\",\\\"systemUUID\\\":\\\"5dc4a50b-5ade-4352-ba95-1ca9483f1f64\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:05Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.860388 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.860431 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.860449 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.860464 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.860474 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:05Z","lastTransitionTime":"2026-01-31T07:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:05 crc kubenswrapper[4826]: E0131 07:37:05.870938 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"410cfc83-ff74-4210-b833-727c4d6db644\\\",\\\"systemUUID\\\":\\\"5dc4a50b-5ade-4352-ba95-1ca9483f1f64\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:05Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:05 crc kubenswrapper[4826]: E0131 07:37:05.871085 4826 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.872937 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.873042 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.873062 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.873085 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.873102 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:05Z","lastTransitionTime":"2026-01-31T07:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.976303 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.976377 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.976401 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.976430 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:05 crc kubenswrapper[4826]: I0131 07:37:05.976452 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:05Z","lastTransitionTime":"2026-01-31T07:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:06 crc kubenswrapper[4826]: I0131 07:37:06.079685 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:06 crc kubenswrapper[4826]: I0131 07:37:06.079746 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:06 crc kubenswrapper[4826]: I0131 07:37:06.079759 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:06 crc kubenswrapper[4826]: I0131 07:37:06.079777 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:06 crc kubenswrapper[4826]: I0131 07:37:06.079790 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:06Z","lastTransitionTime":"2026-01-31T07:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:06 crc kubenswrapper[4826]: I0131 07:37:06.182915 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:06 crc kubenswrapper[4826]: I0131 07:37:06.183021 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:06 crc kubenswrapper[4826]: I0131 07:37:06.183030 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:06 crc kubenswrapper[4826]: I0131 07:37:06.183044 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:06 crc kubenswrapper[4826]: I0131 07:37:06.183055 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:06Z","lastTransitionTime":"2026-01-31T07:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:06 crc kubenswrapper[4826]: I0131 07:37:06.286376 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:06 crc kubenswrapper[4826]: I0131 07:37:06.286420 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:06 crc kubenswrapper[4826]: I0131 07:37:06.286432 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:06 crc kubenswrapper[4826]: I0131 07:37:06.286448 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:06 crc kubenswrapper[4826]: I0131 07:37:06.286459 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:06Z","lastTransitionTime":"2026-01-31T07:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:06 crc kubenswrapper[4826]: I0131 07:37:06.389729 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:06 crc kubenswrapper[4826]: I0131 07:37:06.389776 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:06 crc kubenswrapper[4826]: I0131 07:37:06.389787 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:06 crc kubenswrapper[4826]: I0131 07:37:06.389801 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:06 crc kubenswrapper[4826]: I0131 07:37:06.389812 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:06Z","lastTransitionTime":"2026-01-31T07:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:06 crc kubenswrapper[4826]: I0131 07:37:06.492860 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:06 crc kubenswrapper[4826]: I0131 07:37:06.492909 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:06 crc kubenswrapper[4826]: I0131 07:37:06.492930 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:06 crc kubenswrapper[4826]: I0131 07:37:06.492958 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:06 crc kubenswrapper[4826]: I0131 07:37:06.493010 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:06Z","lastTransitionTime":"2026-01-31T07:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:06 crc kubenswrapper[4826]: I0131 07:37:06.595703 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:06 crc kubenswrapper[4826]: I0131 07:37:06.595757 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:06 crc kubenswrapper[4826]: I0131 07:37:06.595775 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:06 crc kubenswrapper[4826]: I0131 07:37:06.595798 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:06 crc kubenswrapper[4826]: I0131 07:37:06.595815 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:06Z","lastTransitionTime":"2026-01-31T07:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:06 crc kubenswrapper[4826]: I0131 07:37:06.699080 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:06 crc kubenswrapper[4826]: I0131 07:37:06.699124 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:06 crc kubenswrapper[4826]: I0131 07:37:06.699133 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:06 crc kubenswrapper[4826]: I0131 07:37:06.699146 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:06 crc kubenswrapper[4826]: I0131 07:37:06.699154 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:06Z","lastTransitionTime":"2026-01-31T07:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:06 crc kubenswrapper[4826]: I0131 07:37:06.782219 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 20:59:28.727638629 +0000 UTC Jan 31 07:37:06 crc kubenswrapper[4826]: I0131 07:37:06.801369 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:06 crc kubenswrapper[4826]: I0131 07:37:06.801404 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:06 crc kubenswrapper[4826]: I0131 07:37:06.801443 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:06 crc kubenswrapper[4826]: I0131 07:37:06.801460 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:06 crc kubenswrapper[4826]: I0131 07:37:06.801471 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:06Z","lastTransitionTime":"2026-01-31T07:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:06 crc kubenswrapper[4826]: I0131 07:37:06.808875 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:37:06 crc kubenswrapper[4826]: E0131 07:37:06.809005 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:37:06 crc kubenswrapper[4826]: I0131 07:37:06.808877 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:37:06 crc kubenswrapper[4826]: E0131 07:37:06.809085 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:37:06 crc kubenswrapper[4826]: I0131 07:37:06.904543 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:06 crc kubenswrapper[4826]: I0131 07:37:06.904611 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:06 crc kubenswrapper[4826]: I0131 07:37:06.904629 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:06 crc kubenswrapper[4826]: I0131 07:37:06.904652 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:06 crc kubenswrapper[4826]: I0131 07:37:06.904669 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:06Z","lastTransitionTime":"2026-01-31T07:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:07 crc kubenswrapper[4826]: I0131 07:37:07.007731 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:07 crc kubenswrapper[4826]: I0131 07:37:07.007783 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:07 crc kubenswrapper[4826]: I0131 07:37:07.007801 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:07 crc kubenswrapper[4826]: I0131 07:37:07.007837 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:07 crc kubenswrapper[4826]: I0131 07:37:07.007871 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:07Z","lastTransitionTime":"2026-01-31T07:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:07 crc kubenswrapper[4826]: I0131 07:37:07.110916 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:07 crc kubenswrapper[4826]: I0131 07:37:07.111026 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:07 crc kubenswrapper[4826]: I0131 07:37:07.111054 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:07 crc kubenswrapper[4826]: I0131 07:37:07.111083 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:07 crc kubenswrapper[4826]: I0131 07:37:07.111105 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:07Z","lastTransitionTime":"2026-01-31T07:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:07 crc kubenswrapper[4826]: I0131 07:37:07.214247 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:07 crc kubenswrapper[4826]: I0131 07:37:07.214587 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:07 crc kubenswrapper[4826]: I0131 07:37:07.214795 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:07 crc kubenswrapper[4826]: I0131 07:37:07.214957 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:07 crc kubenswrapper[4826]: I0131 07:37:07.215158 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:07Z","lastTransitionTime":"2026-01-31T07:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:07 crc kubenswrapper[4826]: I0131 07:37:07.318170 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:07 crc kubenswrapper[4826]: I0131 07:37:07.318231 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:07 crc kubenswrapper[4826]: I0131 07:37:07.318431 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:07 crc kubenswrapper[4826]: I0131 07:37:07.318452 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:07 crc kubenswrapper[4826]: I0131 07:37:07.318471 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:07Z","lastTransitionTime":"2026-01-31T07:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:07 crc kubenswrapper[4826]: I0131 07:37:07.421858 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:07 crc kubenswrapper[4826]: I0131 07:37:07.422009 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:07 crc kubenswrapper[4826]: I0131 07:37:07.422058 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:07 crc kubenswrapper[4826]: I0131 07:37:07.422094 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:07 crc kubenswrapper[4826]: I0131 07:37:07.422119 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:07Z","lastTransitionTime":"2026-01-31T07:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:07 crc kubenswrapper[4826]: I0131 07:37:07.525059 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:07 crc kubenswrapper[4826]: I0131 07:37:07.525128 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:07 crc kubenswrapper[4826]: I0131 07:37:07.525151 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:07 crc kubenswrapper[4826]: I0131 07:37:07.525182 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:07 crc kubenswrapper[4826]: I0131 07:37:07.525209 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:07Z","lastTransitionTime":"2026-01-31T07:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:07 crc kubenswrapper[4826]: I0131 07:37:07.627919 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:07 crc kubenswrapper[4826]: I0131 07:37:07.628016 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:07 crc kubenswrapper[4826]: I0131 07:37:07.628060 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:07 crc kubenswrapper[4826]: I0131 07:37:07.628096 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:07 crc kubenswrapper[4826]: I0131 07:37:07.628118 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:07Z","lastTransitionTime":"2026-01-31T07:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:07 crc kubenswrapper[4826]: I0131 07:37:07.730536 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:07 crc kubenswrapper[4826]: I0131 07:37:07.730662 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:07 crc kubenswrapper[4826]: I0131 07:37:07.730688 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:07 crc kubenswrapper[4826]: I0131 07:37:07.730719 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:07 crc kubenswrapper[4826]: I0131 07:37:07.730743 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:07Z","lastTransitionTime":"2026-01-31T07:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:07 crc kubenswrapper[4826]: I0131 07:37:07.783305 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 14:18:27.698963439 +0000 UTC Jan 31 07:37:07 crc kubenswrapper[4826]: I0131 07:37:07.807897 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:37:07 crc kubenswrapper[4826]: I0131 07:37:07.807933 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:37:07 crc kubenswrapper[4826]: E0131 07:37:07.808064 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrw7j" podUID="251ad51e-c383-4684-bfdb-2b9ce8098cc6" Jan 31 07:37:07 crc kubenswrapper[4826]: E0131 07:37:07.808168 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:37:07 crc kubenswrapper[4826]: I0131 07:37:07.833484 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:07 crc kubenswrapper[4826]: I0131 07:37:07.833536 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:07 crc kubenswrapper[4826]: I0131 07:37:07.833548 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:07 crc kubenswrapper[4826]: I0131 07:37:07.833565 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:07 crc kubenswrapper[4826]: I0131 07:37:07.833578 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:07Z","lastTransitionTime":"2026-01-31T07:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:07 crc kubenswrapper[4826]: I0131 07:37:07.935843 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:07 crc kubenswrapper[4826]: I0131 07:37:07.935889 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:07 crc kubenswrapper[4826]: I0131 07:37:07.935899 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:07 crc kubenswrapper[4826]: I0131 07:37:07.935914 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:07 crc kubenswrapper[4826]: I0131 07:37:07.935926 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:07Z","lastTransitionTime":"2026-01-31T07:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:08 crc kubenswrapper[4826]: I0131 07:37:08.038382 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:08 crc kubenswrapper[4826]: I0131 07:37:08.038448 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:08 crc kubenswrapper[4826]: I0131 07:37:08.038504 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:08 crc kubenswrapper[4826]: I0131 07:37:08.038538 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:08 crc kubenswrapper[4826]: I0131 07:37:08.038558 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:08Z","lastTransitionTime":"2026-01-31T07:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:08 crc kubenswrapper[4826]: I0131 07:37:08.677589 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:08 crc kubenswrapper[4826]: I0131 07:37:08.677656 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:08 crc kubenswrapper[4826]: I0131 07:37:08.677673 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:08 crc kubenswrapper[4826]: I0131 07:37:08.677701 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:08 crc kubenswrapper[4826]: I0131 07:37:08.677720 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:08Z","lastTransitionTime":"2026-01-31T07:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:08 crc kubenswrapper[4826]: I0131 07:37:08.781237 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:08 crc kubenswrapper[4826]: I0131 07:37:08.781323 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:08 crc kubenswrapper[4826]: I0131 07:37:08.781347 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:08 crc kubenswrapper[4826]: I0131 07:37:08.781382 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:08 crc kubenswrapper[4826]: I0131 07:37:08.781408 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:08Z","lastTransitionTime":"2026-01-31T07:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:08 crc kubenswrapper[4826]: I0131 07:37:08.784623 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 00:13:49.804579447 +0000 UTC Jan 31 07:37:08 crc kubenswrapper[4826]: I0131 07:37:08.809094 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:37:08 crc kubenswrapper[4826]: I0131 07:37:08.809322 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:37:08 crc kubenswrapper[4826]: E0131 07:37:08.809612 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:37:08 crc kubenswrapper[4826]: E0131 07:37:08.809817 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:37:08 crc kubenswrapper[4826]: I0131 07:37:08.828258 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37289ae623d633b8d9c41697e0022bac0ef40df6da836d43df3bb36e83a7a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:08Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:08 crc kubenswrapper[4826]: I0131 07:37:08.855406 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d1814f450832909bd5bf1c4749feef46b0919e997cb1d241c988f4ec81051d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e440dc1f7846af93bbe78f17e92d44d2f72ee0fc28c4c3b746ee3f67f0cd899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:08Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:08 crc kubenswrapper[4826]: I0131 07:37:08.873372 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed10f53b-565a-4d14-a1d8-feabc15f08ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d04378db699fb51e197d67da8fbc728904d522724c2a02158c546497905d106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03515f366b860c28601b042a9a45c022c9a25aa15fab97b3e3698d3d7c1eb37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8v6ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:08Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:08 crc kubenswrapper[4826]: I0131 07:37:08.885539 4826 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:08 crc kubenswrapper[4826]: I0131 07:37:08.885855 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:08 crc kubenswrapper[4826]: I0131 07:37:08.885935 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:08 crc kubenswrapper[4826]: I0131 07:37:08.886051 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:08 crc kubenswrapper[4826]: I0131 07:37:08.886134 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:08Z","lastTransitionTime":"2026-01-31T07:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:08 crc kubenswrapper[4826]: I0131 07:37:08.890516 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wtbb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b672fd90-a70c-4f27-b711-e58f269efccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dc6a9e1063a189fc3331ef1bc5bd05ac611295089d1da241ca07c529e63ab7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wtbb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:08Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:08 crc kubenswrapper[4826]: I0131 07:37:08.919564 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0580356-3fa0-4446-b20b-afc4164435c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e817cf54f4d9940c02491ebbe29ded4c0e8b3832c315b04eba99a9d36d267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://0b99d4cba5d569a977b7d05a7190498a4e34fefbf7dc44db8148ba398de0f5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://adf31702fe035db3a963262acd0c20802f8fd7230cf5c01cb578fa88179bd129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://161a31c4d5a9b58bef37d7177019e6eec0b53c956877c212337834672f4ac5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1796ef962408ccd32578b856d0339f68286f421ffbb915e0ae46c7a5989d1958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810
742fc2b0155e5cb65bf6543388b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:08Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:08 crc kubenswrapper[4826]: I0131 07:37:08.940669 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"661bc240-bc34-44a0-bd2f-72708c4a5dc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc26e2f01400aeeb0956ae2aef52df0251350f193805892e534081b4cac248e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa95fe2bb0adc0053648f26282e011bad0ea1327773688c954b8ffbf222d6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4124eac7bdc757ef7c1f68b28fb1f54a56cda06e9b781a026277efe04172f8e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e986869735ddfd85e88828d0123e35bbc507c1d31efba8c420632539ff4a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:08Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:08 crc kubenswrapper[4826]: I0131 07:37:08.956381 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d2b49940601c069c38a05027bd5e801dd377d4aed46a502c192ca9941468a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-31T07:37:08Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:08 crc kubenswrapper[4826]: I0131 07:37:08.972169 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5fm7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baebdb6a236b3414ff5aac56d2f2c04503b7d59c18aa78ef295484087a866b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e9
dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf9c0eb924928bb2a7752b55e6ccb0fccc514359751bdf295f4f88820ac0bddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf9c0eb924928bb2a7752b55e6ccb0fccc514359751bdf295f4f88820ac0bddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5fm7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:08Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:08 crc kubenswrapper[4826]: I0131 07:37:08.985810 4826 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qrw7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"251ad51e-c383-4684-bfdb-2b9ce8098cc6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv8tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv8tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qrw7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:08Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:08 crc kubenswrapper[4826]: I0131 07:37:08.989500 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:08 crc kubenswrapper[4826]: I0131 07:37:08.989551 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 
07:37:08 crc kubenswrapper[4826]: I0131 07:37:08.989565 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:08 crc kubenswrapper[4826]: I0131 07:37:08.989611 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:08 crc kubenswrapper[4826]: I0131 07:37:08.989625 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:08Z","lastTransitionTime":"2026-01-31T07:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:08 crc kubenswrapper[4826]: I0131 07:37:08.999698 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:08Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:09 crc kubenswrapper[4826]: I0131 07:37:09.016301 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7xbj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32b69655-b895-4001-8f50-17a9d73056f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6e04fe556789c17beb1c110cda356f63730016f0d8f00d968ec9af1a10f658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6xcl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15eef89d03854f4fb45924748ccf1c12133be03efd45374daa2cceafd673136a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6xcl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h7xbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:09Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:09 crc kubenswrapper[4826]: I0131 07:37:09.040751 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5db04412-b62f-417c-91ac-776767d6102f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf6d3312839b88263ec4c59e54aa012ea0ae8e1
54421339e6afa05bbdfda85f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf6d3312839b88263ec4c59e54aa012ea0ae8e154421339e6afa05bbdfda85f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:36:59Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI0131 07:36:59.734419 6450 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 07:36:59.734455 6450 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 07:36:59.734461 6450 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 07:36:59.734474 6450 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 07:36:59.734513 6450 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 07:36:59.734518 6450 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 07:36:59.734548 6450 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 07:36:59.734563 6450 factory.go:656] Stopping watch factory\\\\nI0131 07:36:59.734576 6450 ovnkube.go:599] Stopped ovnkube\\\\nI0131 07:36:59.734618 6450 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 07:36:59.734629 6450 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 07:36:59.734637 6450 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 07:36:59.734644 6450 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 07:36:59.734652 6450 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 07:36:59.734659 6450 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qvwnb_openshift-ovn-kubernetes(5db04412-b62f-417c-91ac-776767d6102f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvwnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:09Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:09 crc kubenswrapper[4826]: I0131 07:37:09.056616 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6lwnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b53fa37-2f8b-49d4-bd96-2bfe008beba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4800822b9eb7af18af4eaf5699b582b98e900e6a645e17ec2a5a09757777ad75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm9x7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6lwnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:09Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:09 crc kubenswrapper[4826]: I0131 07:37:09.081675 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63a72bdc-ae2c-4ce3-bad4-877f01e2b370\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b561e0c482987fdc1c08682d00bf4c933f12313569408446d6c8a603c3556ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3caf57a9650c133c55f50d69726a83d40040bc7f173d536444d89f99854942ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30b86aff0d4af7948ead36a9ed2f4fa0fb5888de8ecdfa6eb6ffc1867030babe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-
apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf2c3303222b7cdaf3b5aa2c1847ff9e097579edeb4f7333079c340cc95ef46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975a5a5c09f940e9a57b6c6df02fdf725740fb21d70046ad5bbe3f24dec7a0bd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"iserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 07:36:22.700868 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:36:22.705249 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2348209780/tls.crt::/tmp/serving-cert-2348209780/tls.key\\\\\\\"\\\\nI0131 07:36:28.522043 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 07:36:28.528635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 07:36:28.528685 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 07:36:28.528725 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 07:36:28.528736 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 07:36:28.539744 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 07:36:28.539766 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539773 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 07:36:28.539783 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 07:36:28.539787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 07:36:28.539791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 07:36:28.539944 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 07:36:28.545260 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0131 07:36:28.545358 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd3634cc3552a315ee1ef8114db096d456b7a97a92e68ab406ceccdf1eb57f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:09Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:09 crc kubenswrapper[4826]: I0131 07:37:09.093096 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:09 crc kubenswrapper[4826]: I0131 07:37:09.093152 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:09 crc kubenswrapper[4826]: I0131 07:37:09.093173 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:09 crc kubenswrapper[4826]: I0131 07:37:09.093202 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:09 crc kubenswrapper[4826]: I0131 07:37:09.093225 4826 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:09Z","lastTransitionTime":"2026-01-31T07:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:09 crc kubenswrapper[4826]: I0131 07:37:09.101320 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0a4082a-cdda-4613-8e8d-bd97c3820759\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edb15aeef1103cdf26c3e72bac394815e4ea864cccc8732f40a601af2fcd34b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d04dbf9901e3bd43ac93338643a45135bb89f2d03a5a3d1236289aee90a6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a09b3371dd08b059fbfd5a105f314ae92bdcc4804cd36d0709b4afae48842b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b6360543a81dea5540a5f02edc908d3b76daf5c4df5f8220215fb75f9504c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b6360543a81dea5540a5f02edc908d3b76daf5c4df5f8220215fb75f9504c5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:09Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:09 crc kubenswrapper[4826]: I0131 07:37:09.123011 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:09Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:09 crc kubenswrapper[4826]: I0131 07:37:09.141143 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:09Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:09 crc kubenswrapper[4826]: I0131 07:37:09.158216 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tp2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d1fbcfd-bce9-4f1c-bc64-d48b979a95d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91dfef0e05500b5bc7d60a1905667d2ddf67a56af60dc72a55d1b29223d6586a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5x55d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tp2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T07:37:09Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:09 crc kubenswrapper[4826]: I0131 07:37:09.196072 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:09 crc kubenswrapper[4826]: I0131 07:37:09.196131 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:09 crc kubenswrapper[4826]: I0131 07:37:09.196151 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:09 crc kubenswrapper[4826]: I0131 07:37:09.196180 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:09 crc kubenswrapper[4826]: I0131 07:37:09.196200 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:09Z","lastTransitionTime":"2026-01-31T07:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:09 crc kubenswrapper[4826]: I0131 07:37:09.298978 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:09 crc kubenswrapper[4826]: I0131 07:37:09.299008 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:09 crc kubenswrapper[4826]: I0131 07:37:09.299016 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:09 crc kubenswrapper[4826]: I0131 07:37:09.299029 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:09 crc kubenswrapper[4826]: I0131 07:37:09.299038 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:09Z","lastTransitionTime":"2026-01-31T07:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:09 crc kubenswrapper[4826]: I0131 07:37:09.401285 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:09 crc kubenswrapper[4826]: I0131 07:37:09.401321 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:09 crc kubenswrapper[4826]: I0131 07:37:09.401332 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:09 crc kubenswrapper[4826]: I0131 07:37:09.401349 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:09 crc kubenswrapper[4826]: I0131 07:37:09.401359 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:09Z","lastTransitionTime":"2026-01-31T07:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:09 crc kubenswrapper[4826]: I0131 07:37:09.503563 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:09 crc kubenswrapper[4826]: I0131 07:37:09.503605 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:09 crc kubenswrapper[4826]: I0131 07:37:09.503617 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:09 crc kubenswrapper[4826]: I0131 07:37:09.503633 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:09 crc kubenswrapper[4826]: I0131 07:37:09.503645 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:09Z","lastTransitionTime":"2026-01-31T07:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:09 crc kubenswrapper[4826]: I0131 07:37:09.606793 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:09 crc kubenswrapper[4826]: I0131 07:37:09.607096 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:09 crc kubenswrapper[4826]: I0131 07:37:09.607226 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:09 crc kubenswrapper[4826]: I0131 07:37:09.607312 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:09 crc kubenswrapper[4826]: I0131 07:37:09.607414 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:09Z","lastTransitionTime":"2026-01-31T07:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:09 crc kubenswrapper[4826]: I0131 07:37:09.711039 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:09 crc kubenswrapper[4826]: I0131 07:37:09.711143 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:09 crc kubenswrapper[4826]: I0131 07:37:09.711168 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:09 crc kubenswrapper[4826]: I0131 07:37:09.711214 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:09 crc kubenswrapper[4826]: I0131 07:37:09.711237 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:09Z","lastTransitionTime":"2026-01-31T07:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:09 crc kubenswrapper[4826]: I0131 07:37:09.785201 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 19:08:20.190540305 +0000 UTC Jan 31 07:37:09 crc kubenswrapper[4826]: I0131 07:37:09.808137 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:37:09 crc kubenswrapper[4826]: I0131 07:37:09.808169 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:37:09 crc kubenswrapper[4826]: E0131 07:37:09.808385 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:37:09 crc kubenswrapper[4826]: E0131 07:37:09.808593 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qrw7j" podUID="251ad51e-c383-4684-bfdb-2b9ce8098cc6" Jan 31 07:37:09 crc kubenswrapper[4826]: I0131 07:37:09.815246 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:09 crc kubenswrapper[4826]: I0131 07:37:09.815300 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:09 crc kubenswrapper[4826]: I0131 07:37:09.815321 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:09 crc kubenswrapper[4826]: I0131 07:37:09.815349 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:09 crc kubenswrapper[4826]: I0131 07:37:09.815370 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:09Z","lastTransitionTime":"2026-01-31T07:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:09 crc kubenswrapper[4826]: I0131 07:37:09.918736 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:09 crc kubenswrapper[4826]: I0131 07:37:09.919208 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:09 crc kubenswrapper[4826]: I0131 07:37:09.919301 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:09 crc kubenswrapper[4826]: I0131 07:37:09.919410 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:09 crc kubenswrapper[4826]: I0131 07:37:09.919520 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:09Z","lastTransitionTime":"2026-01-31T07:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:10 crc kubenswrapper[4826]: I0131 07:37:10.023587 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:10 crc kubenswrapper[4826]: I0131 07:37:10.023658 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:10 crc kubenswrapper[4826]: I0131 07:37:10.023679 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:10 crc kubenswrapper[4826]: I0131 07:37:10.023706 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:10 crc kubenswrapper[4826]: I0131 07:37:10.023725 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:10Z","lastTransitionTime":"2026-01-31T07:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:10 crc kubenswrapper[4826]: I0131 07:37:10.127414 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:10 crc kubenswrapper[4826]: I0131 07:37:10.127492 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:10 crc kubenswrapper[4826]: I0131 07:37:10.127511 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:10 crc kubenswrapper[4826]: I0131 07:37:10.127543 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:10 crc kubenswrapper[4826]: I0131 07:37:10.127562 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:10Z","lastTransitionTime":"2026-01-31T07:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:10 crc kubenswrapper[4826]: I0131 07:37:10.230573 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:10 crc kubenswrapper[4826]: I0131 07:37:10.230639 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:10 crc kubenswrapper[4826]: I0131 07:37:10.230657 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:10 crc kubenswrapper[4826]: I0131 07:37:10.230695 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:10 crc kubenswrapper[4826]: I0131 07:37:10.230718 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:10Z","lastTransitionTime":"2026-01-31T07:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:10 crc kubenswrapper[4826]: I0131 07:37:10.332944 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:10 crc kubenswrapper[4826]: I0131 07:37:10.333051 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:10 crc kubenswrapper[4826]: I0131 07:37:10.333076 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:10 crc kubenswrapper[4826]: I0131 07:37:10.333104 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:10 crc kubenswrapper[4826]: I0131 07:37:10.333128 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:10Z","lastTransitionTime":"2026-01-31T07:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:10 crc kubenswrapper[4826]: I0131 07:37:10.437692 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:10 crc kubenswrapper[4826]: I0131 07:37:10.437771 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:10 crc kubenswrapper[4826]: I0131 07:37:10.437792 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:10 crc kubenswrapper[4826]: I0131 07:37:10.437822 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:10 crc kubenswrapper[4826]: I0131 07:37:10.437843 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:10Z","lastTransitionTime":"2026-01-31T07:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:10 crc kubenswrapper[4826]: I0131 07:37:10.540576 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:10 crc kubenswrapper[4826]: I0131 07:37:10.540684 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:10 crc kubenswrapper[4826]: I0131 07:37:10.540701 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:10 crc kubenswrapper[4826]: I0131 07:37:10.540734 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:10 crc kubenswrapper[4826]: I0131 07:37:10.540756 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:10Z","lastTransitionTime":"2026-01-31T07:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:10 crc kubenswrapper[4826]: I0131 07:37:10.647000 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:10 crc kubenswrapper[4826]: I0131 07:37:10.647116 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:10 crc kubenswrapper[4826]: I0131 07:37:10.647133 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:10 crc kubenswrapper[4826]: I0131 07:37:10.647158 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:10 crc kubenswrapper[4826]: I0131 07:37:10.647176 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:10Z","lastTransitionTime":"2026-01-31T07:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:10 crc kubenswrapper[4826]: I0131 07:37:10.750060 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:10 crc kubenswrapper[4826]: I0131 07:37:10.750128 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:10 crc kubenswrapper[4826]: I0131 07:37:10.750149 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:10 crc kubenswrapper[4826]: I0131 07:37:10.750173 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:10 crc kubenswrapper[4826]: I0131 07:37:10.750189 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:10Z","lastTransitionTime":"2026-01-31T07:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:10 crc kubenswrapper[4826]: I0131 07:37:10.785540 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 21:35:34.836902193 +0000 UTC Jan 31 07:37:10 crc kubenswrapper[4826]: I0131 07:37:10.808281 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:37:10 crc kubenswrapper[4826]: I0131 07:37:10.808346 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:37:10 crc kubenswrapper[4826]: E0131 07:37:10.808431 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:37:10 crc kubenswrapper[4826]: E0131 07:37:10.808570 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:37:10 crc kubenswrapper[4826]: I0131 07:37:10.853991 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:10 crc kubenswrapper[4826]: I0131 07:37:10.854032 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:10 crc kubenswrapper[4826]: I0131 07:37:10.854041 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:10 crc kubenswrapper[4826]: I0131 07:37:10.854058 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:10 crc kubenswrapper[4826]: I0131 07:37:10.854069 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:10Z","lastTransitionTime":"2026-01-31T07:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:10 crc kubenswrapper[4826]: I0131 07:37:10.957862 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:10 crc kubenswrapper[4826]: I0131 07:37:10.958009 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:10 crc kubenswrapper[4826]: I0131 07:37:10.958038 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:10 crc kubenswrapper[4826]: I0131 07:37:10.958068 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:10 crc kubenswrapper[4826]: I0131 07:37:10.958090 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:10Z","lastTransitionTime":"2026-01-31T07:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:11 crc kubenswrapper[4826]: I0131 07:37:11.060205 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:11 crc kubenswrapper[4826]: I0131 07:37:11.060242 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:11 crc kubenswrapper[4826]: I0131 07:37:11.060253 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:11 crc kubenswrapper[4826]: I0131 07:37:11.060269 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:11 crc kubenswrapper[4826]: I0131 07:37:11.060280 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:11Z","lastTransitionTime":"2026-01-31T07:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:11 crc kubenswrapper[4826]: I0131 07:37:11.162665 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:11 crc kubenswrapper[4826]: I0131 07:37:11.162692 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:11 crc kubenswrapper[4826]: I0131 07:37:11.162699 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:11 crc kubenswrapper[4826]: I0131 07:37:11.162711 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:11 crc kubenswrapper[4826]: I0131 07:37:11.162719 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:11Z","lastTransitionTime":"2026-01-31T07:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:11 crc kubenswrapper[4826]: I0131 07:37:11.264734 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:11 crc kubenswrapper[4826]: I0131 07:37:11.264767 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:11 crc kubenswrapper[4826]: I0131 07:37:11.264780 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:11 crc kubenswrapper[4826]: I0131 07:37:11.264797 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:11 crc kubenswrapper[4826]: I0131 07:37:11.264810 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:11Z","lastTransitionTime":"2026-01-31T07:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:11 crc kubenswrapper[4826]: I0131 07:37:11.368074 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:11 crc kubenswrapper[4826]: I0131 07:37:11.368195 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:11 crc kubenswrapper[4826]: I0131 07:37:11.368219 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:11 crc kubenswrapper[4826]: I0131 07:37:11.368248 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:11 crc kubenswrapper[4826]: I0131 07:37:11.368269 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:11Z","lastTransitionTime":"2026-01-31T07:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:11 crc kubenswrapper[4826]: I0131 07:37:11.471188 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:11 crc kubenswrapper[4826]: I0131 07:37:11.471237 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:11 crc kubenswrapper[4826]: I0131 07:37:11.471253 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:11 crc kubenswrapper[4826]: I0131 07:37:11.471275 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:11 crc kubenswrapper[4826]: I0131 07:37:11.471293 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:11Z","lastTransitionTime":"2026-01-31T07:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:11 crc kubenswrapper[4826]: I0131 07:37:11.574323 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:11 crc kubenswrapper[4826]: I0131 07:37:11.574381 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:11 crc kubenswrapper[4826]: I0131 07:37:11.574396 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:11 crc kubenswrapper[4826]: I0131 07:37:11.574411 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:11 crc kubenswrapper[4826]: I0131 07:37:11.574423 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:11Z","lastTransitionTime":"2026-01-31T07:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:11 crc kubenswrapper[4826]: I0131 07:37:11.677094 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:11 crc kubenswrapper[4826]: I0131 07:37:11.677184 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:11 crc kubenswrapper[4826]: I0131 07:37:11.677203 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:11 crc kubenswrapper[4826]: I0131 07:37:11.677226 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:11 crc kubenswrapper[4826]: I0131 07:37:11.677243 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:11Z","lastTransitionTime":"2026-01-31T07:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:11 crc kubenswrapper[4826]: I0131 07:37:11.784351 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:11 crc kubenswrapper[4826]: I0131 07:37:11.784397 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:11 crc kubenswrapper[4826]: I0131 07:37:11.784408 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:11 crc kubenswrapper[4826]: I0131 07:37:11.784425 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:11 crc kubenswrapper[4826]: I0131 07:37:11.784458 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:11Z","lastTransitionTime":"2026-01-31T07:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:11 crc kubenswrapper[4826]: I0131 07:37:11.786590 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 22:01:08.208612377 +0000 UTC Jan 31 07:37:11 crc kubenswrapper[4826]: I0131 07:37:11.808370 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:37:11 crc kubenswrapper[4826]: E0131 07:37:11.808528 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:37:11 crc kubenswrapper[4826]: I0131 07:37:11.808836 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:37:11 crc kubenswrapper[4826]: E0131 07:37:11.808949 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrw7j" podUID="251ad51e-c383-4684-bfdb-2b9ce8098cc6" Jan 31 07:37:11 crc kubenswrapper[4826]: I0131 07:37:11.887724 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:11 crc kubenswrapper[4826]: I0131 07:37:11.887805 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:11 crc kubenswrapper[4826]: I0131 07:37:11.887827 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:11 crc kubenswrapper[4826]: I0131 07:37:11.887858 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:11 crc kubenswrapper[4826]: I0131 07:37:11.887888 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:11Z","lastTransitionTime":"2026-01-31T07:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:11 crc kubenswrapper[4826]: I0131 07:37:11.991083 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:11 crc kubenswrapper[4826]: I0131 07:37:11.991151 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:11 crc kubenswrapper[4826]: I0131 07:37:11.991174 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:11 crc kubenswrapper[4826]: I0131 07:37:11.991201 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:11 crc kubenswrapper[4826]: I0131 07:37:11.991224 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:11Z","lastTransitionTime":"2026-01-31T07:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:12 crc kubenswrapper[4826]: I0131 07:37:12.093480 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:12 crc kubenswrapper[4826]: I0131 07:37:12.093548 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:12 crc kubenswrapper[4826]: I0131 07:37:12.093566 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:12 crc kubenswrapper[4826]: I0131 07:37:12.093590 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:12 crc kubenswrapper[4826]: I0131 07:37:12.093609 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:12Z","lastTransitionTime":"2026-01-31T07:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:12 crc kubenswrapper[4826]: I0131 07:37:12.196368 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:12 crc kubenswrapper[4826]: I0131 07:37:12.196442 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:12 crc kubenswrapper[4826]: I0131 07:37:12.196482 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:12 crc kubenswrapper[4826]: I0131 07:37:12.196511 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:12 crc kubenswrapper[4826]: I0131 07:37:12.196532 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:12Z","lastTransitionTime":"2026-01-31T07:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:12 crc kubenswrapper[4826]: I0131 07:37:12.298554 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:12 crc kubenswrapper[4826]: I0131 07:37:12.298599 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:12 crc kubenswrapper[4826]: I0131 07:37:12.298609 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:12 crc kubenswrapper[4826]: I0131 07:37:12.298634 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:12 crc kubenswrapper[4826]: I0131 07:37:12.298645 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:12Z","lastTransitionTime":"2026-01-31T07:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:12 crc kubenswrapper[4826]: I0131 07:37:12.401310 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:12 crc kubenswrapper[4826]: I0131 07:37:12.401387 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:12 crc kubenswrapper[4826]: I0131 07:37:12.401405 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:12 crc kubenswrapper[4826]: I0131 07:37:12.401431 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:12 crc kubenswrapper[4826]: I0131 07:37:12.401451 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:12Z","lastTransitionTime":"2026-01-31T07:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:12 crc kubenswrapper[4826]: I0131 07:37:12.504790 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:12 crc kubenswrapper[4826]: I0131 07:37:12.504840 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:12 crc kubenswrapper[4826]: I0131 07:37:12.504857 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:12 crc kubenswrapper[4826]: I0131 07:37:12.504878 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:12 crc kubenswrapper[4826]: I0131 07:37:12.504894 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:12Z","lastTransitionTime":"2026-01-31T07:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:12 crc kubenswrapper[4826]: I0131 07:37:12.608106 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:12 crc kubenswrapper[4826]: I0131 07:37:12.608149 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:12 crc kubenswrapper[4826]: I0131 07:37:12.608160 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:12 crc kubenswrapper[4826]: I0131 07:37:12.608177 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:12 crc kubenswrapper[4826]: I0131 07:37:12.608189 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:12Z","lastTransitionTime":"2026-01-31T07:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:12 crc kubenswrapper[4826]: I0131 07:37:12.711193 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:12 crc kubenswrapper[4826]: I0131 07:37:12.711366 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:12 crc kubenswrapper[4826]: I0131 07:37:12.711386 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:12 crc kubenswrapper[4826]: I0131 07:37:12.711464 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:12 crc kubenswrapper[4826]: I0131 07:37:12.711486 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:12Z","lastTransitionTime":"2026-01-31T07:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:12 crc kubenswrapper[4826]: I0131 07:37:12.787172 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 12:38:10.093874273 +0000 UTC Jan 31 07:37:12 crc kubenswrapper[4826]: I0131 07:37:12.808688 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:37:12 crc kubenswrapper[4826]: E0131 07:37:12.808846 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:37:12 crc kubenswrapper[4826]: I0131 07:37:12.809120 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:37:12 crc kubenswrapper[4826]: E0131 07:37:12.809183 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:37:12 crc kubenswrapper[4826]: I0131 07:37:12.813909 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:12 crc kubenswrapper[4826]: I0131 07:37:12.813946 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:12 crc kubenswrapper[4826]: I0131 07:37:12.813954 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:12 crc kubenswrapper[4826]: I0131 07:37:12.813983 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:12 crc kubenswrapper[4826]: I0131 07:37:12.813994 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:12Z","lastTransitionTime":"2026-01-31T07:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:12 crc kubenswrapper[4826]: I0131 07:37:12.917565 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:12 crc kubenswrapper[4826]: I0131 07:37:12.917651 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:12 crc kubenswrapper[4826]: I0131 07:37:12.917673 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:12 crc kubenswrapper[4826]: I0131 07:37:12.917701 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:12 crc kubenswrapper[4826]: I0131 07:37:12.917723 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:12Z","lastTransitionTime":"2026-01-31T07:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:13 crc kubenswrapper[4826]: I0131 07:37:13.021498 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:13 crc kubenswrapper[4826]: I0131 07:37:13.021560 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:13 crc kubenswrapper[4826]: I0131 07:37:13.021617 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:13 crc kubenswrapper[4826]: I0131 07:37:13.021645 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:13 crc kubenswrapper[4826]: I0131 07:37:13.021667 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:13Z","lastTransitionTime":"2026-01-31T07:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:13 crc kubenswrapper[4826]: I0131 07:37:13.123637 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:13 crc kubenswrapper[4826]: I0131 07:37:13.123678 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:13 crc kubenswrapper[4826]: I0131 07:37:13.123687 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:13 crc kubenswrapper[4826]: I0131 07:37:13.123701 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:13 crc kubenswrapper[4826]: I0131 07:37:13.123710 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:13Z","lastTransitionTime":"2026-01-31T07:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:13 crc kubenswrapper[4826]: I0131 07:37:13.226030 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:13 crc kubenswrapper[4826]: I0131 07:37:13.226087 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:13 crc kubenswrapper[4826]: I0131 07:37:13.226107 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:13 crc kubenswrapper[4826]: I0131 07:37:13.226132 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:13 crc kubenswrapper[4826]: I0131 07:37:13.226149 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:13Z","lastTransitionTime":"2026-01-31T07:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:13 crc kubenswrapper[4826]: I0131 07:37:13.328376 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:13 crc kubenswrapper[4826]: I0131 07:37:13.328422 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:13 crc kubenswrapper[4826]: I0131 07:37:13.328433 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:13 crc kubenswrapper[4826]: I0131 07:37:13.328450 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:13 crc kubenswrapper[4826]: I0131 07:37:13.328462 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:13Z","lastTransitionTime":"2026-01-31T07:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:13 crc kubenswrapper[4826]: I0131 07:37:13.431101 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:13 crc kubenswrapper[4826]: I0131 07:37:13.431149 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:13 crc kubenswrapper[4826]: I0131 07:37:13.431163 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:13 crc kubenswrapper[4826]: I0131 07:37:13.431182 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:13 crc kubenswrapper[4826]: I0131 07:37:13.431194 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:13Z","lastTransitionTime":"2026-01-31T07:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:13 crc kubenswrapper[4826]: I0131 07:37:13.533882 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:13 crc kubenswrapper[4826]: I0131 07:37:13.533945 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:13 crc kubenswrapper[4826]: I0131 07:37:13.533961 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:13 crc kubenswrapper[4826]: I0131 07:37:13.534003 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:13 crc kubenswrapper[4826]: I0131 07:37:13.534021 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:13Z","lastTransitionTime":"2026-01-31T07:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:13 crc kubenswrapper[4826]: I0131 07:37:13.636618 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:13 crc kubenswrapper[4826]: I0131 07:37:13.636679 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:13 crc kubenswrapper[4826]: I0131 07:37:13.636703 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:13 crc kubenswrapper[4826]: I0131 07:37:13.636732 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:13 crc kubenswrapper[4826]: I0131 07:37:13.636755 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:13Z","lastTransitionTime":"2026-01-31T07:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:13 crc kubenswrapper[4826]: I0131 07:37:13.739736 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:13 crc kubenswrapper[4826]: I0131 07:37:13.739784 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:13 crc kubenswrapper[4826]: I0131 07:37:13.739794 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:13 crc kubenswrapper[4826]: I0131 07:37:13.739812 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:13 crc kubenswrapper[4826]: I0131 07:37:13.739823 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:13Z","lastTransitionTime":"2026-01-31T07:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:13 crc kubenswrapper[4826]: I0131 07:37:13.787477 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 05:39:47.36921361 +0000 UTC Jan 31 07:37:13 crc kubenswrapper[4826]: I0131 07:37:13.807921 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:37:13 crc kubenswrapper[4826]: I0131 07:37:13.808092 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:37:13 crc kubenswrapper[4826]: E0131 07:37:13.808103 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:37:13 crc kubenswrapper[4826]: E0131 07:37:13.808324 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrw7j" podUID="251ad51e-c383-4684-bfdb-2b9ce8098cc6" Jan 31 07:37:13 crc kubenswrapper[4826]: I0131 07:37:13.843084 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:13 crc kubenswrapper[4826]: I0131 07:37:13.843130 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:13 crc kubenswrapper[4826]: I0131 07:37:13.843145 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:13 crc kubenswrapper[4826]: I0131 07:37:13.843166 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:13 crc kubenswrapper[4826]: I0131 07:37:13.843180 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:13Z","lastTransitionTime":"2026-01-31T07:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:13 crc kubenswrapper[4826]: I0131 07:37:13.945881 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:13 crc kubenswrapper[4826]: I0131 07:37:13.945946 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:13 crc kubenswrapper[4826]: I0131 07:37:13.945957 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:13 crc kubenswrapper[4826]: I0131 07:37:13.945991 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:13 crc kubenswrapper[4826]: I0131 07:37:13.946006 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:13Z","lastTransitionTime":"2026-01-31T07:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:14 crc kubenswrapper[4826]: I0131 07:37:14.047673 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:14 crc kubenswrapper[4826]: I0131 07:37:14.047712 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:14 crc kubenswrapper[4826]: I0131 07:37:14.047723 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:14 crc kubenswrapper[4826]: I0131 07:37:14.047739 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:14 crc kubenswrapper[4826]: I0131 07:37:14.047749 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:14Z","lastTransitionTime":"2026-01-31T07:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:14 crc kubenswrapper[4826]: I0131 07:37:14.150143 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:14 crc kubenswrapper[4826]: I0131 07:37:14.150220 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:14 crc kubenswrapper[4826]: I0131 07:37:14.150241 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:14 crc kubenswrapper[4826]: I0131 07:37:14.150270 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:14 crc kubenswrapper[4826]: I0131 07:37:14.150290 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:14Z","lastTransitionTime":"2026-01-31T07:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:14 crc kubenswrapper[4826]: I0131 07:37:14.252198 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:14 crc kubenswrapper[4826]: I0131 07:37:14.252260 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:14 crc kubenswrapper[4826]: I0131 07:37:14.252277 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:14 crc kubenswrapper[4826]: I0131 07:37:14.252297 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:14 crc kubenswrapper[4826]: I0131 07:37:14.252313 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:14Z","lastTransitionTime":"2026-01-31T07:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:14 crc kubenswrapper[4826]: I0131 07:37:14.354920 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:14 crc kubenswrapper[4826]: I0131 07:37:14.354956 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:14 crc kubenswrapper[4826]: I0131 07:37:14.354978 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:14 crc kubenswrapper[4826]: I0131 07:37:14.354992 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:14 crc kubenswrapper[4826]: I0131 07:37:14.355001 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:14Z","lastTransitionTime":"2026-01-31T07:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:14 crc kubenswrapper[4826]: I0131 07:37:14.457242 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:14 crc kubenswrapper[4826]: I0131 07:37:14.457309 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:14 crc kubenswrapper[4826]: I0131 07:37:14.457321 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:14 crc kubenswrapper[4826]: I0131 07:37:14.457338 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:14 crc kubenswrapper[4826]: I0131 07:37:14.457349 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:14Z","lastTransitionTime":"2026-01-31T07:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:14 crc kubenswrapper[4826]: I0131 07:37:14.559673 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:14 crc kubenswrapper[4826]: I0131 07:37:14.559725 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:14 crc kubenswrapper[4826]: I0131 07:37:14.559741 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:14 crc kubenswrapper[4826]: I0131 07:37:14.559762 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:14 crc kubenswrapper[4826]: I0131 07:37:14.559778 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:14Z","lastTransitionTime":"2026-01-31T07:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:14 crc kubenswrapper[4826]: I0131 07:37:14.663441 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:14 crc kubenswrapper[4826]: I0131 07:37:14.663506 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:14 crc kubenswrapper[4826]: I0131 07:37:14.663528 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:14 crc kubenswrapper[4826]: I0131 07:37:14.663560 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:14 crc kubenswrapper[4826]: I0131 07:37:14.663582 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:14Z","lastTransitionTime":"2026-01-31T07:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:14 crc kubenswrapper[4826]: I0131 07:37:14.765958 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:14 crc kubenswrapper[4826]: I0131 07:37:14.766023 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:14 crc kubenswrapper[4826]: I0131 07:37:14.766033 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:14 crc kubenswrapper[4826]: I0131 07:37:14.766046 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:14 crc kubenswrapper[4826]: I0131 07:37:14.766055 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:14Z","lastTransitionTime":"2026-01-31T07:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:14 crc kubenswrapper[4826]: I0131 07:37:14.788493 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 17:38:42.080917206 +0000 UTC Jan 31 07:37:14 crc kubenswrapper[4826]: I0131 07:37:14.808001 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:37:14 crc kubenswrapper[4826]: I0131 07:37:14.808015 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:37:14 crc kubenswrapper[4826]: E0131 07:37:14.808155 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:37:14 crc kubenswrapper[4826]: E0131 07:37:14.808233 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:37:14 crc kubenswrapper[4826]: I0131 07:37:14.868495 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:14 crc kubenswrapper[4826]: I0131 07:37:14.868540 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:14 crc kubenswrapper[4826]: I0131 07:37:14.868548 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:14 crc kubenswrapper[4826]: I0131 07:37:14.868561 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:14 crc kubenswrapper[4826]: I0131 07:37:14.868571 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:14Z","lastTransitionTime":"2026-01-31T07:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:14 crc kubenswrapper[4826]: I0131 07:37:14.971121 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:14 crc kubenswrapper[4826]: I0131 07:37:14.971189 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:14 crc kubenswrapper[4826]: I0131 07:37:14.971200 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:14 crc kubenswrapper[4826]: I0131 07:37:14.971216 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:14 crc kubenswrapper[4826]: I0131 07:37:14.971225 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:14Z","lastTransitionTime":"2026-01-31T07:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:15 crc kubenswrapper[4826]: I0131 07:37:15.073696 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:15 crc kubenswrapper[4826]: I0131 07:37:15.073754 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:15 crc kubenswrapper[4826]: I0131 07:37:15.073764 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:15 crc kubenswrapper[4826]: I0131 07:37:15.073782 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:15 crc kubenswrapper[4826]: I0131 07:37:15.073793 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:15Z","lastTransitionTime":"2026-01-31T07:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:15 crc kubenswrapper[4826]: I0131 07:37:15.176156 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:15 crc kubenswrapper[4826]: I0131 07:37:15.176187 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:15 crc kubenswrapper[4826]: I0131 07:37:15.176197 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:15 crc kubenswrapper[4826]: I0131 07:37:15.176216 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:15 crc kubenswrapper[4826]: I0131 07:37:15.176233 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:15Z","lastTransitionTime":"2026-01-31T07:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:15 crc kubenswrapper[4826]: I0131 07:37:15.278047 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:15 crc kubenswrapper[4826]: I0131 07:37:15.278099 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:15 crc kubenswrapper[4826]: I0131 07:37:15.278111 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:15 crc kubenswrapper[4826]: I0131 07:37:15.278129 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:15 crc kubenswrapper[4826]: I0131 07:37:15.278143 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:15Z","lastTransitionTime":"2026-01-31T07:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:15 crc kubenswrapper[4826]: I0131 07:37:15.380674 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:15 crc kubenswrapper[4826]: I0131 07:37:15.380741 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:15 crc kubenswrapper[4826]: I0131 07:37:15.380758 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:15 crc kubenswrapper[4826]: I0131 07:37:15.380781 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:15 crc kubenswrapper[4826]: I0131 07:37:15.380798 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:15Z","lastTransitionTime":"2026-01-31T07:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:15 crc kubenswrapper[4826]: I0131 07:37:15.483480 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:15 crc kubenswrapper[4826]: I0131 07:37:15.483566 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:15 crc kubenswrapper[4826]: I0131 07:37:15.483592 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:15 crc kubenswrapper[4826]: I0131 07:37:15.483621 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:15 crc kubenswrapper[4826]: I0131 07:37:15.483642 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:15Z","lastTransitionTime":"2026-01-31T07:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:15 crc kubenswrapper[4826]: I0131 07:37:15.586939 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:15 crc kubenswrapper[4826]: I0131 07:37:15.587034 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:15 crc kubenswrapper[4826]: I0131 07:37:15.587057 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:15 crc kubenswrapper[4826]: I0131 07:37:15.587087 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:15 crc kubenswrapper[4826]: I0131 07:37:15.587111 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:15Z","lastTransitionTime":"2026-01-31T07:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:15 crc kubenswrapper[4826]: I0131 07:37:15.689995 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:15 crc kubenswrapper[4826]: I0131 07:37:15.690029 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:15 crc kubenswrapper[4826]: I0131 07:37:15.690039 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:15 crc kubenswrapper[4826]: I0131 07:37:15.690052 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:15 crc kubenswrapper[4826]: I0131 07:37:15.690062 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:15Z","lastTransitionTime":"2026-01-31T07:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:15 crc kubenswrapper[4826]: I0131 07:37:15.789606 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 04:25:23.298314221 +0000 UTC Jan 31 07:37:15 crc kubenswrapper[4826]: I0131 07:37:15.792570 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:15 crc kubenswrapper[4826]: I0131 07:37:15.792603 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:15 crc kubenswrapper[4826]: I0131 07:37:15.792614 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:15 crc kubenswrapper[4826]: I0131 07:37:15.792629 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:15 crc kubenswrapper[4826]: I0131 07:37:15.792640 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:15Z","lastTransitionTime":"2026-01-31T07:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:15 crc kubenswrapper[4826]: I0131 07:37:15.807931 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:37:15 crc kubenswrapper[4826]: I0131 07:37:15.808095 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:37:15 crc kubenswrapper[4826]: E0131 07:37:15.808169 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qrw7j" podUID="251ad51e-c383-4684-bfdb-2b9ce8098cc6" Jan 31 07:37:15 crc kubenswrapper[4826]: E0131 07:37:15.808307 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:37:15 crc kubenswrapper[4826]: I0131 07:37:15.894853 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:15 crc kubenswrapper[4826]: I0131 07:37:15.894896 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:15 crc kubenswrapper[4826]: I0131 07:37:15.894913 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:15 crc kubenswrapper[4826]: I0131 07:37:15.894934 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:15 crc kubenswrapper[4826]: I0131 07:37:15.894950 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:15Z","lastTransitionTime":"2026-01-31T07:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:15 crc kubenswrapper[4826]: I0131 07:37:15.966249 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/251ad51e-c383-4684-bfdb-2b9ce8098cc6-metrics-certs\") pod \"network-metrics-daemon-qrw7j\" (UID: \"251ad51e-c383-4684-bfdb-2b9ce8098cc6\") " pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:37:15 crc kubenswrapper[4826]: E0131 07:37:15.966374 4826 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 07:37:15 crc kubenswrapper[4826]: E0131 07:37:15.966421 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/251ad51e-c383-4684-bfdb-2b9ce8098cc6-metrics-certs podName:251ad51e-c383-4684-bfdb-2b9ce8098cc6 nodeName:}" failed. No retries permitted until 2026-01-31 07:37:47.966407711 +0000 UTC m=+99.820294070 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/251ad51e-c383-4684-bfdb-2b9ce8098cc6-metrics-certs") pod "network-metrics-daemon-qrw7j" (UID: "251ad51e-c383-4684-bfdb-2b9ce8098cc6") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 07:37:15 crc kubenswrapper[4826]: I0131 07:37:15.996900 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:15 crc kubenswrapper[4826]: I0131 07:37:15.996922 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:15 crc kubenswrapper[4826]: I0131 07:37:15.996931 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:15 crc kubenswrapper[4826]: I0131 07:37:15.996941 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:15 crc kubenswrapper[4826]: I0131 07:37:15.996949 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:15Z","lastTransitionTime":"2026-01-31T07:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.099230 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.099280 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.099292 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.099312 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.099326 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:16Z","lastTransitionTime":"2026-01-31T07:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.141291 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.141349 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.141371 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.141399 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.141422 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:16Z","lastTransitionTime":"2026-01-31T07:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:16 crc kubenswrapper[4826]: E0131 07:37:16.161017 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"410cfc83-ff74-4210-b833-727c4d6db644\\\",\\\"systemUUID\\\":\\\"5dc4a50b-5ade-4352-ba95-1ca9483f1f64\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:16Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.166037 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.166093 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.166111 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.166135 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.166154 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:16Z","lastTransitionTime":"2026-01-31T07:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:16 crc kubenswrapper[4826]: E0131 07:37:16.182058 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"410cfc83-ff74-4210-b833-727c4d6db644\\\",\\\"systemUUID\\\":\\\"5dc4a50b-5ade-4352-ba95-1ca9483f1f64\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:16Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.186718 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.186777 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.186786 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.186802 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.186813 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:16Z","lastTransitionTime":"2026-01-31T07:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:16 crc kubenswrapper[4826]: E0131 07:37:16.203713 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"410cfc83-ff74-4210-b833-727c4d6db644\\\",\\\"systemUUID\\\":\\\"5dc4a50b-5ade-4352-ba95-1ca9483f1f64\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:16Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.208798 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.208872 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.208883 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.208897 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.208908 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:16Z","lastTransitionTime":"2026-01-31T07:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:16 crc kubenswrapper[4826]: E0131 07:37:16.226776 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"410cfc83-ff74-4210-b833-727c4d6db644\\\",\\\"systemUUID\\\":\\\"5dc4a50b-5ade-4352-ba95-1ca9483f1f64\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:16Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.232393 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.232443 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.232460 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.232485 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.232574 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:16Z","lastTransitionTime":"2026-01-31T07:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:16 crc kubenswrapper[4826]: E0131 07:37:16.249814 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"410cfc83-ff74-4210-b833-727c4d6db644\\\",\\\"systemUUID\\\":\\\"5dc4a50b-5ade-4352-ba95-1ca9483f1f64\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:16Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:16 crc kubenswrapper[4826]: E0131 07:37:16.250076 4826 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.251909 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.252056 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.252127 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.252160 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.252239 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:16Z","lastTransitionTime":"2026-01-31T07:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.356644 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.356739 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.356761 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.356827 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.356848 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:16Z","lastTransitionTime":"2026-01-31T07:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.459352 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.459397 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.459409 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.459426 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.459437 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:16Z","lastTransitionTime":"2026-01-31T07:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.561911 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.561998 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.562010 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.562026 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.562036 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:16Z","lastTransitionTime":"2026-01-31T07:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.664120 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.664197 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.664220 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.664248 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.664267 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:16Z","lastTransitionTime":"2026-01-31T07:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.767517 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.767572 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.767762 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.767783 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.767800 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:16Z","lastTransitionTime":"2026-01-31T07:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.790104 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 21:20:22.342473271 +0000 UTC Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.808542 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:37:16 crc kubenswrapper[4826]: E0131 07:37:16.808767 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.808929 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:37:16 crc kubenswrapper[4826]: E0131 07:37:16.809637 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.810077 4826 scope.go:117] "RemoveContainer" containerID="ccf6d3312839b88263ec4c59e54aa012ea0ae8e154421339e6afa05bbdfda85f" Jan 31 07:37:16 crc kubenswrapper[4826]: E0131 07:37:16.810448 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qvwnb_openshift-ovn-kubernetes(5db04412-b62f-417c-91ac-776767d6102f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" podUID="5db04412-b62f-417c-91ac-776767d6102f" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.870487 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.870526 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.870537 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.870553 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.870564 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:16Z","lastTransitionTime":"2026-01-31T07:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.973034 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.973085 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.973102 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.973154 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:16 crc kubenswrapper[4826]: I0131 07:37:16.973173 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:16Z","lastTransitionTime":"2026-01-31T07:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:17 crc kubenswrapper[4826]: I0131 07:37:17.075568 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:17 crc kubenswrapper[4826]: I0131 07:37:17.075626 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:17 crc kubenswrapper[4826]: I0131 07:37:17.075646 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:17 crc kubenswrapper[4826]: I0131 07:37:17.075671 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:17 crc kubenswrapper[4826]: I0131 07:37:17.075688 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:17Z","lastTransitionTime":"2026-01-31T07:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:17 crc kubenswrapper[4826]: I0131 07:37:17.178130 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:17 crc kubenswrapper[4826]: I0131 07:37:17.178174 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:17 crc kubenswrapper[4826]: I0131 07:37:17.178183 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:17 crc kubenswrapper[4826]: I0131 07:37:17.178198 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:17 crc kubenswrapper[4826]: I0131 07:37:17.178207 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:17Z","lastTransitionTime":"2026-01-31T07:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:17 crc kubenswrapper[4826]: I0131 07:37:17.280513 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:17 crc kubenswrapper[4826]: I0131 07:37:17.280578 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:17 crc kubenswrapper[4826]: I0131 07:37:17.280587 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:17 crc kubenswrapper[4826]: I0131 07:37:17.280601 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:17 crc kubenswrapper[4826]: I0131 07:37:17.280611 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:17Z","lastTransitionTime":"2026-01-31T07:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:17 crc kubenswrapper[4826]: I0131 07:37:17.382867 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:17 crc kubenswrapper[4826]: I0131 07:37:17.382909 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:17 crc kubenswrapper[4826]: I0131 07:37:17.382918 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:17 crc kubenswrapper[4826]: I0131 07:37:17.382930 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:17 crc kubenswrapper[4826]: I0131 07:37:17.382940 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:17Z","lastTransitionTime":"2026-01-31T07:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:17 crc kubenswrapper[4826]: I0131 07:37:17.485193 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:17 crc kubenswrapper[4826]: I0131 07:37:17.485229 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:17 crc kubenswrapper[4826]: I0131 07:37:17.485255 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:17 crc kubenswrapper[4826]: I0131 07:37:17.485268 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:17 crc kubenswrapper[4826]: I0131 07:37:17.485279 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:17Z","lastTransitionTime":"2026-01-31T07:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:17 crc kubenswrapper[4826]: I0131 07:37:17.586751 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:17 crc kubenswrapper[4826]: I0131 07:37:17.586818 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:17 crc kubenswrapper[4826]: I0131 07:37:17.586844 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:17 crc kubenswrapper[4826]: I0131 07:37:17.586874 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:17 crc kubenswrapper[4826]: I0131 07:37:17.586898 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:17Z","lastTransitionTime":"2026-01-31T07:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:17 crc kubenswrapper[4826]: I0131 07:37:17.688954 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:17 crc kubenswrapper[4826]: I0131 07:37:17.689005 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:17 crc kubenswrapper[4826]: I0131 07:37:17.689013 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:17 crc kubenswrapper[4826]: I0131 07:37:17.689026 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:17 crc kubenswrapper[4826]: I0131 07:37:17.689041 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:17Z","lastTransitionTime":"2026-01-31T07:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:17 crc kubenswrapper[4826]: I0131 07:37:17.790242 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 02:52:16.545427919 +0000 UTC Jan 31 07:37:17 crc kubenswrapper[4826]: I0131 07:37:17.791411 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:17 crc kubenswrapper[4826]: I0131 07:37:17.791451 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:17 crc kubenswrapper[4826]: I0131 07:37:17.791462 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:17 crc kubenswrapper[4826]: I0131 07:37:17.791481 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:17 crc kubenswrapper[4826]: I0131 07:37:17.791495 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:17Z","lastTransitionTime":"2026-01-31T07:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:17 crc kubenswrapper[4826]: I0131 07:37:17.808123 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:37:17 crc kubenswrapper[4826]: I0131 07:37:17.808193 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:37:17 crc kubenswrapper[4826]: E0131 07:37:17.808276 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrw7j" podUID="251ad51e-c383-4684-bfdb-2b9ce8098cc6" Jan 31 07:37:17 crc kubenswrapper[4826]: E0131 07:37:17.808327 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:37:17 crc kubenswrapper[4826]: I0131 07:37:17.894049 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:17 crc kubenswrapper[4826]: I0131 07:37:17.894094 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:17 crc kubenswrapper[4826]: I0131 07:37:17.894105 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:17 crc kubenswrapper[4826]: I0131 07:37:17.894123 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:17 crc kubenswrapper[4826]: I0131 07:37:17.894134 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:17Z","lastTransitionTime":"2026-01-31T07:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:17 crc kubenswrapper[4826]: I0131 07:37:17.996528 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:17 crc kubenswrapper[4826]: I0131 07:37:17.996573 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:17 crc kubenswrapper[4826]: I0131 07:37:17.996585 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:17 crc kubenswrapper[4826]: I0131 07:37:17.996603 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:17 crc kubenswrapper[4826]: I0131 07:37:17.996615 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:17Z","lastTransitionTime":"2026-01-31T07:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.099272 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.099328 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.099340 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.099375 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.099388 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:18Z","lastTransitionTime":"2026-01-31T07:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.202222 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.202257 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.202265 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.202280 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.202288 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:18Z","lastTransitionTime":"2026-01-31T07:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.287585 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wtbb9_b672fd90-a70c-4f27-b711-e58f269efccd/kube-multus/0.log" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.287889 4826 generic.go:334] "Generic (PLEG): container finished" podID="b672fd90-a70c-4f27-b711-e58f269efccd" containerID="3dc6a9e1063a189fc3331ef1bc5bd05ac611295089d1da241ca07c529e63ab7e" exitCode=1 Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.287933 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wtbb9" event={"ID":"b672fd90-a70c-4f27-b711-e58f269efccd","Type":"ContainerDied","Data":"3dc6a9e1063a189fc3331ef1bc5bd05ac611295089d1da241ca07c529e63ab7e"} Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.288554 4826 scope.go:117] "RemoveContainer" containerID="3dc6a9e1063a189fc3331ef1bc5bd05ac611295089d1da241ca07c529e63ab7e" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.304220 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.304248 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.304257 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.304271 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.304282 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:18Z","lastTransitionTime":"2026-01-31T07:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.310390 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0580356-3fa0-4446-b20b-afc4164435c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e817cf54f4d9940c02491ebbe29ded4c0e8b3832c315b04eba99a9d36d267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b99d4cba5d569a977b7d05a7190498a4e34fefbf7dc44db8148ba398de0f5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://adf31702fe035db3a963262acd0c20802f8fd7230cf5c01cb578fa88179bd129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://161a31c4d5a9b58bef37d7177019e6eec0b53c956877c212337834672f4ac5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1796ef962408ccd32578b856d0339f68286f421ffbb915e0ae46c7a5989d1958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-31T07:36:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:18Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.326520 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"661bc240-bc34-44a0-bd2f-72708c4a5dc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc26e2f01400aeeb0956ae2aef52df0251350f193805892e534081b4cac248e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa95fe2bb0adc0053648f26282e011bad0ea1327773688c954b8ffbf222d6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4124eac7bdc757ef7c1f68b28fb1f54a56cda06e9b781a026277efe04172f8e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e986869735ddfd85e88828d0123e35bbc507c1d31efba8c420632539ff4a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:18Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.336533 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d2b49940601c069c38a05027bd5e801dd377d4aed46a502c192ca9941468a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-31T07:37:18Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.351658 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5fm7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baebdb6a236b3414ff5aac56d2f2c04503b7d59c18aa78ef295484087a866b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e9
dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf9c0eb924928bb2a7752b55e6ccb0fccc514359751bdf295f4f88820ac0bddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf9c0eb924928bb2a7752b55e6ccb0fccc514359751bdf295f4f88820ac0bddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5fm7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:18Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.362457 4826 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qrw7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"251ad51e-c383-4684-bfdb-2b9ce8098cc6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv8tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv8tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qrw7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:18Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.374403 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:18Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.385778 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7xbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32b69655-b895-4001-8f50-17a9d73056f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6e04fe556789c17beb1c110cda356f63730016f0d8f00d968ec9af1a10f658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6xcl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15eef89d03854f4fb45924748ccf1c12133be03efd45374daa2cceafd673136a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6xcl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h7xbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:18Z is after 2025-08-24T17:21:41Z" Jan 31 
07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.402781 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63a72bdc-ae2c-4ce3-bad4-877f01e2b370\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b561e0c482987fdc1c08682d00bf4c933f12313569408446d6c8a603c3556ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3caf57a9650c133c55f50d69726a83d40040bc7f173d536444d89f99854942ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30b86aff0d4af7948ead36a9ed2f4fa0fb5888de8ecdfa6eb6ffc1867030babe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf2c3303222b7cdaf3b5aa2c1847ff9e097579edeb4f7333079c340cc95ef46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975a5a5c09f940e9a57b6c6df02fdf725740fb21d70046ad5bbe3f24dec7a0bd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"iserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 07:36:22.700868 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:36:22.705249 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2348209780/tls.crt::/tmp/serving-cert-2348209780/tls.key\\\\\\\"\\\\nI0131 07:36:28.522043 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 07:36:28.528635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 07:36:28.528685 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 07:36:28.528725 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 07:36:28.528736 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 07:36:28.539744 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 07:36:28.539766 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539773 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 07:36:28.539783 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 07:36:28.539787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 07:36:28.539791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 07:36:28.539944 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 07:36:28.545260 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0131 07:36:28.545358 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd3634cc3552a315ee1ef8114db096d456b7a97a92e68ab406ceccdf1eb57f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:18Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.406503 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.406528 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.406537 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.406551 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.406559 4826 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:18Z","lastTransitionTime":"2026-01-31T07:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.414657 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0a4082a-cdda-4613-8e8d-bd97c3820759\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edb15aeef1103cdf26c3e72bac394815e4ea864cccc8732f40a601af2fcd34b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d04dbf9901e3bd43ac93338643a45135bb89f2d03a5a3d1236289aee90a6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a09b3371dd08b059fbfd5a105f314ae92bdcc4804cd36d0709b4afae48842b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b6360543a81dea5540a5f02edc908d3b76daf5c4df5f8220215fb75f9504c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b6360543a81dea5540a5f02edc908d3b76daf5c4df5f8220215fb75f9504c5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:18Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.426293 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:18Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.441192 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:18Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.454750 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tp2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d1fbcfd-bce9-4f1c-bc64-d48b979a95d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91dfef0e05500b5bc7d60a1905667d2ddf67a56af60dc72a55d1b29223d6586a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5x55d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tp2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T07:37:18Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.482884 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5db04412-b62f-417c-91ac-776767d6102f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf6d3312839b88263ec4c59e54aa012ea0ae8e154421339e6afa05bbdfda85f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf6d3312839b88263ec4c59e54aa012ea0ae8e154421339e6afa05bbdfda85f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:36:59Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI0131 07:36:59.734419 6450 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 07:36:59.734455 6450 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 07:36:59.734461 6450 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 07:36:59.734474 6450 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 07:36:59.734513 6450 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 07:36:59.734518 6450 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 07:36:59.734548 6450 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 07:36:59.734563 6450 factory.go:656] Stopping watch factory\\\\nI0131 07:36:59.734576 6450 ovnkube.go:599] Stopped ovnkube\\\\nI0131 07:36:59.734618 6450 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 07:36:59.734629 6450 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 07:36:59.734637 6450 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 07:36:59.734644 6450 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 07:36:59.734652 6450 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 07:36:59.734659 6450 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qvwnb_openshift-ovn-kubernetes(5db04412-b62f-417c-91ac-776767d6102f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvwnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:18Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.492536 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6lwnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b53fa37-2f8b-49d4-bd96-2bfe008beba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4800822b9eb7af18af4eaf5699b582b98e900e6a645e17ec2a5a09757777ad75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm9x7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6lwnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:18Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.508125 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.508235 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.508298 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.508366 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.508424 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:18Z","lastTransitionTime":"2026-01-31T07:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.511474 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37289ae623d633b8d9c41697e0022bac0ef40df6da836d43df3bb36e83a7a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:18Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.526535 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d1814f450832909bd5bf1c4749feef46b0919e997cb1d241c988f4ec81051d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e440dc1f7846af93bbe78f17e92d44d2f72ee0fc28c4c3b746ee3f67f0cd899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:18Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.541143 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed10f53b-565a-4d14-a1d8-feabc15f08ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d04378db699fb51e197d67da8fbc728904d522724c2a02158c546497905d106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03515f366b860c28601b042a9a45c022c9a25aa15fab97b3e3698d3d7c1eb37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8v6ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:18Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.556283 4826 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-wtbb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b672fd90-a70c-4f27-b711-e58f269efccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dc6a9e1063a189fc3331ef1bc5bd05ac611295089d1da241ca07c529e63ab7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dc6a9e1063a189fc3331ef1bc5bd05ac611295089d1da241ca07c529e63ab7e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:37:17Z\\\",\\\"message\\\":\\\"2026-01-31T07:36:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c53c4ff3-1503-4424-b047-c864bb103fcc\\\\n2026-01-31T07:36:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c53c4ff3-1503-4424-b047-c864bb103fcc to /host/opt/cni/bin/\\\\n2026-01-31T07:36:32Z [verbose] multus-daemon started\\\\n2026-01-31T07:36:32Z [verbose] Readiness Indicator file check\\\\n2026-01-31T07:37:17Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wtbb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:18Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.610443 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.610520 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.610538 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.610564 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.610581 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:18Z","lastTransitionTime":"2026-01-31T07:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.712660 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.712799 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.712889 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.713009 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.713097 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:18Z","lastTransitionTime":"2026-01-31T07:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.790640 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 14:12:47.232366592 +0000 UTC Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.808238 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.808262 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:37:18 crc kubenswrapper[4826]: E0131 07:37:18.808339 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:37:18 crc kubenswrapper[4826]: E0131 07:37:18.808521 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.815185 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.815350 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.815465 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.815567 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.815646 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:18Z","lastTransitionTime":"2026-01-31T07:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.821070 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:18Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.833051 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7xbj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32b69655-b895-4001-8f50-17a9d73056f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6e04fe556789c17beb1c110cda356f63730016f0d8f00d968ec9af1a10f658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6xcl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15eef89d03854f4fb45924748ccf1c12133be03efd45374daa2cceafd673136a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6xcl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h7xbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:18Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.842902 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6lwnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b53fa37-2f8b-49d4-bd96-2bfe008beba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4800822b9eb7af18af4eaf5699b582b98e900e6a645e17ec2a5a09757777ad75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm9x7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{
\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6lwnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:18Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.860945 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63a72bdc-ae2c-4ce3-bad4-877f01e2b370\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b561e0c482987fdc1c08682d00bf4c933f12313569408446d6c8a603c3556ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3caf57a9650c133c55f50d69726a83d40040bc7f173d536444d89f99854942ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30b86aff0d4af7948ead36a9ed2f4fa0fb5888de8ecdfa6eb6ffc1867030babe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2
7753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf2c3303222b7cdaf3b5aa2c1847ff9e097579edeb4f7333079c340cc95ef46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975a5a5c09f940e9a57b6c6df02fdf725740fb21d70046ad5bbe3f24dec7a0bd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"iserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 07:36:22.700868 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:36:22.705249 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2348209780/tls.crt::/tmp/serving-cert-2348209780/tls.key\\\\\\\"\\\\nI0131 07:36:28.522043 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 07:36:28.528635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 07:36:28.528685 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 07:36:28.528725 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 07:36:28.528736 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 07:36:28.539744 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 07:36:28.539766 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539773 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 07:36:28.539783 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 07:36:28.539787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 07:36:28.539791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 07:36:28.539944 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 07:36:28.545260 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0131 07:36:28.545358 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd3634cc3552a315ee1ef8114db096d456b7a97a92e68ab406ceccdf1eb57f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:18Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.879704 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0a4082a-cdda-4613-8e8d-bd97c3820759\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edb15aeef1103cdf26c3e72bac394815e4ea864cccc8732f40a601af2fcd34b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d04dbf9901e3bd43ac93338643a45135bb89f2d03a5a3d1236289aee90a6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a09b3371dd08b059fbfd5a105f314ae92bdcc4804cd36d0709b4afae48842b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b6360543a81dea5540a5f02edc908d3b76daf5c4df5f8220215fb75f9504c5b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b6360543a81dea5540a5f02edc908d3b76daf5c4df5f8220215fb75f9504c5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:18Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.912123 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:18Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.917227 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.917264 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.917274 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.917289 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.917300 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:18Z","lastTransitionTime":"2026-01-31T07:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.924645 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:18Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.933666 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tp2c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d1fbcfd-bce9-4f1c-bc64-d48b979a95d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91dfef0e05500b5bc7d60a1905667d2ddf67a56af60dc72a55d1b29223d6586a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5x55d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tp2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:18Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.950513 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5db04412-b62f-417c-91ac-776767d6102f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf6d3312839b88263ec4c59e54aa012ea0ae8e154421339e6afa05bbdfda85f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf6d3312839b88263ec4c59e54aa012ea0ae8e154421339e6afa05bbdfda85f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:36:59Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI0131 07:36:59.734419 6450 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 07:36:59.734455 6450 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 07:36:59.734461 6450 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 07:36:59.734474 6450 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 07:36:59.734513 6450 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 07:36:59.734518 6450 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 07:36:59.734548 6450 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 07:36:59.734563 6450 factory.go:656] Stopping watch factory\\\\nI0131 07:36:59.734576 6450 ovnkube.go:599] Stopped ovnkube\\\\nI0131 07:36:59.734618 6450 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 07:36:59.734629 6450 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 07:36:59.734637 6450 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 07:36:59.734644 6450 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 07:36:59.734652 6450 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 07:36:59.734659 6450 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qvwnb_openshift-ovn-kubernetes(5db04412-b62f-417c-91ac-776767d6102f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvwnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:18Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.964368 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37289ae623d633b8d9c41697e0022bac0ef40df6da836d43df3bb36e83a7a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:18Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.978862 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d1814f450832909bd5bf1c4749feef46b0919e997cb1d241c988f4ec81051d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e440dc1f7846af93bbe78f17e92d44d2f72ee0fc28c4c3b746ee3f67f0cd899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:18Z is after 
2025-08-24T17:21:41Z" Jan 31 07:37:18 crc kubenswrapper[4826]: I0131 07:37:18.991345 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed10f53b-565a-4d14-a1d8-feabc15f08ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d04378db699fb51e197d67da8fbc728904d522724c2a02158c546497905d106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03515f366b860c28601b042a9a45c022c9a25aa15fab97b3e3698d3d7c1eb37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8v6ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:18Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.004769 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wtbb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b672fd90-a70c-4f27-b711-e58f269efccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dc6a9e1063a189fc3331ef1bc5bd05ac611295089d1da241ca07c529e63ab7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dc6a9e1063a189fc3331ef1bc5bd05ac611295089d1da241ca07c529e63ab7e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:37:17Z\\\",\\\"message\\\":\\\"2026-01-31T07:36:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c53c4ff3-1503-4424-b047-c864bb103fcc\\\\n2026-01-31T07:36:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c53c4ff3-1503-4424-b047-c864bb103fcc to /host/opt/cni/bin/\\\\n2026-01-31T07:36:32Z [verbose] multus-daemon started\\\\n2026-01-31T07:36:32Z [verbose] Readiness Indicator file check\\\\n2026-01-31T07:37:17Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wtbb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:19Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.020105 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.020151 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.020161 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.020178 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.020189 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:19Z","lastTransitionTime":"2026-01-31T07:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.021849 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0580356-3fa0-4446-b20b-afc4164435c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e817cf54f4d9940c02491ebbe29ded4c0e8b3832c315b04eba99a9d36d267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b99d4cba5d569a977b7d05a7190498a4e34fefbf7dc44db8148ba398de0f5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://adf31702fe035db3a963262acd0c20802f8fd7230cf5c01cb578fa88179bd129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://161a31c4d5a9b58bef37d7177019e6eec0b53c956877c212337834672f4ac5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1796ef962408ccd32578b856d0339f68286f421ffbb915e0ae46c7a5989d1958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-31T07:36:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:19Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.032123 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"661bc240-bc34-44a0-bd2f-72708c4a5dc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc26e2f01400aeeb0956ae2aef52df0251350f193805892e534081b4cac248e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa95fe2bb0adc0053648f26282e011bad0ea1327773688c954b8ffbf222d6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4124eac7bdc757ef7c1f68b28fb1f54a56cda06e9b781a026277efe04172f8e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e986869735ddfd85e88828d0123e35bbc507c1d31efba8c420632539ff4a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:19Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.041854 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d2b49940601c069c38a05027bd5e801dd377d4aed46a502c192ca9941468a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-31T07:37:19Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.053740 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5fm7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baebdb6a236b3414ff5aac56d2f2c04503b7d59c18aa78ef295484087a866b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e9
dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf9c0eb924928bb2a7752b55e6ccb0fccc514359751bdf295f4f88820ac0bddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf9c0eb924928bb2a7752b55e6ccb0fccc514359751bdf295f4f88820ac0bddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5fm7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:19Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.063145 4826 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qrw7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"251ad51e-c383-4684-bfdb-2b9ce8098cc6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv8tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv8tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qrw7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:19Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.122430 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.122469 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 
07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.122488 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.122504 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.122514 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:19Z","lastTransitionTime":"2026-01-31T07:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.224914 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.224996 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.225012 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.225029 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.225041 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:19Z","lastTransitionTime":"2026-01-31T07:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.293374 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wtbb9_b672fd90-a70c-4f27-b711-e58f269efccd/kube-multus/0.log" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.293479 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wtbb9" event={"ID":"b672fd90-a70c-4f27-b711-e58f269efccd","Type":"ContainerStarted","Data":"0f290a2cfcec4d14d2b2c21f7c56558675b8b7171bf9d1895d89f632cb35b815"} Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.306364 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7xbj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32b69655-b895-4001-8f50-17a9d73056f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6e04fe556789c17beb1c110cda356f63730016f0d8f00d968ec9af1a10f658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6xcl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15eef89d03854f4fb45924748ccf1c12133be03efd45374daa2cceafd673136a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-l6xcl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h7xbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:19Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.316850 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:19Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.327550 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.327605 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.327617 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.327636 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.327647 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:19Z","lastTransitionTime":"2026-01-31T07:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.328018 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:19Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.337209 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:19Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.345836 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tp2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d1fbcfd-bce9-4f1c-bc64-d48b979a95d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91dfef0e05500b5bc7d60a1905667d2ddf67a56af60dc72a55d1b29223d6586a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5x55d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tp2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:19Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.366010 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5db04412-b62f-417c-91ac-776767d6102f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf6d3312839b88263ec4c59e54aa012ea0ae8e1
54421339e6afa05bbdfda85f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf6d3312839b88263ec4c59e54aa012ea0ae8e154421339e6afa05bbdfda85f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:36:59Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI0131 07:36:59.734419 6450 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 07:36:59.734455 6450 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 07:36:59.734461 6450 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 07:36:59.734474 6450 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 07:36:59.734513 6450 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 07:36:59.734518 6450 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 07:36:59.734548 6450 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 07:36:59.734563 6450 factory.go:656] Stopping watch factory\\\\nI0131 07:36:59.734576 6450 ovnkube.go:599] Stopped ovnkube\\\\nI0131 07:36:59.734618 6450 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 07:36:59.734629 6450 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 07:36:59.734637 6450 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 07:36:59.734644 6450 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 07:36:59.734652 6450 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 07:36:59.734659 6450 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qvwnb_openshift-ovn-kubernetes(5db04412-b62f-417c-91ac-776767d6102f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvwnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:19Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.375894 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6lwnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b53fa37-2f8b-49d4-bd96-2bfe008beba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4800822b9eb7af18af4eaf5699b582b98e900e6a645e17ec2a5a09757777ad75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm9x7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6lwnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:19Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.386711 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63a72bdc-ae2c-4ce3-bad4-877f01e2b370\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b561e0c482987fdc1c08682d00bf4c933f12313569408446d6c8a603c3556ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3caf57a9650c133c55f50d69726a83d40040bc7f173d536444d89f99854942ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30b86aff0d4af7948ead36a9ed2f4fa0fb5888de8ecdfa6eb6ffc1867030babe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-
apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf2c3303222b7cdaf3b5aa2c1847ff9e097579edeb4f7333079c340cc95ef46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975a5a5c09f940e9a57b6c6df02fdf725740fb21d70046ad5bbe3f24dec7a0bd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"iserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 07:36:22.700868 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:36:22.705249 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2348209780/tls.crt::/tmp/serving-cert-2348209780/tls.key\\\\\\\"\\\\nI0131 07:36:28.522043 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 07:36:28.528635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 07:36:28.528685 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 07:36:28.528725 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 07:36:28.528736 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 07:36:28.539744 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 07:36:28.539766 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539773 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 07:36:28.539783 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 07:36:28.539787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 07:36:28.539791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 07:36:28.539944 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 07:36:28.545260 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0131 07:36:28.545358 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd3634cc3552a315ee1ef8114db096d456b7a97a92e68ab406ceccdf1eb57f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:19Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.396861 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0a4082a-cdda-4613-8e8d-bd97c3820759\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edb15aeef1103cdf26c3e72bac394815e4ea864cccc8732f40a601af2fcd34b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d04dbf9901e3bd43ac93338643a45135bb89f2d03a5a3d1236289aee90a6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a09b3371dd08b059fbfd5a105f314ae92bdcc4804cd36d0709b4afae48842b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b6360543a81dea5540a5f02edc908d3b76daf5c4df5f8220215fb75f9504c5b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b6360543a81dea5540a5f02edc908d3b76daf5c4df5f8220215fb75f9504c5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:19Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.408470 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed10f53b-565a-4d14-a1d8-feabc15f08ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d04378db699fb51e197d67da8fbc728904d522724c2a02158c546497905d106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03515f366b860c28601b042a9a45c022c9a25aa15fab97b3e3698d3d7c1eb37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8v6ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:19Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.418780 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wtbb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b672fd90-a70c-4f27-b711-e58f269efccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f290a2cfcec4d14d2b2c21f7c56558675b8b7171bf9d1895d89f632cb35b815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dc6a9e1063a189fc3331ef1bc5bd05ac611295089d1da241ca07c529e63ab7e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:37:17Z\\\",\\\"message\\\":\\\"2026-01-31T07:36:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c53c4ff3-1503-4424-b047-c864bb103fcc\\\\n2026-01-31T07:36:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c53c4ff3-1503-4424-b047-c864bb103fcc to /host/opt/cni/bin/\\\\n2026-01-31T07:36:32Z [verbose] multus-daemon started\\\\n2026-01-31T07:36:32Z [verbose] Readiness 
Indicator file check\\\\n2026-01-31T07:37:17Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wtbb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:19Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.429655 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.429697 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.429709 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.429726 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.429740 4826 setters.go:603] "Node became 
not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:19Z","lastTransitionTime":"2026-01-31T07:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.431847 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37289ae623d633b8d9c41697e0022bac0ef40df6da836d43df3bb36e83a7a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:19Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.447747 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d1814f450832909bd5bf1c4749feef46b0919e997cb1d241c988f4ec81051d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e440dc1f7846af93bbe78f17e92d44d2f72ee0fc28c4c3b746ee3f67f0cd899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:19Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.458024 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d2b49940601c069c38a05027bd5e801dd377d4aed46a502c192ca9941468a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:19Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.470239 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5fm7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baebdb6a236b3414ff5aac56d2f2c04503b7d59c18aa78ef295484087a866b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf9c0eb924928bb2a7752b55e6ccb0fccc514359751bdf295f4f88820ac0bddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf9c0eb924928bb2a7752b55e6ccb0fccc514359751bdf295f4f88820ac0bddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5fm7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:19Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.480998 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qrw7j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"251ad51e-c383-4684-bfdb-2b9ce8098cc6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv8tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv8tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qrw7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:19Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.500028 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0580356-3fa0-4446-b20b-afc4164435c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e817cf54f4d9940c02491ebbe29ded4c0e8b3832c315b04eba99a9d36d267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b99d4cba5d569a977b7d05a7190498a4e34fefbf7dc44db8148ba398de0f5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://adf31702fe035db3a963262acd0c20802f8fd7230cf5c01cb578fa88179bd129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://161a31c4d5a9b58bef37d7177019e6eec0b53c9
56877c212337834672f4ac5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1796ef962408ccd32578b856d0339f68286f421ffbb915e0ae46c7a5989d1958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:19Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.510943 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"661bc240-bc34-44a0-bd2f-72708c4a5dc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc26e2f01400aeeb0956ae2aef52df0251350f193805892e534081b4cac248e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa95fe2bb0adc0053648f26282e011bad0ea1327773688c954b8ffbf222d6e5\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4124eac7bdc757ef7c1f68b28fb1f54a56cda06e9b781a026277efe04172f8e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e986869735ddfd85e88828d0123e35bbc507c1d31efba8c420632539ff4a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:19Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.532024 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.532057 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.532065 4826 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.532078 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.532088 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:19Z","lastTransitionTime":"2026-01-31T07:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.635459 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.635498 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.635508 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.635521 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.635528 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:19Z","lastTransitionTime":"2026-01-31T07:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.738061 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.738127 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.738151 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.738178 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.738199 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:19Z","lastTransitionTime":"2026-01-31T07:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.791698 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 23:43:04.661542817 +0000 UTC Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.808126 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.808185 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:37:19 crc kubenswrapper[4826]: E0131 07:37:19.808297 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrw7j" podUID="251ad51e-c383-4684-bfdb-2b9ce8098cc6" Jan 31 07:37:19 crc kubenswrapper[4826]: E0131 07:37:19.808385 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.840583 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.840646 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.840663 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.840685 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.840700 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:19Z","lastTransitionTime":"2026-01-31T07:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.943483 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.943558 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.943578 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.943604 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:19 crc kubenswrapper[4826]: I0131 07:37:19.943623 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:19Z","lastTransitionTime":"2026-01-31T07:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:20 crc kubenswrapper[4826]: I0131 07:37:20.046558 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:20 crc kubenswrapper[4826]: I0131 07:37:20.046625 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:20 crc kubenswrapper[4826]: I0131 07:37:20.046643 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:20 crc kubenswrapper[4826]: I0131 07:37:20.046662 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:20 crc kubenswrapper[4826]: I0131 07:37:20.046677 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:20Z","lastTransitionTime":"2026-01-31T07:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:20 crc kubenswrapper[4826]: I0131 07:37:20.149701 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:20 crc kubenswrapper[4826]: I0131 07:37:20.149774 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:20 crc kubenswrapper[4826]: I0131 07:37:20.149794 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:20 crc kubenswrapper[4826]: I0131 07:37:20.149820 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:20 crc kubenswrapper[4826]: I0131 07:37:20.149838 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:20Z","lastTransitionTime":"2026-01-31T07:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:20 crc kubenswrapper[4826]: I0131 07:37:20.252533 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:20 crc kubenswrapper[4826]: I0131 07:37:20.252589 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:20 crc kubenswrapper[4826]: I0131 07:37:20.252600 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:20 crc kubenswrapper[4826]: I0131 07:37:20.252617 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:20 crc kubenswrapper[4826]: I0131 07:37:20.252628 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:20Z","lastTransitionTime":"2026-01-31T07:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:20 crc kubenswrapper[4826]: I0131 07:37:20.354700 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:20 crc kubenswrapper[4826]: I0131 07:37:20.354740 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:20 crc kubenswrapper[4826]: I0131 07:37:20.354750 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:20 crc kubenswrapper[4826]: I0131 07:37:20.354766 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:20 crc kubenswrapper[4826]: I0131 07:37:20.354776 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:20Z","lastTransitionTime":"2026-01-31T07:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:20 crc kubenswrapper[4826]: I0131 07:37:20.457107 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:20 crc kubenswrapper[4826]: I0131 07:37:20.457161 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:20 crc kubenswrapper[4826]: I0131 07:37:20.457173 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:20 crc kubenswrapper[4826]: I0131 07:37:20.457195 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:20 crc kubenswrapper[4826]: I0131 07:37:20.457207 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:20Z","lastTransitionTime":"2026-01-31T07:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:20 crc kubenswrapper[4826]: I0131 07:37:20.559511 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:20 crc kubenswrapper[4826]: I0131 07:37:20.559541 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:20 crc kubenswrapper[4826]: I0131 07:37:20.559550 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:20 crc kubenswrapper[4826]: I0131 07:37:20.559563 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:20 crc kubenswrapper[4826]: I0131 07:37:20.559571 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:20Z","lastTransitionTime":"2026-01-31T07:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:20 crc kubenswrapper[4826]: I0131 07:37:20.661417 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:20 crc kubenswrapper[4826]: I0131 07:37:20.661453 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:20 crc kubenswrapper[4826]: I0131 07:37:20.661464 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:20 crc kubenswrapper[4826]: I0131 07:37:20.661479 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:20 crc kubenswrapper[4826]: I0131 07:37:20.661490 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:20Z","lastTransitionTime":"2026-01-31T07:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:20 crc kubenswrapper[4826]: I0131 07:37:20.763985 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:20 crc kubenswrapper[4826]: I0131 07:37:20.764023 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:20 crc kubenswrapper[4826]: I0131 07:37:20.764035 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:20 crc kubenswrapper[4826]: I0131 07:37:20.764050 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:20 crc kubenswrapper[4826]: I0131 07:37:20.764061 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:20Z","lastTransitionTime":"2026-01-31T07:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:20 crc kubenswrapper[4826]: I0131 07:37:20.792774 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 08:02:02.124599736 +0000 UTC Jan 31 07:37:20 crc kubenswrapper[4826]: I0131 07:37:20.808052 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:37:20 crc kubenswrapper[4826]: I0131 07:37:20.808105 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:37:20 crc kubenswrapper[4826]: E0131 07:37:20.808169 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:37:20 crc kubenswrapper[4826]: E0131 07:37:20.808266 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:37:20 crc kubenswrapper[4826]: I0131 07:37:20.867009 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:20 crc kubenswrapper[4826]: I0131 07:37:20.867058 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:20 crc kubenswrapper[4826]: I0131 07:37:20.867071 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:20 crc kubenswrapper[4826]: I0131 07:37:20.867087 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:20 crc kubenswrapper[4826]: I0131 07:37:20.867098 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:20Z","lastTransitionTime":"2026-01-31T07:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:20 crc kubenswrapper[4826]: I0131 07:37:20.970656 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:20 crc kubenswrapper[4826]: I0131 07:37:20.970761 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:20 crc kubenswrapper[4826]: I0131 07:37:20.970783 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:20 crc kubenswrapper[4826]: I0131 07:37:20.970806 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:20 crc kubenswrapper[4826]: I0131 07:37:20.970823 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:20Z","lastTransitionTime":"2026-01-31T07:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:21 crc kubenswrapper[4826]: I0131 07:37:21.073757 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:21 crc kubenswrapper[4826]: I0131 07:37:21.073821 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:21 crc kubenswrapper[4826]: I0131 07:37:21.073844 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:21 crc kubenswrapper[4826]: I0131 07:37:21.073873 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:21 crc kubenswrapper[4826]: I0131 07:37:21.073895 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:21Z","lastTransitionTime":"2026-01-31T07:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:21 crc kubenswrapper[4826]: I0131 07:37:21.175751 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:21 crc kubenswrapper[4826]: I0131 07:37:21.175793 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:21 crc kubenswrapper[4826]: I0131 07:37:21.175804 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:21 crc kubenswrapper[4826]: I0131 07:37:21.175818 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:21 crc kubenswrapper[4826]: I0131 07:37:21.175833 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:21Z","lastTransitionTime":"2026-01-31T07:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:21 crc kubenswrapper[4826]: I0131 07:37:21.278304 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:21 crc kubenswrapper[4826]: I0131 07:37:21.278357 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:21 crc kubenswrapper[4826]: I0131 07:37:21.278368 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:21 crc kubenswrapper[4826]: I0131 07:37:21.278384 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:21 crc kubenswrapper[4826]: I0131 07:37:21.278395 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:21Z","lastTransitionTime":"2026-01-31T07:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:21 crc kubenswrapper[4826]: I0131 07:37:21.381643 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:21 crc kubenswrapper[4826]: I0131 07:37:21.381703 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:21 crc kubenswrapper[4826]: I0131 07:37:21.381720 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:21 crc kubenswrapper[4826]: I0131 07:37:21.381743 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:21 crc kubenswrapper[4826]: I0131 07:37:21.381763 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:21Z","lastTransitionTime":"2026-01-31T07:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:21 crc kubenswrapper[4826]: I0131 07:37:21.483582 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:21 crc kubenswrapper[4826]: I0131 07:37:21.483626 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:21 crc kubenswrapper[4826]: I0131 07:37:21.483641 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:21 crc kubenswrapper[4826]: I0131 07:37:21.483657 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:21 crc kubenswrapper[4826]: I0131 07:37:21.483669 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:21Z","lastTransitionTime":"2026-01-31T07:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:21 crc kubenswrapper[4826]: I0131 07:37:21.585641 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:21 crc kubenswrapper[4826]: I0131 07:37:21.585690 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:21 crc kubenswrapper[4826]: I0131 07:37:21.585699 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:21 crc kubenswrapper[4826]: I0131 07:37:21.585714 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:21 crc kubenswrapper[4826]: I0131 07:37:21.585724 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:21Z","lastTransitionTime":"2026-01-31T07:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:21 crc kubenswrapper[4826]: I0131 07:37:21.687830 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:21 crc kubenswrapper[4826]: I0131 07:37:21.687870 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:21 crc kubenswrapper[4826]: I0131 07:37:21.687879 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:21 crc kubenswrapper[4826]: I0131 07:37:21.687901 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:21 crc kubenswrapper[4826]: I0131 07:37:21.687912 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:21Z","lastTransitionTime":"2026-01-31T07:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:21 crc kubenswrapper[4826]: I0131 07:37:21.790841 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:21 crc kubenswrapper[4826]: I0131 07:37:21.790893 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:21 crc kubenswrapper[4826]: I0131 07:37:21.790910 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:21 crc kubenswrapper[4826]: I0131 07:37:21.790935 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:21 crc kubenswrapper[4826]: I0131 07:37:21.790951 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:21Z","lastTransitionTime":"2026-01-31T07:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:21 crc kubenswrapper[4826]: I0131 07:37:21.793335 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 02:13:24.058243878 +0000 UTC Jan 31 07:37:21 crc kubenswrapper[4826]: I0131 07:37:21.808816 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:37:21 crc kubenswrapper[4826]: I0131 07:37:21.808889 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:37:21 crc kubenswrapper[4826]: E0131 07:37:21.808980 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:37:21 crc kubenswrapper[4826]: E0131 07:37:21.809171 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrw7j" podUID="251ad51e-c383-4684-bfdb-2b9ce8098cc6" Jan 31 07:37:21 crc kubenswrapper[4826]: I0131 07:37:21.893945 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:21 crc kubenswrapper[4826]: I0131 07:37:21.893999 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:21 crc kubenswrapper[4826]: I0131 07:37:21.894012 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:21 crc kubenswrapper[4826]: I0131 07:37:21.894031 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:21 crc kubenswrapper[4826]: I0131 07:37:21.894045 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:21Z","lastTransitionTime":"2026-01-31T07:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:21 crc kubenswrapper[4826]: I0131 07:37:21.996317 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:21 crc kubenswrapper[4826]: I0131 07:37:21.996352 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:21 crc kubenswrapper[4826]: I0131 07:37:21.996360 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:21 crc kubenswrapper[4826]: I0131 07:37:21.996372 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:21 crc kubenswrapper[4826]: I0131 07:37:21.996380 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:21Z","lastTransitionTime":"2026-01-31T07:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:22 crc kubenswrapper[4826]: I0131 07:37:22.098840 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:22 crc kubenswrapper[4826]: I0131 07:37:22.098898 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:22 crc kubenswrapper[4826]: I0131 07:37:22.098915 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:22 crc kubenswrapper[4826]: I0131 07:37:22.098941 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:22 crc kubenswrapper[4826]: I0131 07:37:22.098959 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:22Z","lastTransitionTime":"2026-01-31T07:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:22 crc kubenswrapper[4826]: I0131 07:37:22.201755 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:22 crc kubenswrapper[4826]: I0131 07:37:22.201809 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:22 crc kubenswrapper[4826]: I0131 07:37:22.201826 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:22 crc kubenswrapper[4826]: I0131 07:37:22.201848 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:22 crc kubenswrapper[4826]: I0131 07:37:22.201863 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:22Z","lastTransitionTime":"2026-01-31T07:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:22 crc kubenswrapper[4826]: I0131 07:37:22.304301 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:22 crc kubenswrapper[4826]: I0131 07:37:22.304667 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:22 crc kubenswrapper[4826]: I0131 07:37:22.304821 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:22 crc kubenswrapper[4826]: I0131 07:37:22.305044 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:22 crc kubenswrapper[4826]: I0131 07:37:22.305215 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:22Z","lastTransitionTime":"2026-01-31T07:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:22 crc kubenswrapper[4826]: I0131 07:37:22.408637 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:22 crc kubenswrapper[4826]: I0131 07:37:22.408712 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:22 crc kubenswrapper[4826]: I0131 07:37:22.408738 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:22 crc kubenswrapper[4826]: I0131 07:37:22.408768 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:22 crc kubenswrapper[4826]: I0131 07:37:22.408790 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:22Z","lastTransitionTime":"2026-01-31T07:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:22 crc kubenswrapper[4826]: I0131 07:37:22.511302 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:22 crc kubenswrapper[4826]: I0131 07:37:22.511371 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:22 crc kubenswrapper[4826]: I0131 07:37:22.511389 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:22 crc kubenswrapper[4826]: I0131 07:37:22.511416 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:22 crc kubenswrapper[4826]: I0131 07:37:22.511434 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:22Z","lastTransitionTime":"2026-01-31T07:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:22 crc kubenswrapper[4826]: I0131 07:37:22.614019 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:22 crc kubenswrapper[4826]: I0131 07:37:22.614056 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:22 crc kubenswrapper[4826]: I0131 07:37:22.614065 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:22 crc kubenswrapper[4826]: I0131 07:37:22.614080 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:22 crc kubenswrapper[4826]: I0131 07:37:22.614090 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:22Z","lastTransitionTime":"2026-01-31T07:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:22 crc kubenswrapper[4826]: I0131 07:37:22.716940 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:22 crc kubenswrapper[4826]: I0131 07:37:22.716992 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:22 crc kubenswrapper[4826]: I0131 07:37:22.717001 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:22 crc kubenswrapper[4826]: I0131 07:37:22.717014 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:22 crc kubenswrapper[4826]: I0131 07:37:22.717045 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:22Z","lastTransitionTime":"2026-01-31T07:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:22 crc kubenswrapper[4826]: I0131 07:37:22.794380 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 09:55:30.868044944 +0000 UTC Jan 31 07:37:22 crc kubenswrapper[4826]: I0131 07:37:22.809000 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:37:22 crc kubenswrapper[4826]: E0131 07:37:22.809395 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:37:22 crc kubenswrapper[4826]: I0131 07:37:22.809145 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:37:22 crc kubenswrapper[4826]: E0131 07:37:22.809722 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:37:22 crc kubenswrapper[4826]: I0131 07:37:22.819471 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:22 crc kubenswrapper[4826]: I0131 07:37:22.819503 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:22 crc kubenswrapper[4826]: I0131 07:37:22.819513 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:22 crc kubenswrapper[4826]: I0131 07:37:22.819527 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:22 crc kubenswrapper[4826]: I0131 07:37:22.819536 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:22Z","lastTransitionTime":"2026-01-31T07:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:22 crc kubenswrapper[4826]: I0131 07:37:22.922727 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:22 crc kubenswrapper[4826]: I0131 07:37:22.922790 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:22 crc kubenswrapper[4826]: I0131 07:37:22.922807 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:22 crc kubenswrapper[4826]: I0131 07:37:22.922831 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:22 crc kubenswrapper[4826]: I0131 07:37:22.922847 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:22Z","lastTransitionTime":"2026-01-31T07:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:23 crc kubenswrapper[4826]: I0131 07:37:23.025113 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:23 crc kubenswrapper[4826]: I0131 07:37:23.025175 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:23 crc kubenswrapper[4826]: I0131 07:37:23.025189 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:23 crc kubenswrapper[4826]: I0131 07:37:23.025212 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:23 crc kubenswrapper[4826]: I0131 07:37:23.025228 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:23Z","lastTransitionTime":"2026-01-31T07:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:23 crc kubenswrapper[4826]: I0131 07:37:23.128422 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:23 crc kubenswrapper[4826]: I0131 07:37:23.129241 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:23 crc kubenswrapper[4826]: I0131 07:37:23.129345 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:23 crc kubenswrapper[4826]: I0131 07:37:23.129442 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:23 crc kubenswrapper[4826]: I0131 07:37:23.129541 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:23Z","lastTransitionTime":"2026-01-31T07:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:23 crc kubenswrapper[4826]: I0131 07:37:23.232612 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:23 crc kubenswrapper[4826]: I0131 07:37:23.232661 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:23 crc kubenswrapper[4826]: I0131 07:37:23.232677 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:23 crc kubenswrapper[4826]: I0131 07:37:23.232699 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:23 crc kubenswrapper[4826]: I0131 07:37:23.232717 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:23Z","lastTransitionTime":"2026-01-31T07:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:23 crc kubenswrapper[4826]: I0131 07:37:23.335469 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:23 crc kubenswrapper[4826]: I0131 07:37:23.335529 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:23 crc kubenswrapper[4826]: I0131 07:37:23.335544 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:23 crc kubenswrapper[4826]: I0131 07:37:23.335561 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:23 crc kubenswrapper[4826]: I0131 07:37:23.335574 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:23Z","lastTransitionTime":"2026-01-31T07:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:23 crc kubenswrapper[4826]: I0131 07:37:23.439200 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:23 crc kubenswrapper[4826]: I0131 07:37:23.439268 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:23 crc kubenswrapper[4826]: I0131 07:37:23.439290 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:23 crc kubenswrapper[4826]: I0131 07:37:23.439315 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:23 crc kubenswrapper[4826]: I0131 07:37:23.439335 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:23Z","lastTransitionTime":"2026-01-31T07:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:23 crc kubenswrapper[4826]: I0131 07:37:23.542564 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:23 crc kubenswrapper[4826]: I0131 07:37:23.542622 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:23 crc kubenswrapper[4826]: I0131 07:37:23.542693 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:23 crc kubenswrapper[4826]: I0131 07:37:23.542718 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:23 crc kubenswrapper[4826]: I0131 07:37:23.542734 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:23Z","lastTransitionTime":"2026-01-31T07:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:23 crc kubenswrapper[4826]: I0131 07:37:23.646101 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:23 crc kubenswrapper[4826]: I0131 07:37:23.646172 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:23 crc kubenswrapper[4826]: I0131 07:37:23.646193 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:23 crc kubenswrapper[4826]: I0131 07:37:23.646216 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:23 crc kubenswrapper[4826]: I0131 07:37:23.646233 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:23Z","lastTransitionTime":"2026-01-31T07:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:23 crc kubenswrapper[4826]: I0131 07:37:23.762053 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:23 crc kubenswrapper[4826]: I0131 07:37:23.762113 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:23 crc kubenswrapper[4826]: I0131 07:37:23.762130 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:23 crc kubenswrapper[4826]: I0131 07:37:23.762155 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:23 crc kubenswrapper[4826]: I0131 07:37:23.762172 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:23Z","lastTransitionTime":"2026-01-31T07:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:23 crc kubenswrapper[4826]: I0131 07:37:23.794551 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 00:38:09.289513361 +0000 UTC Jan 31 07:37:23 crc kubenswrapper[4826]: I0131 07:37:23.808931 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:37:23 crc kubenswrapper[4826]: I0131 07:37:23.808937 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:37:23 crc kubenswrapper[4826]: E0131 07:37:23.809176 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qrw7j" podUID="251ad51e-c383-4684-bfdb-2b9ce8098cc6" Jan 31 07:37:23 crc kubenswrapper[4826]: E0131 07:37:23.809321 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:37:23 crc kubenswrapper[4826]: I0131 07:37:23.864664 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:23 crc kubenswrapper[4826]: I0131 07:37:23.864708 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:23 crc kubenswrapper[4826]: I0131 07:37:23.864721 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:23 crc kubenswrapper[4826]: I0131 07:37:23.864738 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:23 crc kubenswrapper[4826]: I0131 07:37:23.864750 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:23Z","lastTransitionTime":"2026-01-31T07:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:23 crc kubenswrapper[4826]: I0131 07:37:23.967086 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:23 crc kubenswrapper[4826]: I0131 07:37:23.967136 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:23 crc kubenswrapper[4826]: I0131 07:37:23.967153 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:23 crc kubenswrapper[4826]: I0131 07:37:23.967175 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:23 crc kubenswrapper[4826]: I0131 07:37:23.967193 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:23Z","lastTransitionTime":"2026-01-31T07:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:24 crc kubenswrapper[4826]: I0131 07:37:24.069774 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:24 crc kubenswrapper[4826]: I0131 07:37:24.069816 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:24 crc kubenswrapper[4826]: I0131 07:37:24.069827 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:24 crc kubenswrapper[4826]: I0131 07:37:24.069844 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:24 crc kubenswrapper[4826]: I0131 07:37:24.069855 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:24Z","lastTransitionTime":"2026-01-31T07:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:24 crc kubenswrapper[4826]: I0131 07:37:24.172760 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:24 crc kubenswrapper[4826]: I0131 07:37:24.173742 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:24 crc kubenswrapper[4826]: I0131 07:37:24.174018 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:24 crc kubenswrapper[4826]: I0131 07:37:24.174457 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:24 crc kubenswrapper[4826]: I0131 07:37:24.174861 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:24Z","lastTransitionTime":"2026-01-31T07:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:24 crc kubenswrapper[4826]: I0131 07:37:24.277930 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:24 crc kubenswrapper[4826]: I0131 07:37:24.278268 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:24 crc kubenswrapper[4826]: I0131 07:37:24.278402 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:24 crc kubenswrapper[4826]: I0131 07:37:24.278533 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:24 crc kubenswrapper[4826]: I0131 07:37:24.278649 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:24Z","lastTransitionTime":"2026-01-31T07:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:24 crc kubenswrapper[4826]: I0131 07:37:24.381316 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:24 crc kubenswrapper[4826]: I0131 07:37:24.381369 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:24 crc kubenswrapper[4826]: I0131 07:37:24.381386 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:24 crc kubenswrapper[4826]: I0131 07:37:24.381409 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:24 crc kubenswrapper[4826]: I0131 07:37:24.381425 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:24Z","lastTransitionTime":"2026-01-31T07:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:24 crc kubenswrapper[4826]: I0131 07:37:24.486311 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:24 crc kubenswrapper[4826]: I0131 07:37:24.486384 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:24 crc kubenswrapper[4826]: I0131 07:37:24.486401 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:24 crc kubenswrapper[4826]: I0131 07:37:24.486425 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:24 crc kubenswrapper[4826]: I0131 07:37:24.486444 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:24Z","lastTransitionTime":"2026-01-31T07:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:24 crc kubenswrapper[4826]: I0131 07:37:24.590271 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:24 crc kubenswrapper[4826]: I0131 07:37:24.590341 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:24 crc kubenswrapper[4826]: I0131 07:37:24.590361 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:24 crc kubenswrapper[4826]: I0131 07:37:24.590385 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:24 crc kubenswrapper[4826]: I0131 07:37:24.590404 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:24Z","lastTransitionTime":"2026-01-31T07:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:24 crc kubenswrapper[4826]: I0131 07:37:24.694114 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:24 crc kubenswrapper[4826]: I0131 07:37:24.694153 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:24 crc kubenswrapper[4826]: I0131 07:37:24.694164 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:24 crc kubenswrapper[4826]: I0131 07:37:24.694181 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:24 crc kubenswrapper[4826]: I0131 07:37:24.694193 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:24Z","lastTransitionTime":"2026-01-31T07:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:24 crc kubenswrapper[4826]: I0131 07:37:24.795134 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 12:03:19.327320459 +0000 UTC Jan 31 07:37:24 crc kubenswrapper[4826]: I0131 07:37:24.797305 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:24 crc kubenswrapper[4826]: I0131 07:37:24.797392 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:24 crc kubenswrapper[4826]: I0131 07:37:24.797405 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:24 crc kubenswrapper[4826]: I0131 07:37:24.797422 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:24 crc kubenswrapper[4826]: I0131 07:37:24.797435 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:24Z","lastTransitionTime":"2026-01-31T07:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:24 crc kubenswrapper[4826]: I0131 07:37:24.808184 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:37:24 crc kubenswrapper[4826]: I0131 07:37:24.808295 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:37:24 crc kubenswrapper[4826]: E0131 07:37:24.808429 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:37:24 crc kubenswrapper[4826]: E0131 07:37:24.808537 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:37:24 crc kubenswrapper[4826]: I0131 07:37:24.900190 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:24 crc kubenswrapper[4826]: I0131 07:37:24.900250 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:24 crc kubenswrapper[4826]: I0131 07:37:24.900271 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:24 crc kubenswrapper[4826]: I0131 07:37:24.900295 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:24 crc kubenswrapper[4826]: I0131 07:37:24.900314 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:24Z","lastTransitionTime":"2026-01-31T07:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:25 crc kubenswrapper[4826]: I0131 07:37:25.003540 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:25 crc kubenswrapper[4826]: I0131 07:37:25.003593 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:25 crc kubenswrapper[4826]: I0131 07:37:25.003604 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:25 crc kubenswrapper[4826]: I0131 07:37:25.003622 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:25 crc kubenswrapper[4826]: I0131 07:37:25.003634 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:25Z","lastTransitionTime":"2026-01-31T07:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:25 crc kubenswrapper[4826]: I0131 07:37:25.106013 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:25 crc kubenswrapper[4826]: I0131 07:37:25.106087 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:25 crc kubenswrapper[4826]: I0131 07:37:25.106105 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:25 crc kubenswrapper[4826]: I0131 07:37:25.106129 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:25 crc kubenswrapper[4826]: I0131 07:37:25.106147 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:25Z","lastTransitionTime":"2026-01-31T07:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:25 crc kubenswrapper[4826]: I0131 07:37:25.209169 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:25 crc kubenswrapper[4826]: I0131 07:37:25.209243 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:25 crc kubenswrapper[4826]: I0131 07:37:25.209261 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:25 crc kubenswrapper[4826]: I0131 07:37:25.209285 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:25 crc kubenswrapper[4826]: I0131 07:37:25.209302 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:25Z","lastTransitionTime":"2026-01-31T07:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:25 crc kubenswrapper[4826]: I0131 07:37:25.311160 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:25 crc kubenswrapper[4826]: I0131 07:37:25.311199 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:25 crc kubenswrapper[4826]: I0131 07:37:25.311209 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:25 crc kubenswrapper[4826]: I0131 07:37:25.311224 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:25 crc kubenswrapper[4826]: I0131 07:37:25.311235 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:25Z","lastTransitionTime":"2026-01-31T07:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:25 crc kubenswrapper[4826]: I0131 07:37:25.413828 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:25 crc kubenswrapper[4826]: I0131 07:37:25.413870 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:25 crc kubenswrapper[4826]: I0131 07:37:25.413882 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:25 crc kubenswrapper[4826]: I0131 07:37:25.413898 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:25 crc kubenswrapper[4826]: I0131 07:37:25.413909 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:25Z","lastTransitionTime":"2026-01-31T07:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:25 crc kubenswrapper[4826]: I0131 07:37:25.517576 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:25 crc kubenswrapper[4826]: I0131 07:37:25.517624 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:25 crc kubenswrapper[4826]: I0131 07:37:25.517635 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:25 crc kubenswrapper[4826]: I0131 07:37:25.517653 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:25 crc kubenswrapper[4826]: I0131 07:37:25.517666 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:25Z","lastTransitionTime":"2026-01-31T07:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:25 crc kubenswrapper[4826]: I0131 07:37:25.620774 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:25 crc kubenswrapper[4826]: I0131 07:37:25.621602 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:25 crc kubenswrapper[4826]: I0131 07:37:25.621641 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:25 crc kubenswrapper[4826]: I0131 07:37:25.621669 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:25 crc kubenswrapper[4826]: I0131 07:37:25.621688 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:25Z","lastTransitionTime":"2026-01-31T07:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:25 crc kubenswrapper[4826]: I0131 07:37:25.724873 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:25 crc kubenswrapper[4826]: I0131 07:37:25.724922 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:25 crc kubenswrapper[4826]: I0131 07:37:25.724935 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:25 crc kubenswrapper[4826]: I0131 07:37:25.724951 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:25 crc kubenswrapper[4826]: I0131 07:37:25.724962 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:25Z","lastTransitionTime":"2026-01-31T07:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:25 crc kubenswrapper[4826]: I0131 07:37:25.805052 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 02:50:50.917975122 +0000 UTC Jan 31 07:37:25 crc kubenswrapper[4826]: I0131 07:37:25.808428 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:37:25 crc kubenswrapper[4826]: I0131 07:37:25.808458 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:37:25 crc kubenswrapper[4826]: E0131 07:37:25.808662 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrw7j" podUID="251ad51e-c383-4684-bfdb-2b9ce8098cc6" Jan 31 07:37:25 crc kubenswrapper[4826]: E0131 07:37:25.809908 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:37:25 crc kubenswrapper[4826]: I0131 07:37:25.824681 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 31 07:37:25 crc kubenswrapper[4826]: I0131 07:37:25.827347 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:25 crc kubenswrapper[4826]: I0131 07:37:25.827400 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:25 crc kubenswrapper[4826]: I0131 07:37:25.827425 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:25 crc kubenswrapper[4826]: I0131 07:37:25.827454 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:25 crc kubenswrapper[4826]: I0131 07:37:25.827477 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:25Z","lastTransitionTime":"2026-01-31T07:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:25 crc kubenswrapper[4826]: I0131 07:37:25.930148 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:25 crc kubenswrapper[4826]: I0131 07:37:25.930206 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:25 crc kubenswrapper[4826]: I0131 07:37:25.930224 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:25 crc kubenswrapper[4826]: I0131 07:37:25.930248 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:25 crc kubenswrapper[4826]: I0131 07:37:25.930265 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:25Z","lastTransitionTime":"2026-01-31T07:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.033124 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.033196 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.033224 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.033250 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.033267 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:26Z","lastTransitionTime":"2026-01-31T07:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.136051 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.136119 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.136136 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.136160 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.136178 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:26Z","lastTransitionTime":"2026-01-31T07:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.239932 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.240008 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.240021 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.240041 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.240055 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:26Z","lastTransitionTime":"2026-01-31T07:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.306470 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.306544 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.306564 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.306588 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.306606 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:26Z","lastTransitionTime":"2026-01-31T07:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:26 crc kubenswrapper[4826]: E0131 07:37:26.327170 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"410cfc83-ff74-4210-b833-727c4d6db644\\\",\\\"systemUUID\\\":\\\"5dc4a50b-5ade-4352-ba95-1ca9483f1f64\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:26Z is after 
2025-08-24T17:21:41Z" Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.332959 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.333126 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.333149 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.333776 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.333845 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:26Z","lastTransitionTime":"2026-01-31T07:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
2025-08-24T17:21:41Z" Jan 31 07:37:26 crc kubenswrapper[4826]: E0131 07:37:26.432649 4826 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.434252 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.434305 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.434323 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.434345 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.434362 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:26Z","lastTransitionTime":"2026-01-31T07:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.538346 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.538428 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.538444 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.538466 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.538513 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:26Z","lastTransitionTime":"2026-01-31T07:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.641945 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.642049 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.642073 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.642098 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.642116 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:26Z","lastTransitionTime":"2026-01-31T07:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.745592 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.745652 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.745674 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.745703 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.745724 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:26Z","lastTransitionTime":"2026-01-31T07:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.805488 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 02:20:35.774732379 +0000 UTC Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.808285 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:37:26 crc kubenswrapper[4826]: E0131 07:37:26.808478 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.809038 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:37:26 crc kubenswrapper[4826]: E0131 07:37:26.809195 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.848767 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.848808 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.848819 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.848835 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.848872 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:26Z","lastTransitionTime":"2026-01-31T07:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.951908 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.951951 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.951999 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.952032 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:26 crc kubenswrapper[4826]: I0131 07:37:26.952057 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:26Z","lastTransitionTime":"2026-01-31T07:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:27 crc kubenswrapper[4826]: I0131 07:37:27.054456 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:27 crc kubenswrapper[4826]: I0131 07:37:27.054511 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:27 crc kubenswrapper[4826]: I0131 07:37:27.054528 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:27 crc kubenswrapper[4826]: I0131 07:37:27.054552 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:27 crc kubenswrapper[4826]: I0131 07:37:27.054569 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:27Z","lastTransitionTime":"2026-01-31T07:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:27 crc kubenswrapper[4826]: I0131 07:37:27.157234 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:27 crc kubenswrapper[4826]: I0131 07:37:27.157280 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:27 crc kubenswrapper[4826]: I0131 07:37:27.157291 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:27 crc kubenswrapper[4826]: I0131 07:37:27.157311 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:27 crc kubenswrapper[4826]: I0131 07:37:27.157323 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:27Z","lastTransitionTime":"2026-01-31T07:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:27 crc kubenswrapper[4826]: I0131 07:37:27.260709 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:27 crc kubenswrapper[4826]: I0131 07:37:27.260769 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:27 crc kubenswrapper[4826]: I0131 07:37:27.260785 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:27 crc kubenswrapper[4826]: I0131 07:37:27.260851 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:27 crc kubenswrapper[4826]: I0131 07:37:27.260911 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:27Z","lastTransitionTime":"2026-01-31T07:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:27 crc kubenswrapper[4826]: I0131 07:37:27.363857 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:27 crc kubenswrapper[4826]: I0131 07:37:27.363930 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:27 crc kubenswrapper[4826]: I0131 07:37:27.363948 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:27 crc kubenswrapper[4826]: I0131 07:37:27.364005 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:27 crc kubenswrapper[4826]: I0131 07:37:27.364025 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:27Z","lastTransitionTime":"2026-01-31T07:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:27 crc kubenswrapper[4826]: I0131 07:37:27.467476 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:27 crc kubenswrapper[4826]: I0131 07:37:27.467540 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:27 crc kubenswrapper[4826]: I0131 07:37:27.467562 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:27 crc kubenswrapper[4826]: I0131 07:37:27.467590 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:27 crc kubenswrapper[4826]: I0131 07:37:27.467614 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:27Z","lastTransitionTime":"2026-01-31T07:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:27 crc kubenswrapper[4826]: I0131 07:37:27.570299 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:27 crc kubenswrapper[4826]: I0131 07:37:27.570363 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:27 crc kubenswrapper[4826]: I0131 07:37:27.570384 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:27 crc kubenswrapper[4826]: I0131 07:37:27.570412 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:27 crc kubenswrapper[4826]: I0131 07:37:27.570431 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:27Z","lastTransitionTime":"2026-01-31T07:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:27 crc kubenswrapper[4826]: I0131 07:37:27.672691 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:27 crc kubenswrapper[4826]: I0131 07:37:27.672758 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:27 crc kubenswrapper[4826]: I0131 07:37:27.672781 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:27 crc kubenswrapper[4826]: I0131 07:37:27.672811 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:27 crc kubenswrapper[4826]: I0131 07:37:27.672837 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:27Z","lastTransitionTime":"2026-01-31T07:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:27 crc kubenswrapper[4826]: I0131 07:37:27.775299 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:27 crc kubenswrapper[4826]: I0131 07:37:27.775365 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:27 crc kubenswrapper[4826]: I0131 07:37:27.775388 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:27 crc kubenswrapper[4826]: I0131 07:37:27.775418 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:27 crc kubenswrapper[4826]: I0131 07:37:27.775439 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:27Z","lastTransitionTime":"2026-01-31T07:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:27 crc kubenswrapper[4826]: I0131 07:37:27.805836 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 22:57:14.30381604 +0000 UTC Jan 31 07:37:27 crc kubenswrapper[4826]: I0131 07:37:27.808310 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:37:27 crc kubenswrapper[4826]: I0131 07:37:27.808310 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:37:27 crc kubenswrapper[4826]: E0131 07:37:27.808568 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qrw7j" podUID="251ad51e-c383-4684-bfdb-2b9ce8098cc6" Jan 31 07:37:27 crc kubenswrapper[4826]: E0131 07:37:27.808652 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:37:27 crc kubenswrapper[4826]: I0131 07:37:27.878429 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:27 crc kubenswrapper[4826]: I0131 07:37:27.878491 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:27 crc kubenswrapper[4826]: I0131 07:37:27.878509 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:27 crc kubenswrapper[4826]: I0131 07:37:27.878532 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:27 crc kubenswrapper[4826]: I0131 07:37:27.878551 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:27Z","lastTransitionTime":"2026-01-31T07:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:27 crc kubenswrapper[4826]: I0131 07:37:27.980808 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:27 crc kubenswrapper[4826]: I0131 07:37:27.980859 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:27 crc kubenswrapper[4826]: I0131 07:37:27.980874 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:27 crc kubenswrapper[4826]: I0131 07:37:27.980926 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:27 crc kubenswrapper[4826]: I0131 07:37:27.980946 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:27Z","lastTransitionTime":"2026-01-31T07:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.084479 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.084552 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.084570 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.084594 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.084615 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:28Z","lastTransitionTime":"2026-01-31T07:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.186825 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.186947 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.186996 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.187025 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.187042 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:28Z","lastTransitionTime":"2026-01-31T07:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.290832 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.290882 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.290902 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.290928 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.290951 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:28Z","lastTransitionTime":"2026-01-31T07:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.393376 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.393412 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.393423 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.393440 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.393454 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:28Z","lastTransitionTime":"2026-01-31T07:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.496619 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.496658 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.496669 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.496687 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.496704 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:28Z","lastTransitionTime":"2026-01-31T07:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.599514 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.599586 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.599611 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.599643 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.599665 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:28Z","lastTransitionTime":"2026-01-31T07:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.702636 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.702692 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.702709 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.702734 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.702751 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:28Z","lastTransitionTime":"2026-01-31T07:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.804910 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.805050 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.805075 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.805098 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.805116 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:28Z","lastTransitionTime":"2026-01-31T07:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.806136 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 21:43:42.551272071 +0000 UTC Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.808560 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:37:28 crc kubenswrapper[4826]: E0131 07:37:28.808701 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.808566 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:37:28 crc kubenswrapper[4826]: E0131 07:37:28.809259 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.834706 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37289ae623d633b8d9c41697e0022bac0ef40df6da836d43df3bb36e83a7a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:28Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.856370 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d1814f450832909bd5bf1c4749feef46b0919e997cb1d241c988f4ec81051d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e440dc1f7846af93bbe78f17e92d44d2f72ee0fc28c4c3b746ee3f67f0cd899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:28Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.871802 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed10f53b-565a-4d14-a1d8-feabc15f08ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d04378db699fb51e197d67da8fbc728904d522724c2a02158c546497905d106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03515f366b860c28601b042a9a45c022c9a25aa15fab97b3e3698d3d7c1eb37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8v6ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:28Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.894171 4826 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-wtbb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b672fd90-a70c-4f27-b711-e58f269efccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f290a2cfcec4d14d2b2c21f7c56558675b8b7171bf9d1895d89f632cb35b815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dc6a9e1063a189fc3331ef1bc5bd05ac611295089d1da241ca07c529e63ab7e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:37:17Z\\\",\\\"message\\\":\\\"2026-01-31T07:36:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c53c4ff3-1503-4424-b047-c864bb103fcc\\\\n2026-01-31T07:36:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c53c4ff3-1503-4424-b047-c864bb103fcc to /host/opt/cni/bin/\\\\n2026-01-31T07:36:32Z [verbose] multus-daemon started\\\\n2026-01-31T07:36:32Z [verbose] Readiness Indicator file check\\\\n2026-01-31T07:37:17Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wtbb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:28Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.907200 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.907249 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.907266 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.907289 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.907306 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:28Z","lastTransitionTime":"2026-01-31T07:37:28Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.920955 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0580356-3fa0-4446-b20b-afc4164435c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e817cf54f4d9940c02491ebbe29ded4c0e8b3832c315b04eba99a9d36d267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b99d4cba5d569a977b7d05a7190498a4e34fefbf7dc44db8148ba398de0f5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://adf31702fe035db3a963262acd0c20802f8fd7230cf5c01cb578fa88179bd129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://161a31c4d5a9b58bef37d7177019e6eec0b53c956877c212337834672f4ac5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1796ef962408ccd32578b856d0339f68286f421ffbb915e0ae46c7a5989d1958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fal
se,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:28Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.940123 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"661bc240-bc34-44a0-bd2f-72708c4a5dc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc26e2f01400aeeb0956ae2aef52df0251350f193805892e534081b4cac248e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa95fe2bb0adc0053648f26282e011bad0ea1327773688c954b8ffbf222d6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4124eac7bdc757ef7c1f68b28fb1f54a56cda06e9b781a026277efe04172f8e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e986869735ddfd85e88828d0123e35bbc507c1d31efba8c420632539ff4a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:28Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.957227 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d2b49940601c069c38a05027bd5e801dd377d4aed46a502c192ca9941468a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-31T07:37:28Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.981685 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5fm7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baebdb6a236b3414ff5aac56d2f2c04503b7d59c18aa78ef295484087a866b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e9
dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf9c0eb924928bb2a7752b55e6ccb0fccc514359751bdf295f4f88820ac0bddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf9c0eb924928bb2a7752b55e6ccb0fccc514359751bdf295f4f88820ac0bddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5fm7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:28Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:28 crc kubenswrapper[4826]: I0131 07:37:28.999113 4826 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qrw7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"251ad51e-c383-4684-bfdb-2b9ce8098cc6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv8tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv8tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qrw7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:28Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.011000 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.011083 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 
07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.011104 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.011145 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.011163 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:29Z","lastTransitionTime":"2026-01-31T07:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.020547 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4a77305-8126-4747-b0fb-6ac1e27be524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6744ae097d36ec0c3998da84dd5d0b9a274604c91b97d324f317381c9ba7f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db00cb3b7ea8c50c1725ebdf0d9eab1e7e1f96fbfe471ec206e102e74cefd82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db00cb3b7ea8c50c1725ebdf0d9eab1e7e1f96fbfe471ec206e102e74cefd82f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Com
pleted\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:29Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.042375 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:29Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.060015 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7xbj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32b69655-b895-4001-8f50-17a9d73056f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6e04fe556789c17beb1c110cda356f63730016f0d8f00d968ec9af1a10f658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6xcl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15eef89d03854f4fb45924748ccf1c12133be03efd45374daa2cceafd673136a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6xcl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h7xbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:29Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.090271 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5db04412-b62f-417c-91ac-776767d6102f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf6d3312839b88263ec4c59e54aa012ea0ae8e1
54421339e6afa05bbdfda85f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf6d3312839b88263ec4c59e54aa012ea0ae8e154421339e6afa05bbdfda85f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:36:59Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI0131 07:36:59.734419 6450 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 07:36:59.734455 6450 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 07:36:59.734461 6450 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 07:36:59.734474 6450 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 07:36:59.734513 6450 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 07:36:59.734518 6450 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 07:36:59.734548 6450 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 07:36:59.734563 6450 factory.go:656] Stopping watch factory\\\\nI0131 07:36:59.734576 6450 ovnkube.go:599] Stopped ovnkube\\\\nI0131 07:36:59.734618 6450 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 07:36:59.734629 6450 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 07:36:59.734637 6450 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 07:36:59.734644 6450 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 07:36:59.734652 6450 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 07:36:59.734659 6450 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qvwnb_openshift-ovn-kubernetes(5db04412-b62f-417c-91ac-776767d6102f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvwnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:29Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.103871 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6lwnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b53fa37-2f8b-49d4-bd96-2bfe008beba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4800822b9eb7af18af4eaf5699b582b98e900e6a645e17ec2a5a09757777ad75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm9x7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6lwnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:29Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.114530 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.114592 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.114609 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.114632 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.114649 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:29Z","lastTransitionTime":"2026-01-31T07:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.123060 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63a72bdc-ae2c-4ce3-bad4-877f01e2b370\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b561e0c482987fdc1c08682d00bf4c933f12313569408446d6c8a603c3556ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3caf57a9650c133c55f50d69726a83d40040bc7f173d536444d89f99854942ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30b86aff0d4af7948ead36a9ed2f4fa0fb5888de8ecdfa6eb6ffc1867030babe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf2c3303222b7cdaf3b5aa2c1847ff9e097579edeb4f7333079c340cc95ef46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975a5a5c09f940e9a57b6c6df02fdf725740fb21d70046ad5bbe3f24dec7a0bd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"iserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 07:36:22.700868 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:36:22.705249 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2348209780/tls.crt::/tmp/serving-cert-2348209780/tls.key\\\\\\\"\\\\nI0131 07:36:28.522043 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 07:36:28.528635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 07:36:28.528685 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 07:36:28.528725 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 07:36:28.528736 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 07:36:28.539744 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 07:36:28.539766 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539773 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 07:36:28.539783 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 07:36:28.539787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 07:36:28.539791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 07:36:28.539944 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 07:36:28.545260 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0131 07:36:28.545358 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd3634cc3552a315ee1ef8114db096d456b7a97a92e68ab406ceccdf1eb57f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:29Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.142516 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0a4082a-cdda-4613-8e8d-bd97c3820759\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edb15aeef1103cdf26c3e72bac394815e4ea864cccc8732f40a601af2fcd34b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d04dbf9901e3bd43ac93338643a45135bb89f2d03a5a3d1236289aee90a6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a09b3371dd08b059fbfd5a105f314ae92bdcc4804cd36d0709b4afae48842b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b6360543a81dea5540a5f02edc908d3b76daf5c4df5f8220215fb75f9504c5b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b6360543a81dea5540a5f02edc908d3b76daf5c4df5f8220215fb75f9504c5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:29Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.159614 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:29Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.177664 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:29Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.191265 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tp2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d1fbcfd-bce9-4f1c-bc64-d48b979a95d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91dfef0e05500b5bc7d60a1905667d2ddf67a56af60dc72a55d1b29223d6586a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5x55d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tp2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T07:37:29Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.217407 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.217461 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.217482 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.217508 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.217526 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:29Z","lastTransitionTime":"2026-01-31T07:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.321080 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.321131 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.321145 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.321163 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.321190 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:29Z","lastTransitionTime":"2026-01-31T07:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.424125 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.424201 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.424228 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.424312 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.424338 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:29Z","lastTransitionTime":"2026-01-31T07:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.527374 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.527442 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.527464 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.527493 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.527515 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:29Z","lastTransitionTime":"2026-01-31T07:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.630314 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.630388 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.630416 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.630448 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.630472 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:29Z","lastTransitionTime":"2026-01-31T07:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.733962 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.734047 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.734066 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.734090 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.734107 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:29Z","lastTransitionTime":"2026-01-31T07:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.806886 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 07:46:23.279824324 +0000 UTC Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.808227 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.808236 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:37:29 crc kubenswrapper[4826]: E0131 07:37:29.808349 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:37:29 crc kubenswrapper[4826]: E0131 07:37:29.808424 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qrw7j" podUID="251ad51e-c383-4684-bfdb-2b9ce8098cc6" Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.836735 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.836821 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.836847 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.836878 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.836902 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:29Z","lastTransitionTime":"2026-01-31T07:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.939734 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.939788 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.939799 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.939817 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:29 crc kubenswrapper[4826]: I0131 07:37:29.939832 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:29Z","lastTransitionTime":"2026-01-31T07:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:30 crc kubenswrapper[4826]: I0131 07:37:30.043188 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:30 crc kubenswrapper[4826]: I0131 07:37:30.043247 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:30 crc kubenswrapper[4826]: I0131 07:37:30.043261 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:30 crc kubenswrapper[4826]: I0131 07:37:30.043283 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:30 crc kubenswrapper[4826]: I0131 07:37:30.043302 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:30Z","lastTransitionTime":"2026-01-31T07:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:30 crc kubenswrapper[4826]: I0131 07:37:30.146007 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:30 crc kubenswrapper[4826]: I0131 07:37:30.146053 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:30 crc kubenswrapper[4826]: I0131 07:37:30.146063 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:30 crc kubenswrapper[4826]: I0131 07:37:30.146077 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:30 crc kubenswrapper[4826]: I0131 07:37:30.146088 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:30Z","lastTransitionTime":"2026-01-31T07:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:30 crc kubenswrapper[4826]: I0131 07:37:30.249672 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:30 crc kubenswrapper[4826]: I0131 07:37:30.249734 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:30 crc kubenswrapper[4826]: I0131 07:37:30.249752 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:30 crc kubenswrapper[4826]: I0131 07:37:30.249776 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:30 crc kubenswrapper[4826]: I0131 07:37:30.249795 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:30Z","lastTransitionTime":"2026-01-31T07:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:30 crc kubenswrapper[4826]: I0131 07:37:30.352525 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:30 crc kubenswrapper[4826]: I0131 07:37:30.352595 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:30 crc kubenswrapper[4826]: I0131 07:37:30.352636 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:30 crc kubenswrapper[4826]: I0131 07:37:30.352668 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:30 crc kubenswrapper[4826]: I0131 07:37:30.352691 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:30Z","lastTransitionTime":"2026-01-31T07:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:30 crc kubenswrapper[4826]: I0131 07:37:30.455935 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:30 crc kubenswrapper[4826]: I0131 07:37:30.456028 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:30 crc kubenswrapper[4826]: I0131 07:37:30.456048 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:30 crc kubenswrapper[4826]: I0131 07:37:30.456072 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:30 crc kubenswrapper[4826]: I0131 07:37:30.456093 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:30Z","lastTransitionTime":"2026-01-31T07:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:30 crc kubenswrapper[4826]: I0131 07:37:30.559170 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:30 crc kubenswrapper[4826]: I0131 07:37:30.559224 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:30 crc kubenswrapper[4826]: I0131 07:37:30.559241 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:30 crc kubenswrapper[4826]: I0131 07:37:30.559269 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:30 crc kubenswrapper[4826]: I0131 07:37:30.559287 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:30Z","lastTransitionTime":"2026-01-31T07:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:30 crc kubenswrapper[4826]: I0131 07:37:30.661250 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:30 crc kubenswrapper[4826]: I0131 07:37:30.661311 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:30 crc kubenswrapper[4826]: I0131 07:37:30.661329 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:30 crc kubenswrapper[4826]: I0131 07:37:30.661353 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:30 crc kubenswrapper[4826]: I0131 07:37:30.661372 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:30Z","lastTransitionTime":"2026-01-31T07:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:30 crc kubenswrapper[4826]: I0131 07:37:30.764852 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:30 crc kubenswrapper[4826]: I0131 07:37:30.764920 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:30 crc kubenswrapper[4826]: I0131 07:37:30.764938 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:30 crc kubenswrapper[4826]: I0131 07:37:30.765000 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:30 crc kubenswrapper[4826]: I0131 07:37:30.765020 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:30Z","lastTransitionTime":"2026-01-31T07:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:30 crc kubenswrapper[4826]: I0131 07:37:30.807604 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 01:31:29.424893466 +0000 UTC Jan 31 07:37:30 crc kubenswrapper[4826]: I0131 07:37:30.809019 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:37:30 crc kubenswrapper[4826]: I0131 07:37:30.809124 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:37:30 crc kubenswrapper[4826]: E0131 07:37:30.809280 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:37:30 crc kubenswrapper[4826]: E0131 07:37:30.809739 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:37:30 crc kubenswrapper[4826]: I0131 07:37:30.810083 4826 scope.go:117] "RemoveContainer" containerID="ccf6d3312839b88263ec4c59e54aa012ea0ae8e154421339e6afa05bbdfda85f" Jan 31 07:37:30 crc kubenswrapper[4826]: I0131 07:37:30.867756 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:30 crc kubenswrapper[4826]: I0131 07:37:30.867816 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:30 crc kubenswrapper[4826]: I0131 07:37:30.867841 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:30 crc kubenswrapper[4826]: I0131 07:37:30.867864 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:30 crc kubenswrapper[4826]: I0131 07:37:30.867876 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:30Z","lastTransitionTime":"2026-01-31T07:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:30 crc kubenswrapper[4826]: I0131 07:37:30.970956 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:30 crc kubenswrapper[4826]: I0131 07:37:30.971022 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:30 crc kubenswrapper[4826]: I0131 07:37:30.971034 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:30 crc kubenswrapper[4826]: I0131 07:37:30.971052 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:30 crc kubenswrapper[4826]: I0131 07:37:30.971064 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:30Z","lastTransitionTime":"2026-01-31T07:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.088271 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.088303 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.088312 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.088325 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.088333 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:31Z","lastTransitionTime":"2026-01-31T07:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.196646 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.197039 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.197050 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.197078 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.197087 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:31Z","lastTransitionTime":"2026-01-31T07:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.299520 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.299551 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.299563 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.299578 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.299588 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:31Z","lastTransitionTime":"2026-01-31T07:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.331111 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qvwnb_5db04412-b62f-417c-91ac-776767d6102f/ovnkube-controller/2.log" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.333454 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" event={"ID":"5db04412-b62f-417c-91ac-776767d6102f","Type":"ContainerStarted","Data":"d893aaee284ab112e56f303b51894196d333364ced205c5f2b1d55f1c21b5f33"} Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.333827 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.346456 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"661bc240-bc34-44a0-bd2f-72708c4a5dc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc26e2f01400aeeb0956ae2aef52df0251350f193805892e534081b4cac248e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa95fe2bb0adc0053648f26282e011bad0ea1327773688c954b8ffbf222d6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4124eac7bdc757ef7c1f
68b28fb1f54a56cda06e9b781a026277efe04172f8e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e986869735ddfd85e88828d0123e35bbc507c1d31efba8c420632539ff4a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.356355 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d2b49940601c069c38a05027bd5e801dd377d4aed46a502c192ca9941468a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.372300 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5fm7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baebdb6a236b3414ff5aac56d2f2c04503b7d59c18aa78ef295484087a866b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf9c0eb924928bb2a7752b55e6ccb0fccc514359751bdf295f4f88820ac0bddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf9c0eb924928bb2a7752b55e6ccb0fccc514359751bdf295f4f88820ac0bddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5fm7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.381666 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qrw7j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"251ad51e-c383-4684-bfdb-2b9ce8098cc6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv8tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv8tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qrw7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.402641 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.402691 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.402709 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.402728 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.402742 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:31Z","lastTransitionTime":"2026-01-31T07:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.410364 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0580356-3fa0-4446-b20b-afc4164435c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e817cf54f4d9940c02491ebbe29ded4c0e8b3832c315b04eba99a9d36d267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b99d4cba5d569a977b7d05a7190498a4e34fefbf7dc44db8148ba398de0f5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\
\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://adf31702fe035db3a963262acd0c20802f8fd7230cf5c01cb578fa88179bd129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://161a31c4d5a9b58bef37d7177019e6eec0b53c956877c212337834672f4ac5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1796ef962408ccd32578b856d0339f68286f421ffbb915e0ae46c7a5989d1958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":
\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.424626 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.439950 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7xbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32b69655-b895-4001-8f50-17a9d73056f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6e04fe556789c17beb1c110cda356f63730016f0d8f00d968ec9af1a10f658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6xcl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15eef89d03854f4fb45924748ccf1c12133be03efd45374daa2cceafd673136a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6xcl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h7xbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:31Z is after 2025-08-24T17:21:41Z" Jan 31 
07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.454067 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4a77305-8126-4747-b0fb-6ac1e27be524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6744ae097d36ec0c3998da84dd5d0b9a274604c91b97d324f317381c9ba7f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db00cb3b7ea8c50c1725ebdf0d9eab1e7e1f96fbfe471ec206e102e74cefd82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db00cb3b7ea8c50c1725ebdf0d9eab1e7e1f96fbfe471ec206e102e74cefd82f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.467084 4826 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0a4082a-cdda-4613-8e8d-bd97c3820759\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edb15aeef1103cdf26c3e72bac394815e4ea864cccc8732f40a601af2fcd34b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d04dbf9901e3bd43ac93338643a45135bb89f2d03a5a3d1236289aee90a6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a09b3371dd08b059fbfd5a105f314ae92bdcc4804cd36d0709b4afae48842b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\
\\":\\\"cri-o://8b6360543a81dea5540a5f02edc908d3b76daf5c4df5f8220215fb75f9504c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b6360543a81dea5540a5f02edc908d3b76daf5c4df5f8220215fb75f9504c5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.486363 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.498985 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.505292 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.505334 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.505346 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.505364 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.505375 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:31Z","lastTransitionTime":"2026-01-31T07:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.508191 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tp2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d1fbcfd-bce9-4f1c-bc64-d48b979a95d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91dfef0e05500b5bc7d60a1905667d2ddf67a56af60dc72a55d1b29223d6586a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5x55d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tp2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.534289 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5db04412-b62f-417c-91ac-776767d6102f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d893aaee284ab112e56f303b51894196d333364ced205c5f2b1d55f1c21b5f33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf6d3312839b88263ec4c59e54aa012ea0ae8e154421339e6afa05bbdfda85f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:36:59Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI0131 07:36:59.734419 6450 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 07:36:59.734455 6450 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 07:36:59.734461 6450 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 07:36:59.734474 6450 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 07:36:59.734513 6450 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 07:36:59.734518 6450 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 07:36:59.734548 6450 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 07:36:59.734563 6450 factory.go:656] Stopping watch factory\\\\nI0131 07:36:59.734576 6450 ovnkube.go:599] Stopped ovnkube\\\\nI0131 07:36:59.734618 6450 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 07:36:59.734629 6450 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 07:36:59.734637 6450 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 07:36:59.734644 6450 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 07:36:59.734652 6450 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 07:36:59.734659 6450 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 
0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:37:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvwnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.548359 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6lwnf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b53fa37-2f8b-49d4-bd96-2bfe008beba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4800822b9eb7af18af4eaf5699b582b98e900e6a645e17ec2a5a09757777ad75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm9x7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6lwnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.572300 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63a72bdc-ae2c-4ce3-bad4-877f01e2b370\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b561e0c482987fdc1c08682d00bf4c933f12313569408446d6c8a603c3556ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3caf57a9650c133c55f50d69726a83d40040bc7f173d536444d89f99854942ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30b86aff0d4af7948ead36a9ed2f4fa0fb5888de8ecdfa6eb6ffc1867030babe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf2c3303222b7cdaf3b5aa2c1847ff9e097579edeb4f7333079c340cc95ef46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975a5a5c09f940e9a57b6c6df02fdf725740fb21d70046ad5bbe3f24dec7a0bd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"iserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 07:36:22.700868 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:36:22.705249 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2348209780/tls.crt::/tmp/serving-cert-2348209780/tls.key\\\\\\\"\\\\nI0131 07:36:28.522043 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 07:36:28.528635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 07:36:28.528685 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 07:36:28.528725 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 07:36:28.528736 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 07:36:28.539744 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 07:36:28.539766 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539773 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 07:36:28.539783 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 07:36:28.539787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 07:36:28.539791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 07:36:28.539944 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 07:36:28.545260 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0131 07:36:28.545358 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd3634cc3552a315ee1ef8114db096d456b7a97a92e68ab406ceccdf1eb57f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.596155 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d1814f450832909bd5bf1c4749feef46b0919e997cb1d241c988f4ec81051d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e440dc1f7846af93bbe78f17e92d44d2f72ee0fc28c4c3b746ee3f67f0cd899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.608072 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.608159 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.608181 4826 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.608208 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.608230 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:31Z","lastTransitionTime":"2026-01-31T07:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.615269 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed10f53b-565a-4d14-a1d8-feabc15f08ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d04378db699fb51e197d67da8fbc728904d522724c2a02158c546497905d106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03515f366b860c28601b042a9a45c022c9a25aa15fab97b3e3698d3d7c1eb37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8v6ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.637353 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wtbb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b672fd90-a70c-4f27-b711-e58f269efccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f290a2cfcec4d14d2b2c21f7c56558675b8b7171bf9d1895d89f632cb35b815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dc6a9e1063a189fc3331ef1bc5bd05ac611295089d1da241ca07c529e63ab7e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:37:17Z\\\",\\\"message\\\":\\\"2026-01-31T07:36:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c53c4ff3-1503-4424-b047-c864bb103fcc\\\\n2026-01-31T07:36:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c53c4ff3-1503-4424-b047-c864bb103fcc to /host/opt/cni/bin/\\\\n2026-01-31T07:36:32Z [verbose] multus-daemon started\\\\n2026-01-31T07:36:32Z [verbose] Readiness Indicator file check\\\\n2026-01-31T07:37:17Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wtbb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.656840 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37289ae623d633b8d9c41697e0022bac0ef40df6da836d43df3bb36e83a7a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:31Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.711718 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.711798 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.711823 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.711857 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.711882 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:31Z","lastTransitionTime":"2026-01-31T07:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.807949 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 08:04:51.707752474 +0000 UTC Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.808158 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.808226 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:37:31 crc kubenswrapper[4826]: E0131 07:37:31.808305 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:37:31 crc kubenswrapper[4826]: E0131 07:37:31.808539 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrw7j" podUID="251ad51e-c383-4684-bfdb-2b9ce8098cc6" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.813881 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.813930 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.813949 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.813997 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.814018 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:31Z","lastTransitionTime":"2026-01-31T07:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.916568 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.916631 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.916648 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.916719 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:31 crc kubenswrapper[4826]: I0131 07:37:31.916738 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:31Z","lastTransitionTime":"2026-01-31T07:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.019627 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.019698 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.019717 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.019743 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.019762 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:32Z","lastTransitionTime":"2026-01-31T07:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.122893 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.122956 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.123045 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.123100 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.123120 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:32Z","lastTransitionTime":"2026-01-31T07:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.227354 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.227414 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.227430 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.227454 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.227472 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:32Z","lastTransitionTime":"2026-01-31T07:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.330564 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.330625 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.330647 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.330673 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.330694 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:32Z","lastTransitionTime":"2026-01-31T07:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.340071 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qvwnb_5db04412-b62f-417c-91ac-776767d6102f/ovnkube-controller/3.log" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.341114 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qvwnb_5db04412-b62f-417c-91ac-776767d6102f/ovnkube-controller/2.log" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.345602 4826 generic.go:334] "Generic (PLEG): container finished" podID="5db04412-b62f-417c-91ac-776767d6102f" containerID="d893aaee284ab112e56f303b51894196d333364ced205c5f2b1d55f1c21b5f33" exitCode=1 Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.345670 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" event={"ID":"5db04412-b62f-417c-91ac-776767d6102f","Type":"ContainerDied","Data":"d893aaee284ab112e56f303b51894196d333364ced205c5f2b1d55f1c21b5f33"} Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.345757 4826 scope.go:117] "RemoveContainer" containerID="ccf6d3312839b88263ec4c59e54aa012ea0ae8e154421339e6afa05bbdfda85f" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.347172 4826 scope.go:117] "RemoveContainer" containerID="d893aaee284ab112e56f303b51894196d333364ced205c5f2b1d55f1c21b5f33" Jan 31 07:37:32 crc kubenswrapper[4826]: E0131 07:37:32.347480 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qvwnb_openshift-ovn-kubernetes(5db04412-b62f-417c-91ac-776767d6102f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" podUID="5db04412-b62f-417c-91ac-776767d6102f" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.363758 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d2b49940601c069c38a05027bd5e801dd377d4aed46a502c192ca9941468a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:32Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.383172 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5fm7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baebdb6a236b3414ff5aac56d2f2c04503b7d59c18aa78ef295484087a866b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf9c0eb924928bb2a7752b55e6ccb0fccc514359751bdf295f4f88820ac0bddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf9c0eb924928bb2a7752b55e6ccb0fccc514359751bdf295f4f88820ac0bddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5fm7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:32Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.399842 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qrw7j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"251ad51e-c383-4684-bfdb-2b9ce8098cc6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv8tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv8tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qrw7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:32Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.432279 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0580356-3fa0-4446-b20b-afc4164435c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e817cf54f4d9940c02491ebbe29ded4c0e8b3832c315b04eba99a9d36d267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b99d4cba5d569a977b7d05a7190498a4e34fefbf7dc44db8148ba398de0f5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://adf31702fe035db3a963262acd0c20802f8fd7230cf5c01cb578fa88179bd129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://161a31c4d5a9b58bef37d7177019e6eec0b53c9
56877c212337834672f4ac5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1796ef962408ccd32578b856d0339f68286f421ffbb915e0ae46c7a5989d1958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:32Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.433535 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.433560 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.433569 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.433582 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.433593 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:32Z","lastTransitionTime":"2026-01-31T07:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.451174 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"661bc240-bc34-44a0-bd2f-72708c4a5dc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc26e2f01400aeeb0956ae2aef52df0251350f193805892e534081b4cac248e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa95fe2bb0adc0053648f26282e011bad0ea1327773688c954b8ffbf222d6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4124eac7bdc757ef7c1f68b28fb1f54a56cda06e9b781a026277efe04172f8e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e986869735ddfd85e88828d0123e35bbc507c1d31efba8c420632539ff4a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:32Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.469067 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7xbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32b69655-b895-4001-8f50-17a9d73056f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6e04fe556789c17beb1c110cda356f63730016f0d8f00d968ec9af1a10f658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6xcl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15eef89d03854f4fb45924748ccf1c12133be03efd45374daa2cceafd673136a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6xcl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h7xbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:32Z is after 2025-08-24T17:21:41Z" Jan 31 
07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.482334 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4a77305-8126-4747-b0fb-6ac1e27be524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6744ae097d36ec0c3998da84dd5d0b9a274604c91b97d324f317381c9ba7f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db00cb3b7ea8c50c1725ebdf0d9eab1e7e1f96fbfe471ec206e102e74cefd82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db00cb3b7ea8c50c1725ebdf0d9eab1e7e1f96fbfe471ec206e102e74cefd82f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:32Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.501330 4826 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:32Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.519784 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:32Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.535399 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:32Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.536875 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.536953 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.537013 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.537062 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.537088 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:32Z","lastTransitionTime":"2026-01-31T07:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.547810 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tp2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d1fbcfd-bce9-4f1c-bc64-d48b979a95d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91dfef0e05500b5bc7d60a1905667d2ddf67a56af60dc72a55d1b29223d6586a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5x55d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tp2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:32Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.579071 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5db04412-b62f-417c-91ac-776767d6102f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d893aaee284ab112e56f303b51894196d333364ced205c5f2b1d55f1c21b5f33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf6d3312839b88263ec4c59e54aa012ea0ae8e154421339e6afa05bbdfda85f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:36:59Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI0131 07:36:59.734419 6450 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 07:36:59.734455 6450 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 07:36:59.734461 6450 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 07:36:59.734474 6450 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 07:36:59.734513 6450 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 07:36:59.734518 6450 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 07:36:59.734548 6450 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 07:36:59.734563 6450 factory.go:656] Stopping watch factory\\\\nI0131 07:36:59.734576 6450 ovnkube.go:599] Stopped ovnkube\\\\nI0131 07:36:59.734618 6450 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 07:36:59.734629 6450 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 07:36:59.734637 6450 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 07:36:59.734644 6450 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 07:36:59.734652 6450 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 07:36:59.734659 6450 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d893aaee284ab112e56f303b51894196d333364ced205c5f2b1d55f1c21b5f33\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:37:31Z\\\",\\\"message\\\":\\\"ace.go:16] APB queuing policies: map[] for namespace: openshift-apiserver\\\\nI0131 07:37:31.704647 6894 external_controller_namespace.go:16] APB queuing policies: map[] for 
namespace: openshift-cluster-storage-operator\\\\nI0131 07:37:31.704652 6894 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-etcd\\\\nI0131 07:37:31.704658 6894 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-kni-infra\\\\nI0131 07:37:31.703999 6894 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type *factory.egressIPNamespace\\\\nI0131 07:37:31.704695 6894 nad_controller.go:166] [zone-nad-controller NAD controller]: shutting down\\\\nI0131 07:37:31.704485 6894 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 07:37:31.705056 6894 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 07:37:31.705120 6894 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 07:37:31.705086 6894 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 07:37:31.705142 6894 factory.go:656] Stopping watch factory\\\\nI0131 07:37:31.705172 6894 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 07:37:31.705198 6894 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:37:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvwnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:32Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.590502 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6lwnf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b53fa37-2f8b-49d4-bd96-2bfe008beba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4800822b9eb7af18af4eaf5699b582b98e900e6a645e17ec2a5a09757777ad75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm9x7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6lwnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:32Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.609560 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63a72bdc-ae2c-4ce3-bad4-877f01e2b370\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b561e0c482987fdc1c08682d00bf4c933f12313569408446d6c8a603c3556ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3caf57a9650c133c55f50d69726a83d40040bc7f173d536444d89f99854942ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30b86aff0d4af7948ead36a9ed2f4fa0fb5888de8ecdfa6eb6ffc1867030babe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf2c3303222b7cdaf3b5aa2c1847ff9e097579edeb4f7333079c340cc95ef46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975a5a5c09f940e9a57b6c6df02fdf725740fb21d70046ad5bbe3f24dec7a0bd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"iserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 07:36:22.700868 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:36:22.705249 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2348209780/tls.crt::/tmp/serving-cert-2348209780/tls.key\\\\\\\"\\\\nI0131 07:36:28.522043 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 07:36:28.528635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 07:36:28.528685 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 07:36:28.528725 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 07:36:28.528736 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 07:36:28.539744 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 07:36:28.539766 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539773 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 07:36:28.539783 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 07:36:28.539787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 07:36:28.539791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 07:36:28.539944 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 07:36:28.545260 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0131 07:36:28.545358 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd3634cc3552a315ee1ef8114db096d456b7a97a92e68ab406ceccdf1eb57f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:32Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.619938 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0a4082a-cdda-4613-8e8d-bd97c3820759\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edb15aeef1103cdf26c3e72bac394815e4ea864cccc8732f40a601af2fcd34b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d04dbf9901e3bd43ac93338643a45135bb89f2d03a5a3d1236289aee90a6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a09b3371dd08b059fbfd5a105f314ae92bdcc4804cd36d0709b4afae48842b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b6360543a81dea5540a5f02edc908d3b76daf5c4df5f8220215fb75f9504c5b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b6360543a81dea5540a5f02edc908d3b76daf5c4df5f8220215fb75f9504c5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:32Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.630393 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed10f53b-565a-4d14-a1d8-feabc15f08ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d04378db699fb51e197d67da8fbc728904d522724c2a02158c546497905d106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03515f366b860c28601b042a9a45c022c9a25aa15fab97b3e3698d3d7c1eb37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8v6ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:32Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.639677 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.639735 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.639753 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.639779 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.639798 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:32Z","lastTransitionTime":"2026-01-31T07:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.648312 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wtbb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b672fd90-a70c-4f27-b711-e58f269efccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f290a2cfcec4d14d2b2c21f7c56558675b8b7171bf9d1895d89f632cb35b815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dc6a9e1063a189fc3331ef1bc5bd05ac611295089d1da241ca07c529e63ab7e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:37:17Z\\\",\\\"message\\\":\\\"2026-01-31T07:36:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c53c4ff3-1503-4424-b047-c864bb103fcc\\\\n2026-01-31T07:36:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c53c4ff3-1503-4424-b047-c864bb103fcc to /host/opt/cni/bin/\\\\n2026-01-31T07:36:32Z [verbose] multus-daemon started\\\\n2026-01-31T07:36:32Z [verbose] Readiness Indicator file check\\\\n2026-01-31T07:37:17Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wtbb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:32Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.669005 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37289ae623d633b8d9c41697e0022bac0ef40df6da836d43df3bb36e83a7a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:32Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.682029 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d1814f450832909bd5bf1c4749feef46b0919e997cb1d241c988f4ec81051d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e440dc1f7846af93bbe78f17e92d44d2f72ee0fc28c4c3b746ee3f67f0cd899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:32Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.742820 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.742856 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.742866 4826 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.742883 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.742894 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:32Z","lastTransitionTime":"2026-01-31T07:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.758449 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:37:32 crc kubenswrapper[4826]: E0131 07:37:32.758743 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:38:36.758722069 +0000 UTC m=+148.612608438 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.808168 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.808265 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 19:36:00.839068496 +0000 UTC Jan 31 07:37:32 crc kubenswrapper[4826]: E0131 07:37:32.808291 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.808343 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:37:32 crc kubenswrapper[4826]: E0131 07:37:32.808537 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.845712 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.845749 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.845761 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.845795 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.845807 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:32Z","lastTransitionTime":"2026-01-31T07:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.859373 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.859423 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.859450 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.859480 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:37:32 crc kubenswrapper[4826]: E0131 07:37:32.859591 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 07:37:32 crc kubenswrapper[4826]: E0131 07:37:32.859609 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 07:37:32 crc 
kubenswrapper[4826]: E0131 07:37:32.859622 4826 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:37:32 crc kubenswrapper[4826]: E0131 07:37:32.859663 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 07:38:36.859648444 +0000 UTC m=+148.713534813 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:37:32 crc kubenswrapper[4826]: E0131 07:37:32.859823 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 07:37:32 crc kubenswrapper[4826]: E0131 07:37:32.859838 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 07:37:32 crc kubenswrapper[4826]: E0131 07:37:32.859847 4826 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:37:32 crc kubenswrapper[4826]: E0131 07:37:32.859874 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 07:38:36.85986443 +0000 UTC m=+148.713750799 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 07:37:32 crc kubenswrapper[4826]: E0131 07:37:32.860041 4826 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 07:37:32 crc kubenswrapper[4826]: E0131 07:37:32.860076 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 07:38:36.860065656 +0000 UTC m=+148.713952035 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 07:37:32 crc kubenswrapper[4826]: E0131 07:37:32.860247 4826 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 07:37:32 crc kubenswrapper[4826]: E0131 07:37:32.860381 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 07:38:36.860353954 +0000 UTC m=+148.714240343 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.948487 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.948564 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.948582 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.948607 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:32 crc kubenswrapper[4826]: I0131 07:37:32.948628 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:32Z","lastTransitionTime":"2026-01-31T07:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.051869 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.051946 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.052013 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.052048 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.052071 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:33Z","lastTransitionTime":"2026-01-31T07:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.155026 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.155118 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.155142 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.155168 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.155189 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:33Z","lastTransitionTime":"2026-01-31T07:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.258875 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.258936 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.258953 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.259008 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.259026 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:33Z","lastTransitionTime":"2026-01-31T07:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.355775 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qvwnb_5db04412-b62f-417c-91ac-776767d6102f/ovnkube-controller/3.log" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.360592 4826 scope.go:117] "RemoveContainer" containerID="d893aaee284ab112e56f303b51894196d333364ced205c5f2b1d55f1c21b5f33" Jan 31 07:37:33 crc kubenswrapper[4826]: E0131 07:37:33.360833 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qvwnb_openshift-ovn-kubernetes(5db04412-b62f-417c-91ac-776767d6102f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" podUID="5db04412-b62f-417c-91ac-776767d6102f" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.362163 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.362228 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.362253 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.362278 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.362301 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:33Z","lastTransitionTime":"2026-01-31T07:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.374053 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4a77305-8126-4747-b0fb-6ac1e27be524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6744ae097d36ec0c3998da84dd5d0b9a274604c91b97d324f317381c9ba7f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db00cb3b7ea8c50c1725ebdf0d9eab1e7e1f96fbfe471ec206e102e74cefd82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db00cb3b7ea8c50c1725ebdf0d9eab1e7e1f96fbfe471ec206e102e74cefd82f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:33Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.387331 4826 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:33Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.399956 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7xbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32b69655-b895-4001-8f50-17a9d73056f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6e04fe556789c17beb1c110cda356f63730016f0d8f00d968ec9af1a10f658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6xcl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15eef89d03854f4fb45924748ccf1c12133be03efd45374daa2cceafd673136a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6xcl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h7xbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:33Z is after 2025-08-24T17:21:41Z" Jan 31 
07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.419154 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63a72bdc-ae2c-4ce3-bad4-877f01e2b370\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b561e0c482987fdc1c08682d00bf4c933f12313569408446d6c8a603c3556ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3caf57a9650c133c55f50d69726a83d40040bc7f173d536444d89f99854942ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30b86aff0d4af7948ead36a9ed2f4fa0fb5888de8ecdfa6eb6ffc1867030babe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf2c3303222b7cdaf3b5aa2c1847ff9e097579edeb4f7333079c340cc95ef46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975a5a5c09f940e9a57b6c6df02fdf725740fb21d70046ad5bbe3f24dec7a0bd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"iserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 07:36:22.700868 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:36:22.705249 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2348209780/tls.crt::/tmp/serving-cert-2348209780/tls.key\\\\\\\"\\\\nI0131 07:36:28.522043 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 07:36:28.528635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 07:36:28.528685 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 07:36:28.528725 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 07:36:28.528736 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 07:36:28.539744 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 07:36:28.539766 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539773 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 07:36:28.539783 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 07:36:28.539787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 07:36:28.539791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 07:36:28.539944 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 07:36:28.545260 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0131 07:36:28.545358 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd3634cc3552a315ee1ef8114db096d456b7a97a92e68ab406ceccdf1eb57f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:33Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.435048 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0a4082a-cdda-4613-8e8d-bd97c3820759\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edb15aeef1103cdf26c3e72bac394815e4ea864cccc8732f40a601af2fcd34b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d04dbf9901e3bd43ac93338643a45135bb89f2d03a5a3d1236289aee90a6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a09b3371dd08b059fbfd5a105f314ae92bdcc4804cd36d0709b4afae48842b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b6360543a81dea5540a5f02edc908d3b76daf5c4df5f8220215fb75f9504c5b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b6360543a81dea5540a5f02edc908d3b76daf5c4df5f8220215fb75f9504c5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:33Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.450438 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:33Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.465836 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.465892 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.466039 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.466084 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.466109 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:33Z","lastTransitionTime":"2026-01-31T07:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.470852 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:33Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.492631 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tp2c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d1fbcfd-bce9-4f1c-bc64-d48b979a95d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91dfef0e05500b5bc7d60a1905667d2ddf67a56af60dc72a55d1b29223d6586a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5x55d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tp2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:33Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.513775 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5db04412-b62f-417c-91ac-776767d6102f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d893aaee284ab112e56f303b51894196d333364ced205c5f2b1d55f1c21b5f33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d893aaee284ab112e56f303b51894196d333364ced205c5f2b1d55f1c21b5f33\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:37:31Z\\\",\\\"message\\\":\\\"ace.go:16] APB queuing policies: map[] for namespace: openshift-apiserver\\\\nI0131 07:37:31.704647 6894 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-cluster-storage-operator\\\\nI0131 07:37:31.704652 6894 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-etcd\\\\nI0131 07:37:31.704658 6894 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-kni-infra\\\\nI0131 07:37:31.703999 6894 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type *factory.egressIPNamespace\\\\nI0131 07:37:31.704695 6894 nad_controller.go:166] [zone-nad-controller NAD controller]: shutting down\\\\nI0131 07:37:31.704485 6894 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 07:37:31.705056 6894 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 07:37:31.705120 6894 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 07:37:31.705086 6894 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 07:37:31.705142 6894 factory.go:656] Stopping watch factory\\\\nI0131 07:37:31.705172 6894 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 07:37:31.705198 6894 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:37:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qvwnb_openshift-ovn-kubernetes(5db04412-b62f-417c-91ac-776767d6102f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvwnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:33Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.527727 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6lwnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b53fa37-2f8b-49d4-bd96-2bfe008beba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4800822b9eb7af18af4eaf5699b582b98e900e6a645e17ec2a5a09757777ad75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm9x7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6lwnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:33Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.545401 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37289ae623d633b8d9c41697e0022bac0ef40df6da836d43df3bb36e83a7a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:33Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.557466 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d1814f450832909bd5bf1c4749feef46b0919e997cb1d241c988f4ec81051d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e440dc1f7846af93bbe78f17e92d44d2f72ee0fc28c4c3b746ee3f67f0cd899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:33Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.569111 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.569170 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.569196 4826 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.569226 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.569286 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:33Z","lastTransitionTime":"2026-01-31T07:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.572737 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed10f53b-565a-4d14-a1d8-feabc15f08ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d04378db699fb51e197d67da8fbc728904d522724c2a02158c546497905d106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03515f366b860c28601b042a9a45c022c9a25aa15fab97b3e3698d3d7c1eb37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8v6ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:33Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.589441 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wtbb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b672fd90-a70c-4f27-b711-e58f269efccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f290a2cfcec4d14d2b2c21f7c56558675b8b7171bf9d1895d89f632cb35b815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dc6a9e1063a189fc3331ef1bc5bd05ac611295089d1da241ca07c529e63ab7e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:37:17Z\\\",\\\"message\\\":\\\"2026-01-31T07:36:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c53c4ff3-1503-4424-b047-c864bb103fcc\\\\n2026-01-31T07:36:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c53c4ff3-1503-4424-b047-c864bb103fcc to /host/opt/cni/bin/\\\\n2026-01-31T07:36:32Z [verbose] multus-daemon started\\\\n2026-01-31T07:36:32Z [verbose] Readiness Indicator file check\\\\n2026-01-31T07:37:17Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wtbb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:33Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.619505 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0580356-3fa0-4446-b20b-afc4164435c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e817cf54f4d9940c02491ebbe29ded4c0e8b3832c315b04eba99a9d36d267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b99d4cba5d569a977b7d05a7190498a4e34fefbf7dc44db8148ba398de0f5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://adf31702fe035db3a963262acd0c20802f8fd7230cf5c01cb578fa88179bd129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://161a31c4d5a9b58bef37d7177019e6eec0b53c9
56877c212337834672f4ac5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1796ef962408ccd32578b856d0339f68286f421ffbb915e0ae46c7a5989d1958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:33Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.638090 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"661bc240-bc34-44a0-bd2f-72708c4a5dc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc26e2f01400aeeb0956ae2aef52df0251350f193805892e534081b4cac248e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa95fe2bb0adc0053648f26282e011bad0ea1327773688c954b8ffbf222d6e5\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4124eac7bdc757ef7c1f68b28fb1f54a56cda06e9b781a026277efe04172f8e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e986869735ddfd85e88828d0123e35bbc507c1d31efba8c420632539ff4a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:33Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.657358 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d2b49940601c069c38a05027bd5e801dd377d4aed46a502c192ca9941468a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:33Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.672429 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.672529 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.672554 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.672587 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.672609 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:33Z","lastTransitionTime":"2026-01-31T07:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.679941 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5fm7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baebdb6a236b3414ff5aac56d2f2c04503b7d59c18aa78ef295484087a866b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf9c0eb924928bb2a7752b55e6ccb0fccc514359751bdf295f4f88820ac0bddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf9c0eb924928bb2a7752b55e6ccb0fccc514359751bdf295f4f88820ac0bddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5fm7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:33Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.696311 4826 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-qrw7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"251ad51e-c383-4684-bfdb-2b9ce8098cc6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv8tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv8tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qrw7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:33Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.775744 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.775786 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.775797 4826 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.775815 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.775827 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:33Z","lastTransitionTime":"2026-01-31T07:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.808466 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 23:34:25.605133258 +0000 UTC Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.808682 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.808768 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:37:33 crc kubenswrapper[4826]: E0131 07:37:33.808878 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrw7j" podUID="251ad51e-c383-4684-bfdb-2b9ce8098cc6" Jan 31 07:37:33 crc kubenswrapper[4826]: E0131 07:37:33.809009 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.878199 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.878260 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.878279 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.878303 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.878321 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:33Z","lastTransitionTime":"2026-01-31T07:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.982027 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.982092 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.982123 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.982147 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:33 crc kubenswrapper[4826]: I0131 07:37:33.982164 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:33Z","lastTransitionTime":"2026-01-31T07:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:34 crc kubenswrapper[4826]: I0131 07:37:34.084689 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:34 crc kubenswrapper[4826]: I0131 07:37:34.084723 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:34 crc kubenswrapper[4826]: I0131 07:37:34.084732 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:34 crc kubenswrapper[4826]: I0131 07:37:34.084745 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:34 crc kubenswrapper[4826]: I0131 07:37:34.084754 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:34Z","lastTransitionTime":"2026-01-31T07:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:34 crc kubenswrapper[4826]: I0131 07:37:34.187626 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:34 crc kubenswrapper[4826]: I0131 07:37:34.187669 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:34 crc kubenswrapper[4826]: I0131 07:37:34.187681 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:34 crc kubenswrapper[4826]: I0131 07:37:34.187696 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:34 crc kubenswrapper[4826]: I0131 07:37:34.187707 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:34Z","lastTransitionTime":"2026-01-31T07:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:34 crc kubenswrapper[4826]: I0131 07:37:34.290347 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:34 crc kubenswrapper[4826]: I0131 07:37:34.290397 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:34 crc kubenswrapper[4826]: I0131 07:37:34.290408 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:34 crc kubenswrapper[4826]: I0131 07:37:34.290423 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:34 crc kubenswrapper[4826]: I0131 07:37:34.290437 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:34Z","lastTransitionTime":"2026-01-31T07:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:34 crc kubenswrapper[4826]: I0131 07:37:34.393260 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:34 crc kubenswrapper[4826]: I0131 07:37:34.393682 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:34 crc kubenswrapper[4826]: I0131 07:37:34.393700 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:34 crc kubenswrapper[4826]: I0131 07:37:34.393723 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:34 crc kubenswrapper[4826]: I0131 07:37:34.393744 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:34Z","lastTransitionTime":"2026-01-31T07:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:34 crc kubenswrapper[4826]: I0131 07:37:34.497052 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:34 crc kubenswrapper[4826]: I0131 07:37:34.497117 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:34 crc kubenswrapper[4826]: I0131 07:37:34.497141 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:34 crc kubenswrapper[4826]: I0131 07:37:34.497173 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:34 crc kubenswrapper[4826]: I0131 07:37:34.497193 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:34Z","lastTransitionTime":"2026-01-31T07:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:34 crc kubenswrapper[4826]: I0131 07:37:34.599634 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:34 crc kubenswrapper[4826]: I0131 07:37:34.599680 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:34 crc kubenswrapper[4826]: I0131 07:37:34.599697 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:34 crc kubenswrapper[4826]: I0131 07:37:34.599720 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:34 crc kubenswrapper[4826]: I0131 07:37:34.599737 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:34Z","lastTransitionTime":"2026-01-31T07:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:34 crc kubenswrapper[4826]: I0131 07:37:34.701878 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:34 crc kubenswrapper[4826]: I0131 07:37:34.701928 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:34 crc kubenswrapper[4826]: I0131 07:37:34.701945 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:34 crc kubenswrapper[4826]: I0131 07:37:34.701992 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:34 crc kubenswrapper[4826]: I0131 07:37:34.702011 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:34Z","lastTransitionTime":"2026-01-31T07:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:34 crc kubenswrapper[4826]: I0131 07:37:34.804560 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:34 crc kubenswrapper[4826]: I0131 07:37:34.804618 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:34 crc kubenswrapper[4826]: I0131 07:37:34.804638 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:34 crc kubenswrapper[4826]: I0131 07:37:34.804660 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:34 crc kubenswrapper[4826]: I0131 07:37:34.804678 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:34Z","lastTransitionTime":"2026-01-31T07:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:34 crc kubenswrapper[4826]: I0131 07:37:34.809014 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 04:09:43.72138242 +0000 UTC Jan 31 07:37:34 crc kubenswrapper[4826]: I0131 07:37:34.809141 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:37:34 crc kubenswrapper[4826]: I0131 07:37:34.809159 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:37:34 crc kubenswrapper[4826]: E0131 07:37:34.809687 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:37:34 crc kubenswrapper[4826]: E0131 07:37:34.809788 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:37:34 crc kubenswrapper[4826]: I0131 07:37:34.907432 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:34 crc kubenswrapper[4826]: I0131 07:37:34.907506 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:34 crc kubenswrapper[4826]: I0131 07:37:34.907519 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:34 crc kubenswrapper[4826]: I0131 07:37:34.907537 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:34 crc kubenswrapper[4826]: I0131 07:37:34.907550 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:34Z","lastTransitionTime":"2026-01-31T07:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:35 crc kubenswrapper[4826]: I0131 07:37:35.010650 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:35 crc kubenswrapper[4826]: I0131 07:37:35.010694 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:35 crc kubenswrapper[4826]: I0131 07:37:35.010720 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:35 crc kubenswrapper[4826]: I0131 07:37:35.010736 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:35 crc kubenswrapper[4826]: I0131 07:37:35.010750 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:35Z","lastTransitionTime":"2026-01-31T07:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:35 crc kubenswrapper[4826]: I0131 07:37:35.113332 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:35 crc kubenswrapper[4826]: I0131 07:37:35.113362 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:35 crc kubenswrapper[4826]: I0131 07:37:35.113370 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:35 crc kubenswrapper[4826]: I0131 07:37:35.113415 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:35 crc kubenswrapper[4826]: I0131 07:37:35.113426 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:35Z","lastTransitionTime":"2026-01-31T07:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:35 crc kubenswrapper[4826]: I0131 07:37:35.216030 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:35 crc kubenswrapper[4826]: I0131 07:37:35.216095 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:35 crc kubenswrapper[4826]: I0131 07:37:35.216108 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:35 crc kubenswrapper[4826]: I0131 07:37:35.216126 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:35 crc kubenswrapper[4826]: I0131 07:37:35.216139 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:35Z","lastTransitionTime":"2026-01-31T07:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:35 crc kubenswrapper[4826]: I0131 07:37:35.322032 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:35 crc kubenswrapper[4826]: I0131 07:37:35.322072 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:35 crc kubenswrapper[4826]: I0131 07:37:35.322081 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:35 crc kubenswrapper[4826]: I0131 07:37:35.322095 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:35 crc kubenswrapper[4826]: I0131 07:37:35.322105 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:35Z","lastTransitionTime":"2026-01-31T07:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:35 crc kubenswrapper[4826]: I0131 07:37:35.424814 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:35 crc kubenswrapper[4826]: I0131 07:37:35.424863 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:35 crc kubenswrapper[4826]: I0131 07:37:35.424875 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:35 crc kubenswrapper[4826]: I0131 07:37:35.424891 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:35 crc kubenswrapper[4826]: I0131 07:37:35.424903 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:35Z","lastTransitionTime":"2026-01-31T07:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:35 crc kubenswrapper[4826]: I0131 07:37:35.527459 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:35 crc kubenswrapper[4826]: I0131 07:37:35.527508 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:35 crc kubenswrapper[4826]: I0131 07:37:35.527519 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:35 crc kubenswrapper[4826]: I0131 07:37:35.527535 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:35 crc kubenswrapper[4826]: I0131 07:37:35.527546 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:35Z","lastTransitionTime":"2026-01-31T07:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:35 crc kubenswrapper[4826]: I0131 07:37:35.629761 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:35 crc kubenswrapper[4826]: I0131 07:37:35.629843 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:35 crc kubenswrapper[4826]: I0131 07:37:35.629865 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:35 crc kubenswrapper[4826]: I0131 07:37:35.629888 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:35 crc kubenswrapper[4826]: I0131 07:37:35.629906 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:35Z","lastTransitionTime":"2026-01-31T07:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:35 crc kubenswrapper[4826]: I0131 07:37:35.733233 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:35 crc kubenswrapper[4826]: I0131 07:37:35.733299 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:35 crc kubenswrapper[4826]: I0131 07:37:35.733316 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:35 crc kubenswrapper[4826]: I0131 07:37:35.733341 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:35 crc kubenswrapper[4826]: I0131 07:37:35.733361 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:35Z","lastTransitionTime":"2026-01-31T07:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:35 crc kubenswrapper[4826]: I0131 07:37:35.809008 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:37:35 crc kubenswrapper[4826]: I0131 07:37:35.809117 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:37:35 crc kubenswrapper[4826]: E0131 07:37:35.809248 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qrw7j" podUID="251ad51e-c383-4684-bfdb-2b9ce8098cc6" Jan 31 07:37:35 crc kubenswrapper[4826]: I0131 07:37:35.809228 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 21:25:22.313073226 +0000 UTC Jan 31 07:37:35 crc kubenswrapper[4826]: E0131 07:37:35.809377 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:37:35 crc kubenswrapper[4826]: I0131 07:37:35.836062 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:35 crc kubenswrapper[4826]: I0131 07:37:35.836141 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:35 crc kubenswrapper[4826]: I0131 07:37:35.836161 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:35 crc kubenswrapper[4826]: I0131 07:37:35.836184 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:35 crc kubenswrapper[4826]: I0131 07:37:35.836204 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:35Z","lastTransitionTime":"2026-01-31T07:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:35 crc kubenswrapper[4826]: I0131 07:37:35.938900 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:35 crc kubenswrapper[4826]: I0131 07:37:35.938954 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:35 crc kubenswrapper[4826]: I0131 07:37:35.938993 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:35 crc kubenswrapper[4826]: I0131 07:37:35.939016 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:35 crc kubenswrapper[4826]: I0131 07:37:35.939035 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:35Z","lastTransitionTime":"2026-01-31T07:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.042336 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.042400 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.042417 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.042441 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.042460 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:36Z","lastTransitionTime":"2026-01-31T07:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.145451 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.145573 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.145599 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.145632 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.145655 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:36Z","lastTransitionTime":"2026-01-31T07:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.248549 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.248602 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.248617 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.248638 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.248652 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:36Z","lastTransitionTime":"2026-01-31T07:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.352206 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.352299 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.352323 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.352354 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.352377 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:36Z","lastTransitionTime":"2026-01-31T07:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.455033 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.455093 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.455114 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.455138 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.455154 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:36Z","lastTransitionTime":"2026-01-31T07:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.558206 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.558271 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.558291 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.558317 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.558336 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:36Z","lastTransitionTime":"2026-01-31T07:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.661732 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.661791 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.661809 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.661832 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.661854 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:36Z","lastTransitionTime":"2026-01-31T07:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.663372 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.663430 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.663454 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.663480 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.663500 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:36Z","lastTransitionTime":"2026-01-31T07:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:36 crc kubenswrapper[4826]: E0131 07:37:36.683742 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"410cfc83-ff74-4210-b833-727c4d6db644\\\",\\\"systemUUID\\\":\\\"5dc4a50b-5ade-4352-ba95-1ca9483f1f64\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:36Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.689043 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.689084 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.689099 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.689120 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.689135 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:36Z","lastTransitionTime":"2026-01-31T07:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:36 crc kubenswrapper[4826]: E0131 07:37:36.709302 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"410cfc83-ff74-4210-b833-727c4d6db644\\\",\\\"systemUUID\\\":\\\"5dc4a50b-5ade-4352-ba95-1ca9483f1f64\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:36Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.714857 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.714926 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.714947 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.715011 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.715038 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:36Z","lastTransitionTime":"2026-01-31T07:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:36 crc kubenswrapper[4826]: E0131 07:37:36.735380 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"410cfc83-ff74-4210-b833-727c4d6db644\\\",\\\"systemUUID\\\":\\\"5dc4a50b-5ade-4352-ba95-1ca9483f1f64\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:36Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.740863 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.740912 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.740932 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.740955 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.741020 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:36Z","lastTransitionTime":"2026-01-31T07:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:36 crc kubenswrapper[4826]: E0131 07:37:36.761713 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"410cfc83-ff74-4210-b833-727c4d6db644\\\",\\\"systemUUID\\\":\\\"5dc4a50b-5ade-4352-ba95-1ca9483f1f64\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:36Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.767127 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.767218 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.767239 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.767297 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.767317 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:36Z","lastTransitionTime":"2026-01-31T07:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:36 crc kubenswrapper[4826]: E0131 07:37:36.787447 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"410cfc83-ff74-4210-b833-727c4d6db644\\\",\\\"systemUUID\\\":\\\"5dc4a50b-5ade-4352-ba95-1ca9483f1f64\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:36Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:36 crc kubenswrapper[4826]: E0131 07:37:36.787688 4826 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.790124 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.790203 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.790236 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.790386 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.790422 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:36Z","lastTransitionTime":"2026-01-31T07:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.808933 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.808939 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:37:36 crc kubenswrapper[4826]: E0131 07:37:36.809192 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:37:36 crc kubenswrapper[4826]: E0131 07:37:36.809293 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.809354 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 23:19:58.245301302 +0000 UTC Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.892927 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.893015 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.893035 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.893099 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:36 crc kubenswrapper[4826]: I0131 07:37:36.893148 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:36Z","lastTransitionTime":"2026-01-31T07:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:37 crc kubenswrapper[4826]: I0131 07:37:37.000648 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:37 crc kubenswrapper[4826]: I0131 07:37:37.000690 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:37 crc kubenswrapper[4826]: I0131 07:37:37.000702 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:37 crc kubenswrapper[4826]: I0131 07:37:37.000719 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:37 crc kubenswrapper[4826]: I0131 07:37:37.000732 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:37Z","lastTransitionTime":"2026-01-31T07:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:37 crc kubenswrapper[4826]: I0131 07:37:37.104245 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:37 crc kubenswrapper[4826]: I0131 07:37:37.104308 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:37 crc kubenswrapper[4826]: I0131 07:37:37.104326 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:37 crc kubenswrapper[4826]: I0131 07:37:37.104350 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:37 crc kubenswrapper[4826]: I0131 07:37:37.104368 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:37Z","lastTransitionTime":"2026-01-31T07:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:37 crc kubenswrapper[4826]: I0131 07:37:37.206809 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:37 crc kubenswrapper[4826]: I0131 07:37:37.206874 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:37 crc kubenswrapper[4826]: I0131 07:37:37.206892 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:37 crc kubenswrapper[4826]: I0131 07:37:37.206915 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:37 crc kubenswrapper[4826]: I0131 07:37:37.206932 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:37Z","lastTransitionTime":"2026-01-31T07:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:37 crc kubenswrapper[4826]: I0131 07:37:37.309626 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:37 crc kubenswrapper[4826]: I0131 07:37:37.309668 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:37 crc kubenswrapper[4826]: I0131 07:37:37.309680 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:37 crc kubenswrapper[4826]: I0131 07:37:37.309696 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:37 crc kubenswrapper[4826]: I0131 07:37:37.309709 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:37Z","lastTransitionTime":"2026-01-31T07:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:37 crc kubenswrapper[4826]: I0131 07:37:37.412336 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:37 crc kubenswrapper[4826]: I0131 07:37:37.412383 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:37 crc kubenswrapper[4826]: I0131 07:37:37.412401 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:37 crc kubenswrapper[4826]: I0131 07:37:37.412425 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:37 crc kubenswrapper[4826]: I0131 07:37:37.412442 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:37Z","lastTransitionTime":"2026-01-31T07:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:37 crc kubenswrapper[4826]: I0131 07:37:37.515699 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:37 crc kubenswrapper[4826]: I0131 07:37:37.515755 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:37 crc kubenswrapper[4826]: I0131 07:37:37.515772 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:37 crc kubenswrapper[4826]: I0131 07:37:37.515792 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:37 crc kubenswrapper[4826]: I0131 07:37:37.515810 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:37Z","lastTransitionTime":"2026-01-31T07:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:37 crc kubenswrapper[4826]: I0131 07:37:37.619178 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:37 crc kubenswrapper[4826]: I0131 07:37:37.619222 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:37 crc kubenswrapper[4826]: I0131 07:37:37.619236 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:37 crc kubenswrapper[4826]: I0131 07:37:37.619255 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:37 crc kubenswrapper[4826]: I0131 07:37:37.619287 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:37Z","lastTransitionTime":"2026-01-31T07:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:37 crc kubenswrapper[4826]: I0131 07:37:37.722050 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:37 crc kubenswrapper[4826]: I0131 07:37:37.722110 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:37 crc kubenswrapper[4826]: I0131 07:37:37.722130 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:37 crc kubenswrapper[4826]: I0131 07:37:37.722153 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:37 crc kubenswrapper[4826]: I0131 07:37:37.722170 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:37Z","lastTransitionTime":"2026-01-31T07:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:37 crc kubenswrapper[4826]: I0131 07:37:37.808364 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:37:37 crc kubenswrapper[4826]: I0131 07:37:37.808388 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:37:37 crc kubenswrapper[4826]: E0131 07:37:37.808613 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:37:37 crc kubenswrapper[4826]: E0131 07:37:37.808694 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qrw7j" podUID="251ad51e-c383-4684-bfdb-2b9ce8098cc6" Jan 31 07:37:37 crc kubenswrapper[4826]: I0131 07:37:37.809788 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 21:07:56.059903463 +0000 UTC Jan 31 07:37:37 crc kubenswrapper[4826]: I0131 07:37:37.824034 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:37 crc kubenswrapper[4826]: I0131 07:37:37.824288 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:37 crc kubenswrapper[4826]: I0131 07:37:37.824436 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:37 crc kubenswrapper[4826]: I0131 07:37:37.824585 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:37 crc kubenswrapper[4826]: I0131 07:37:37.824739 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:37Z","lastTransitionTime":"2026-01-31T07:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:37 crc kubenswrapper[4826]: I0131 07:37:37.927494 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:37 crc kubenswrapper[4826]: I0131 07:37:37.927556 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:37 crc kubenswrapper[4826]: I0131 07:37:37.927578 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:37 crc kubenswrapper[4826]: I0131 07:37:37.927611 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:37 crc kubenswrapper[4826]: I0131 07:37:37.927634 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:37Z","lastTransitionTime":"2026-01-31T07:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.029779 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.029850 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.029867 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.029892 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.029910 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:38Z","lastTransitionTime":"2026-01-31T07:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.133626 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.133673 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.133757 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.133811 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.133829 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:38Z","lastTransitionTime":"2026-01-31T07:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.236509 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.236589 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.236613 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.236642 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.236664 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:38Z","lastTransitionTime":"2026-01-31T07:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.339494 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.339546 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.339562 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.339588 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.339606 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:38Z","lastTransitionTime":"2026-01-31T07:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.443015 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.443078 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.443096 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.443120 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.443137 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:38Z","lastTransitionTime":"2026-01-31T07:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.546390 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.546454 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.546473 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.546500 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.546518 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:38Z","lastTransitionTime":"2026-01-31T07:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.649417 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.649820 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.650093 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.650356 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.650597 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:38Z","lastTransitionTime":"2026-01-31T07:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.753310 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.753346 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.753356 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.753371 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.753381 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:38Z","lastTransitionTime":"2026-01-31T07:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.808269 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.808313 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:37:38 crc kubenswrapper[4826]: E0131 07:37:38.808479 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:37:38 crc kubenswrapper[4826]: E0131 07:37:38.808641 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.810558 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 19:42:57.262682383 +0000 UTC Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.823691 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4a77305-8126-4747-b0fb-6ac1e27be524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6744ae097d36ec0c3998da84dd5d0b9a274604c91b97d324f317381c9ba7f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db00cb3b7ea8c50c1725ebdf0d9eab1e7e1f96fbfe471ec206e102e74cefd82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db00cb3b7ea8c50c1725ebdf0d9eab1e7e1f96fbfe471ec206e102e74cefd82f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\
\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.840918 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.856995 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.857074 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.857093 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.857122 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.857141 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:38Z","lastTransitionTime":"2026-01-31T07:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.858840 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7xbj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32b69655-b895-4001-8f50-17a9d73056f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6e04fe556789c17beb1c110cda356f63730016f0d8f00d968ec9af1a10f658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6xcl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15eef89d03854f4fb45924748ccf1c12133be03efd45374daa2cceafd673136a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6xcl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h7xbj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.873276 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tp2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d1fbcfd-bce9-4f1c-bc64-d48b979a95d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91dfef0e05500b5bc7d60a1905667d2ddf67a56af60dc72a55d1b29223d6586a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5x55d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tp2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.895281 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5db04412-b62f-417c-91ac-776767d6102f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d893aaee284ab112e56f303b51894196d333364ced205c5f2b1d55f1c21b5f33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d893aaee284ab112e56f303b51894196d333364ced205c5f2b1d55f1c21b5f33\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:37:31Z\\\",\\\"message\\\":\\\"ace.go:16] APB queuing policies: map[] for namespace: openshift-apiserver\\\\nI0131 07:37:31.704647 6894 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-cluster-storage-operator\\\\nI0131 07:37:31.704652 6894 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-etcd\\\\nI0131 07:37:31.704658 6894 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-kni-infra\\\\nI0131 07:37:31.703999 6894 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type *factory.egressIPNamespace\\\\nI0131 07:37:31.704695 6894 nad_controller.go:166] [zone-nad-controller NAD controller]: shutting down\\\\nI0131 07:37:31.704485 6894 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 07:37:31.705056 6894 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 07:37:31.705120 6894 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 07:37:31.705086 6894 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 07:37:31.705142 6894 factory.go:656] Stopping watch factory\\\\nI0131 07:37:31.705172 6894 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 07:37:31.705198 6894 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:37:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qvwnb_openshift-ovn-kubernetes(5db04412-b62f-417c-91ac-776767d6102f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvwnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.912705 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6lwnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b53fa37-2f8b-49d4-bd96-2bfe008beba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4800822b9eb7af18af4eaf5699b582b98e900e6a645e17ec2a5a09757777ad75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm9x7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6lwnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.935871 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63a72bdc-ae2c-4ce3-bad4-877f01e2b370\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b561e0c482987fdc1c08682d00bf4c933f12313569408446d6c8a603c3556ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3caf57a9650c133c55f50d69726a83d40040bc7f173d536444d89f99854942ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30b86aff0d4af7948ead36a9ed2f4fa0fb5888de8ecdfa6eb6ffc1867030babe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-
apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf2c3303222b7cdaf3b5aa2c1847ff9e097579edeb4f7333079c340cc95ef46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975a5a5c09f940e9a57b6c6df02fdf725740fb21d70046ad5bbe3f24dec7a0bd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"iserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 07:36:22.700868 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:36:22.705249 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2348209780/tls.crt::/tmp/serving-cert-2348209780/tls.key\\\\\\\"\\\\nI0131 07:36:28.522043 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 07:36:28.528635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 07:36:28.528685 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 07:36:28.528725 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 07:36:28.528736 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 07:36:28.539744 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 07:36:28.539766 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539773 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 07:36:28.539783 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 07:36:28.539787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 07:36:28.539791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 07:36:28.539944 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 07:36:28.545260 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0131 07:36:28.545358 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd3634cc3552a315ee1ef8114db096d456b7a97a92e68ab406ceccdf1eb57f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.948118 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0a4082a-cdda-4613-8e8d-bd97c3820759\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edb15aeef1103cdf26c3e72bac394815e4ea864cccc8732f40a601af2fcd34b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d04dbf9901e3bd43ac93338643a45135bb89f2d03a5a3d1236289aee90a6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a09b3371dd08b059fbfd5a105f314ae92bdcc4804cd36d0709b4afae48842b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b6360543a81dea5540a5f02edc908d3b76daf5c4df5f8220215fb75f9504c5b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b6360543a81dea5540a5f02edc908d3b76daf5c4df5f8220215fb75f9504c5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.960000 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.960032 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.960042 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.960061 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.960076 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:38Z","lastTransitionTime":"2026-01-31T07:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.961498 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.973555 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.987322 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37289ae623d633b8d9c41697e0022bac0ef40df6da836d43df3bb36e83a7a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:38 crc kubenswrapper[4826]: I0131 07:37:38.999064 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d1814f450832909bd5bf1c4749feef46b0919e997cb1d241c988f4ec81051d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e440dc1f7846af93bbe78f17e92d44d2f72ee0fc28c4c3b746ee3f67f0cd899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:38Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:39 crc 
kubenswrapper[4826]: I0131 07:37:39.013642 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed10f53b-565a-4d14-a1d8-feabc15f08ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d04378db699fb51e197d67da8fbc728904d522724c2a02158c546497905d106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03515f366b860c28601b042a9a45c022c9a25aa15fab97b3e3698d3d7c1eb37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8v6ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-31T07:37:39Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.037458 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wtbb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b672fd90-a70c-4f27-b711-e58f269efccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f290a2cfcec4d14d2b2c21f7c56558675b8b7171bf9d1895d89f632cb35b815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dc6a9e1063a189fc3331ef1bc5bd05ac611295089d1da241ca07c529e63ab7e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:37:17Z\\\",\\\"message\\\":\\\"2026-01-31T07:36:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c53c4ff3-1503-4424-b047-c864bb103fcc\\\\n2026-01-31T07:36:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c53c4ff3-1503-4424-b047-c864bb103fcc to /host/opt/cni/bin/\\\\n2026-01-31T07:36:32Z [verbose] multus-daemon started\\\\n2026-01-31T07:36:32Z [verbose] Readiness Indicator file check\\\\n2026-01-31T07:37:17Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wtbb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:39Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.054368 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qrw7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"251ad51e-c383-4684-bfdb-2b9ce8098cc6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv8tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv8tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qrw7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:39Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.064244 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.064320 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.064340 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.064368 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.064388 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:39Z","lastTransitionTime":"2026-01-31T07:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.090289 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0580356-3fa0-4446-b20b-afc4164435c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e817cf54f4d9940c02491ebbe29ded4c0e8b3832c315b04eba99a9d36d267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b99d4cba5d569a977b7d05a7190498a4e34fefbf7dc44db8148ba398de0f5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://adf31702fe035db3a963262acd0c20802f8fd7230cf5c01cb578fa88179bd129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://161a31c4d5a9b58bef37d7177019e6eec0b53c956877c212337834672f4ac5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1796ef962408ccd32578b856d0339f68286f421ffbb915e0ae46c7a5989d1958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-31T07:36:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:39Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.111294 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"661bc240-bc34-44a0-bd2f-72708c4a5dc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc26e2f01400aeeb0956ae2aef52df0251350f193805892e534081b4cac248e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa95fe2bb0adc0053648f26282e011bad0ea1327773688c954b8ffbf222d6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4124eac7bdc757ef7c1f68b28fb1f54a56cda06e9b781a026277efe04172f8e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e986869735ddfd85e88828d0123e35bbc507c1d31efba8c420632539ff4a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:39Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.130879 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d2b49940601c069c38a05027bd5e801dd377d4aed46a502c192ca9941468a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-31T07:37:39Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.153750 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5fm7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baebdb6a236b3414ff5aac56d2f2c04503b7d59c18aa78ef295484087a866b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e9
dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf9c0eb924928bb2a7752b55e6ccb0fccc514359751bdf295f4f88820ac0bddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf9c0eb924928bb2a7752b55e6ccb0fccc514359751bdf295f4f88820ac0bddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5fm7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:39Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.168312 4826 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.168364 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.168381 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.168404 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.168422 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:39Z","lastTransitionTime":"2026-01-31T07:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.272270 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.272331 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.272348 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.272375 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.272398 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:39Z","lastTransitionTime":"2026-01-31T07:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.376258 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.376336 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.376359 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.376387 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.376411 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:39Z","lastTransitionTime":"2026-01-31T07:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.479612 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.479672 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.479690 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.479714 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.479731 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:39Z","lastTransitionTime":"2026-01-31T07:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.583031 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.583105 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.583129 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.583161 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.583181 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:39Z","lastTransitionTime":"2026-01-31T07:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.686492 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.686550 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.686568 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.686592 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.686610 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:39Z","lastTransitionTime":"2026-01-31T07:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.789355 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.789414 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.789429 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.789448 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.789464 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:39Z","lastTransitionTime":"2026-01-31T07:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.809158 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.809170 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:37:39 crc kubenswrapper[4826]: E0131 07:37:39.809326 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:37:39 crc kubenswrapper[4826]: E0131 07:37:39.809719 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qrw7j" podUID="251ad51e-c383-4684-bfdb-2b9ce8098cc6" Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.810823 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 15:15:59.793833066 +0000 UTC Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.892308 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.892665 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.892794 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.892902 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.893027 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:39Z","lastTransitionTime":"2026-01-31T07:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.995665 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.995725 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.995744 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.995772 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:39 crc kubenswrapper[4826]: I0131 07:37:39.995792 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:39Z","lastTransitionTime":"2026-01-31T07:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:40 crc kubenswrapper[4826]: I0131 07:37:40.098560 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:40 crc kubenswrapper[4826]: I0131 07:37:40.098618 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:40 crc kubenswrapper[4826]: I0131 07:37:40.098635 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:40 crc kubenswrapper[4826]: I0131 07:37:40.098662 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:40 crc kubenswrapper[4826]: I0131 07:37:40.098683 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:40Z","lastTransitionTime":"2026-01-31T07:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:40 crc kubenswrapper[4826]: I0131 07:37:40.201099 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:40 crc kubenswrapper[4826]: I0131 07:37:40.201152 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:40 crc kubenswrapper[4826]: I0131 07:37:40.201169 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:40 crc kubenswrapper[4826]: I0131 07:37:40.201193 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:40 crc kubenswrapper[4826]: I0131 07:37:40.201217 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:40Z","lastTransitionTime":"2026-01-31T07:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:40 crc kubenswrapper[4826]: I0131 07:37:40.304106 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:40 crc kubenswrapper[4826]: I0131 07:37:40.304161 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:40 crc kubenswrapper[4826]: I0131 07:37:40.304178 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:40 crc kubenswrapper[4826]: I0131 07:37:40.304205 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:40 crc kubenswrapper[4826]: I0131 07:37:40.304227 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:40Z","lastTransitionTime":"2026-01-31T07:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:40 crc kubenswrapper[4826]: I0131 07:37:40.407145 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:40 crc kubenswrapper[4826]: I0131 07:37:40.407198 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:40 crc kubenswrapper[4826]: I0131 07:37:40.407215 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:40 crc kubenswrapper[4826]: I0131 07:37:40.407240 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:40 crc kubenswrapper[4826]: I0131 07:37:40.407257 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:40Z","lastTransitionTime":"2026-01-31T07:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:40 crc kubenswrapper[4826]: I0131 07:37:40.510303 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:40 crc kubenswrapper[4826]: I0131 07:37:40.510359 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:40 crc kubenswrapper[4826]: I0131 07:37:40.510375 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:40 crc kubenswrapper[4826]: I0131 07:37:40.510400 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:40 crc kubenswrapper[4826]: I0131 07:37:40.510416 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:40Z","lastTransitionTime":"2026-01-31T07:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:40 crc kubenswrapper[4826]: I0131 07:37:40.614077 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:40 crc kubenswrapper[4826]: I0131 07:37:40.614133 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:40 crc kubenswrapper[4826]: I0131 07:37:40.614156 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:40 crc kubenswrapper[4826]: I0131 07:37:40.614184 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:40 crc kubenswrapper[4826]: I0131 07:37:40.614203 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:40Z","lastTransitionTime":"2026-01-31T07:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:40 crc kubenswrapper[4826]: I0131 07:37:40.717052 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:40 crc kubenswrapper[4826]: I0131 07:37:40.717125 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:40 crc kubenswrapper[4826]: I0131 07:37:40.717149 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:40 crc kubenswrapper[4826]: I0131 07:37:40.717178 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:40 crc kubenswrapper[4826]: I0131 07:37:40.717201 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:40Z","lastTransitionTime":"2026-01-31T07:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:40 crc kubenswrapper[4826]: I0131 07:37:40.809065 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:37:40 crc kubenswrapper[4826]: I0131 07:37:40.809103 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:37:40 crc kubenswrapper[4826]: I0131 07:37:40.813197 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 13:00:19.934726861 +0000 UTC Jan 31 07:37:40 crc kubenswrapper[4826]: E0131 07:37:40.815676 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:37:40 crc kubenswrapper[4826]: E0131 07:37:40.816287 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:37:40 crc kubenswrapper[4826]: I0131 07:37:40.820189 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:40 crc kubenswrapper[4826]: I0131 07:37:40.820214 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:40 crc kubenswrapper[4826]: I0131 07:37:40.820223 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:40 crc kubenswrapper[4826]: I0131 07:37:40.820234 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:40 crc kubenswrapper[4826]: I0131 07:37:40.820243 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:40Z","lastTransitionTime":"2026-01-31T07:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:40 crc kubenswrapper[4826]: I0131 07:37:40.922789 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:40 crc kubenswrapper[4826]: I0131 07:37:40.922852 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:40 crc kubenswrapper[4826]: I0131 07:37:40.922871 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:40 crc kubenswrapper[4826]: I0131 07:37:40.922896 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:40 crc kubenswrapper[4826]: I0131 07:37:40.922914 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:40Z","lastTransitionTime":"2026-01-31T07:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:41 crc kubenswrapper[4826]: I0131 07:37:41.026013 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:41 crc kubenswrapper[4826]: I0131 07:37:41.026205 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:41 crc kubenswrapper[4826]: I0131 07:37:41.026240 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:41 crc kubenswrapper[4826]: I0131 07:37:41.026270 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:41 crc kubenswrapper[4826]: I0131 07:37:41.026294 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:41Z","lastTransitionTime":"2026-01-31T07:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:41 crc kubenswrapper[4826]: I0131 07:37:41.129845 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:41 crc kubenswrapper[4826]: I0131 07:37:41.129892 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:41 crc kubenswrapper[4826]: I0131 07:37:41.129904 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:41 crc kubenswrapper[4826]: I0131 07:37:41.129924 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:41 crc kubenswrapper[4826]: I0131 07:37:41.129939 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:41Z","lastTransitionTime":"2026-01-31T07:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:41 crc kubenswrapper[4826]: I0131 07:37:41.233102 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:41 crc kubenswrapper[4826]: I0131 07:37:41.233154 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:41 crc kubenswrapper[4826]: I0131 07:37:41.233171 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:41 crc kubenswrapper[4826]: I0131 07:37:41.233196 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:41 crc kubenswrapper[4826]: I0131 07:37:41.233214 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:41Z","lastTransitionTime":"2026-01-31T07:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:41 crc kubenswrapper[4826]: I0131 07:37:41.336384 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:41 crc kubenswrapper[4826]: I0131 07:37:41.336444 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:41 crc kubenswrapper[4826]: I0131 07:37:41.336457 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:41 crc kubenswrapper[4826]: I0131 07:37:41.336477 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:41 crc kubenswrapper[4826]: I0131 07:37:41.336490 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:41Z","lastTransitionTime":"2026-01-31T07:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:41 crc kubenswrapper[4826]: I0131 07:37:41.440268 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:41 crc kubenswrapper[4826]: I0131 07:37:41.440335 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:41 crc kubenswrapper[4826]: I0131 07:37:41.440357 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:41 crc kubenswrapper[4826]: I0131 07:37:41.440385 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:41 crc kubenswrapper[4826]: I0131 07:37:41.440406 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:41Z","lastTransitionTime":"2026-01-31T07:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:41 crc kubenswrapper[4826]: I0131 07:37:41.543475 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:41 crc kubenswrapper[4826]: I0131 07:37:41.543605 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:41 crc kubenswrapper[4826]: I0131 07:37:41.543636 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:41 crc kubenswrapper[4826]: I0131 07:37:41.543670 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:41 crc kubenswrapper[4826]: I0131 07:37:41.543696 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:41Z","lastTransitionTime":"2026-01-31T07:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:41 crc kubenswrapper[4826]: I0131 07:37:41.647186 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:41 crc kubenswrapper[4826]: I0131 07:37:41.647253 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:41 crc kubenswrapper[4826]: I0131 07:37:41.647271 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:41 crc kubenswrapper[4826]: I0131 07:37:41.647295 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:41 crc kubenswrapper[4826]: I0131 07:37:41.647313 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:41Z","lastTransitionTime":"2026-01-31T07:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:41 crc kubenswrapper[4826]: I0131 07:37:41.750515 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:41 crc kubenswrapper[4826]: I0131 07:37:41.750544 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:41 crc kubenswrapper[4826]: I0131 07:37:41.750555 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:41 crc kubenswrapper[4826]: I0131 07:37:41.750570 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:41 crc kubenswrapper[4826]: I0131 07:37:41.750581 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:41Z","lastTransitionTime":"2026-01-31T07:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:41 crc kubenswrapper[4826]: I0131 07:37:41.808506 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:37:41 crc kubenswrapper[4826]: I0131 07:37:41.808530 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:37:41 crc kubenswrapper[4826]: E0131 07:37:41.808660 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:37:41 crc kubenswrapper[4826]: E0131 07:37:41.808881 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrw7j" podUID="251ad51e-c383-4684-bfdb-2b9ce8098cc6" Jan 31 07:37:41 crc kubenswrapper[4826]: I0131 07:37:41.813619 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 21:09:13.021437722 +0000 UTC Jan 31 07:37:41 crc kubenswrapper[4826]: I0131 07:37:41.853521 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:41 crc kubenswrapper[4826]: I0131 07:37:41.853566 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:41 crc kubenswrapper[4826]: I0131 07:37:41.853584 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:41 crc kubenswrapper[4826]: I0131 07:37:41.853605 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:41 crc kubenswrapper[4826]: I0131 07:37:41.853624 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:41Z","lastTransitionTime":"2026-01-31T07:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:41 crc kubenswrapper[4826]: I0131 07:37:41.956904 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:41 crc kubenswrapper[4826]: I0131 07:37:41.956945 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:41 crc kubenswrapper[4826]: I0131 07:37:41.956954 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:41 crc kubenswrapper[4826]: I0131 07:37:41.956983 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:41 crc kubenswrapper[4826]: I0131 07:37:41.956993 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:41Z","lastTransitionTime":"2026-01-31T07:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:42 crc kubenswrapper[4826]: I0131 07:37:42.060145 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:42 crc kubenswrapper[4826]: I0131 07:37:42.060223 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:42 crc kubenswrapper[4826]: I0131 07:37:42.060245 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:42 crc kubenswrapper[4826]: I0131 07:37:42.060278 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:42 crc kubenswrapper[4826]: I0131 07:37:42.060300 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:42Z","lastTransitionTime":"2026-01-31T07:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:42 crc kubenswrapper[4826]: I0131 07:37:42.162395 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:42 crc kubenswrapper[4826]: I0131 07:37:42.162438 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:42 crc kubenswrapper[4826]: I0131 07:37:42.162451 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:42 crc kubenswrapper[4826]: I0131 07:37:42.162467 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:42 crc kubenswrapper[4826]: I0131 07:37:42.162477 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:42Z","lastTransitionTime":"2026-01-31T07:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:42 crc kubenswrapper[4826]: I0131 07:37:42.265477 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:42 crc kubenswrapper[4826]: I0131 07:37:42.265515 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:42 crc kubenswrapper[4826]: I0131 07:37:42.265525 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:42 crc kubenswrapper[4826]: I0131 07:37:42.265541 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:42 crc kubenswrapper[4826]: I0131 07:37:42.265553 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:42Z","lastTransitionTime":"2026-01-31T07:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:42 crc kubenswrapper[4826]: I0131 07:37:42.368527 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:42 crc kubenswrapper[4826]: I0131 07:37:42.368569 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:42 crc kubenswrapper[4826]: I0131 07:37:42.368584 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:42 crc kubenswrapper[4826]: I0131 07:37:42.368600 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:42 crc kubenswrapper[4826]: I0131 07:37:42.368613 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:42Z","lastTransitionTime":"2026-01-31T07:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:42 crc kubenswrapper[4826]: I0131 07:37:42.471183 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:42 crc kubenswrapper[4826]: I0131 07:37:42.471241 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:42 crc kubenswrapper[4826]: I0131 07:37:42.471258 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:42 crc kubenswrapper[4826]: I0131 07:37:42.471282 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:42 crc kubenswrapper[4826]: I0131 07:37:42.471299 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:42Z","lastTransitionTime":"2026-01-31T07:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:42 crc kubenswrapper[4826]: I0131 07:37:42.573672 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:42 crc kubenswrapper[4826]: I0131 07:37:42.573736 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:42 crc kubenswrapper[4826]: I0131 07:37:42.573753 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:42 crc kubenswrapper[4826]: I0131 07:37:42.573778 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:42 crc kubenswrapper[4826]: I0131 07:37:42.573796 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:42Z","lastTransitionTime":"2026-01-31T07:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:42 crc kubenswrapper[4826]: I0131 07:37:42.676480 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:42 crc kubenswrapper[4826]: I0131 07:37:42.676549 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:42 crc kubenswrapper[4826]: I0131 07:37:42.676570 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:42 crc kubenswrapper[4826]: I0131 07:37:42.676598 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:42 crc kubenswrapper[4826]: I0131 07:37:42.676621 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:42Z","lastTransitionTime":"2026-01-31T07:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:42 crc kubenswrapper[4826]: I0131 07:37:42.780685 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:42 crc kubenswrapper[4826]: I0131 07:37:42.780744 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:42 crc kubenswrapper[4826]: I0131 07:37:42.780760 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:42 crc kubenswrapper[4826]: I0131 07:37:42.780785 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:42 crc kubenswrapper[4826]: I0131 07:37:42.780806 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:42Z","lastTransitionTime":"2026-01-31T07:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:42 crc kubenswrapper[4826]: I0131 07:37:42.808064 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:37:42 crc kubenswrapper[4826]: I0131 07:37:42.808066 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:37:42 crc kubenswrapper[4826]: E0131 07:37:42.808638 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:37:42 crc kubenswrapper[4826]: E0131 07:37:42.808759 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:37:42 crc kubenswrapper[4826]: I0131 07:37:42.814061 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 18:14:08.23126853 +0000 UTC Jan 31 07:37:42 crc kubenswrapper[4826]: I0131 07:37:42.883917 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:42 crc kubenswrapper[4826]: I0131 07:37:42.884026 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:42 crc kubenswrapper[4826]: I0131 07:37:42.884043 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:42 crc kubenswrapper[4826]: I0131 07:37:42.884071 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:42 crc kubenswrapper[4826]: I0131 07:37:42.884089 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:42Z","lastTransitionTime":"2026-01-31T07:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:42 crc kubenswrapper[4826]: I0131 07:37:42.985853 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:42 crc kubenswrapper[4826]: I0131 07:37:42.985920 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:42 crc kubenswrapper[4826]: I0131 07:37:42.985943 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:42 crc kubenswrapper[4826]: I0131 07:37:42.986001 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:42 crc kubenswrapper[4826]: I0131 07:37:42.986026 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:42Z","lastTransitionTime":"2026-01-31T07:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:43 crc kubenswrapper[4826]: I0131 07:37:43.088509 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:43 crc kubenswrapper[4826]: I0131 07:37:43.088567 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:43 crc kubenswrapper[4826]: I0131 07:37:43.088584 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:43 crc kubenswrapper[4826]: I0131 07:37:43.088606 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:43 crc kubenswrapper[4826]: I0131 07:37:43.088624 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:43Z","lastTransitionTime":"2026-01-31T07:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:43 crc kubenswrapper[4826]: I0131 07:37:43.191377 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:43 crc kubenswrapper[4826]: I0131 07:37:43.191425 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:43 crc kubenswrapper[4826]: I0131 07:37:43.191436 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:43 crc kubenswrapper[4826]: I0131 07:37:43.191452 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:43 crc kubenswrapper[4826]: I0131 07:37:43.191464 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:43Z","lastTransitionTime":"2026-01-31T07:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:43 crc kubenswrapper[4826]: I0131 07:37:43.294198 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:43 crc kubenswrapper[4826]: I0131 07:37:43.294273 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:43 crc kubenswrapper[4826]: I0131 07:37:43.294287 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:43 crc kubenswrapper[4826]: I0131 07:37:43.294308 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:43 crc kubenswrapper[4826]: I0131 07:37:43.294329 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:43Z","lastTransitionTime":"2026-01-31T07:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:43 crc kubenswrapper[4826]: I0131 07:37:43.396621 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:43 crc kubenswrapper[4826]: I0131 07:37:43.396667 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:43 crc kubenswrapper[4826]: I0131 07:37:43.396678 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:43 crc kubenswrapper[4826]: I0131 07:37:43.396693 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:43 crc kubenswrapper[4826]: I0131 07:37:43.396705 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:43Z","lastTransitionTime":"2026-01-31T07:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:43 crc kubenswrapper[4826]: I0131 07:37:43.499105 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:43 crc kubenswrapper[4826]: I0131 07:37:43.499137 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:43 crc kubenswrapper[4826]: I0131 07:37:43.499146 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:43 crc kubenswrapper[4826]: I0131 07:37:43.499161 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:43 crc kubenswrapper[4826]: I0131 07:37:43.499171 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:43Z","lastTransitionTime":"2026-01-31T07:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:43 crc kubenswrapper[4826]: I0131 07:37:43.602387 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:43 crc kubenswrapper[4826]: I0131 07:37:43.602448 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:43 crc kubenswrapper[4826]: I0131 07:37:43.602496 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:43 crc kubenswrapper[4826]: I0131 07:37:43.602520 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:43 crc kubenswrapper[4826]: I0131 07:37:43.602538 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:43Z","lastTransitionTime":"2026-01-31T07:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:43 crc kubenswrapper[4826]: I0131 07:37:43.704513 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:43 crc kubenswrapper[4826]: I0131 07:37:43.704549 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:43 crc kubenswrapper[4826]: I0131 07:37:43.704558 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:43 crc kubenswrapper[4826]: I0131 07:37:43.704573 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:43 crc kubenswrapper[4826]: I0131 07:37:43.704582 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:43Z","lastTransitionTime":"2026-01-31T07:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:43 crc kubenswrapper[4826]: I0131 07:37:43.807858 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:37:43 crc kubenswrapper[4826]: I0131 07:37:43.807918 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:37:43 crc kubenswrapper[4826]: I0131 07:37:43.807880 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:43 crc kubenswrapper[4826]: I0131 07:37:43.808020 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:43 crc kubenswrapper[4826]: I0131 07:37:43.808037 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:43 crc kubenswrapper[4826]: I0131 07:37:43.808059 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:43 crc kubenswrapper[4826]: I0131 07:37:43.808076 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:43Z","lastTransitionTime":"2026-01-31T07:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:43 crc kubenswrapper[4826]: E0131 07:37:43.808136 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qrw7j" podUID="251ad51e-c383-4684-bfdb-2b9ce8098cc6" Jan 31 07:37:43 crc kubenswrapper[4826]: E0131 07:37:43.808046 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:37:43 crc kubenswrapper[4826]: I0131 07:37:43.815307 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 03:41:50.052158788 +0000 UTC Jan 31 07:37:43 crc kubenswrapper[4826]: I0131 07:37:43.909801 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:43 crc kubenswrapper[4826]: I0131 07:37:43.909839 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:43 crc kubenswrapper[4826]: I0131 07:37:43.909850 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:43 crc kubenswrapper[4826]: I0131 07:37:43.909864 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:43 crc kubenswrapper[4826]: I0131 07:37:43.909876 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:43Z","lastTransitionTime":"2026-01-31T07:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:44 crc kubenswrapper[4826]: I0131 07:37:44.012580 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:44 crc kubenswrapper[4826]: I0131 07:37:44.012641 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:44 crc kubenswrapper[4826]: I0131 07:37:44.012657 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:44 crc kubenswrapper[4826]: I0131 07:37:44.012681 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:44 crc kubenswrapper[4826]: I0131 07:37:44.012698 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:44Z","lastTransitionTime":"2026-01-31T07:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:44 crc kubenswrapper[4826]: I0131 07:37:44.116074 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:44 crc kubenswrapper[4826]: I0131 07:37:44.116140 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:44 crc kubenswrapper[4826]: I0131 07:37:44.116151 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:44 crc kubenswrapper[4826]: I0131 07:37:44.116170 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:44 crc kubenswrapper[4826]: I0131 07:37:44.116182 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:44Z","lastTransitionTime":"2026-01-31T07:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:44 crc kubenswrapper[4826]: I0131 07:37:44.219522 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:44 crc kubenswrapper[4826]: I0131 07:37:44.219571 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:44 crc kubenswrapper[4826]: I0131 07:37:44.219597 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:44 crc kubenswrapper[4826]: I0131 07:37:44.219615 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:44 crc kubenswrapper[4826]: I0131 07:37:44.219626 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:44Z","lastTransitionTime":"2026-01-31T07:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:44 crc kubenswrapper[4826]: I0131 07:37:44.322347 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:44 crc kubenswrapper[4826]: I0131 07:37:44.322419 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:44 crc kubenswrapper[4826]: I0131 07:37:44.322442 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:44 crc kubenswrapper[4826]: I0131 07:37:44.322473 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:44 crc kubenswrapper[4826]: I0131 07:37:44.322496 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:44Z","lastTransitionTime":"2026-01-31T07:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:44 crc kubenswrapper[4826]: I0131 07:37:44.425849 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:44 crc kubenswrapper[4826]: I0131 07:37:44.425912 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:44 crc kubenswrapper[4826]: I0131 07:37:44.425935 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:44 crc kubenswrapper[4826]: I0131 07:37:44.425959 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:44 crc kubenswrapper[4826]: I0131 07:37:44.426015 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:44Z","lastTransitionTime":"2026-01-31T07:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:44 crc kubenswrapper[4826]: I0131 07:37:44.529207 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:44 crc kubenswrapper[4826]: I0131 07:37:44.529258 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:44 crc kubenswrapper[4826]: I0131 07:37:44.529282 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:44 crc kubenswrapper[4826]: I0131 07:37:44.529309 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:44 crc kubenswrapper[4826]: I0131 07:37:44.529330 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:44Z","lastTransitionTime":"2026-01-31T07:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:44 crc kubenswrapper[4826]: I0131 07:37:44.631810 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:44 crc kubenswrapper[4826]: I0131 07:37:44.631879 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:44 crc kubenswrapper[4826]: I0131 07:37:44.631901 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:44 crc kubenswrapper[4826]: I0131 07:37:44.631929 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:44 crc kubenswrapper[4826]: I0131 07:37:44.631953 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:44Z","lastTransitionTime":"2026-01-31T07:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:44 crc kubenswrapper[4826]: I0131 07:37:44.734289 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:44 crc kubenswrapper[4826]: I0131 07:37:44.734360 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:44 crc kubenswrapper[4826]: I0131 07:37:44.734381 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:44 crc kubenswrapper[4826]: I0131 07:37:44.734409 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:44 crc kubenswrapper[4826]: I0131 07:37:44.734446 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:44Z","lastTransitionTime":"2026-01-31T07:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:44 crc kubenswrapper[4826]: I0131 07:37:44.808555 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:37:44 crc kubenswrapper[4826]: I0131 07:37:44.808714 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:37:44 crc kubenswrapper[4826]: E0131 07:37:44.808826 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:37:44 crc kubenswrapper[4826]: E0131 07:37:44.808902 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:37:44 crc kubenswrapper[4826]: I0131 07:37:44.816432 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 18:20:08.53041082 +0000 UTC Jan 31 07:37:44 crc kubenswrapper[4826]: I0131 07:37:44.837385 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:44 crc kubenswrapper[4826]: I0131 07:37:44.837433 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:44 crc kubenswrapper[4826]: I0131 07:37:44.837443 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:44 crc kubenswrapper[4826]: I0131 07:37:44.837458 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:44 crc kubenswrapper[4826]: I0131 07:37:44.837469 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:44Z","lastTransitionTime":"2026-01-31T07:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:44 crc kubenswrapper[4826]: I0131 07:37:44.941074 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:44 crc kubenswrapper[4826]: I0131 07:37:44.941112 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:44 crc kubenswrapper[4826]: I0131 07:37:44.941124 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:44 crc kubenswrapper[4826]: I0131 07:37:44.941141 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:44 crc kubenswrapper[4826]: I0131 07:37:44.941154 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:44Z","lastTransitionTime":"2026-01-31T07:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:45 crc kubenswrapper[4826]: I0131 07:37:45.044236 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:45 crc kubenswrapper[4826]: I0131 07:37:45.044292 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:45 crc kubenswrapper[4826]: I0131 07:37:45.044309 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:45 crc kubenswrapper[4826]: I0131 07:37:45.044333 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:45 crc kubenswrapper[4826]: I0131 07:37:45.044350 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:45Z","lastTransitionTime":"2026-01-31T07:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:45 crc kubenswrapper[4826]: I0131 07:37:45.147236 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:45 crc kubenswrapper[4826]: I0131 07:37:45.147281 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:45 crc kubenswrapper[4826]: I0131 07:37:45.147293 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:45 crc kubenswrapper[4826]: I0131 07:37:45.147310 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:45 crc kubenswrapper[4826]: I0131 07:37:45.147322 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:45Z","lastTransitionTime":"2026-01-31T07:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:45 crc kubenswrapper[4826]: I0131 07:37:45.249712 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:45 crc kubenswrapper[4826]: I0131 07:37:45.249758 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:45 crc kubenswrapper[4826]: I0131 07:37:45.249773 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:45 crc kubenswrapper[4826]: I0131 07:37:45.249793 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:45 crc kubenswrapper[4826]: I0131 07:37:45.249807 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:45Z","lastTransitionTime":"2026-01-31T07:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:45 crc kubenswrapper[4826]: I0131 07:37:45.352583 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:45 crc kubenswrapper[4826]: I0131 07:37:45.352664 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:45 crc kubenswrapper[4826]: I0131 07:37:45.352691 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:45 crc kubenswrapper[4826]: I0131 07:37:45.352722 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:45 crc kubenswrapper[4826]: I0131 07:37:45.352742 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:45Z","lastTransitionTime":"2026-01-31T07:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:45 crc kubenswrapper[4826]: I0131 07:37:45.455462 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:45 crc kubenswrapper[4826]: I0131 07:37:45.455519 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:45 crc kubenswrapper[4826]: I0131 07:37:45.455537 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:45 crc kubenswrapper[4826]: I0131 07:37:45.455566 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:45 crc kubenswrapper[4826]: I0131 07:37:45.455586 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:45Z","lastTransitionTime":"2026-01-31T07:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:45 crc kubenswrapper[4826]: I0131 07:37:45.558230 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:45 crc kubenswrapper[4826]: I0131 07:37:45.558274 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:45 crc kubenswrapper[4826]: I0131 07:37:45.558288 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:45 crc kubenswrapper[4826]: I0131 07:37:45.558304 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:45 crc kubenswrapper[4826]: I0131 07:37:45.558316 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:45Z","lastTransitionTime":"2026-01-31T07:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:45 crc kubenswrapper[4826]: I0131 07:37:45.660749 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:45 crc kubenswrapper[4826]: I0131 07:37:45.660802 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:45 crc kubenswrapper[4826]: I0131 07:37:45.660820 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:45 crc kubenswrapper[4826]: I0131 07:37:45.660842 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:45 crc kubenswrapper[4826]: I0131 07:37:45.660862 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:45Z","lastTransitionTime":"2026-01-31T07:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:45 crc kubenswrapper[4826]: I0131 07:37:45.764050 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:45 crc kubenswrapper[4826]: I0131 07:37:45.764123 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:45 crc kubenswrapper[4826]: I0131 07:37:45.764147 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:45 crc kubenswrapper[4826]: I0131 07:37:45.764177 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:45 crc kubenswrapper[4826]: I0131 07:37:45.764198 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:45Z","lastTransitionTime":"2026-01-31T07:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:45 crc kubenswrapper[4826]: I0131 07:37:45.808646 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:37:45 crc kubenswrapper[4826]: I0131 07:37:45.808699 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:37:45 crc kubenswrapper[4826]: E0131 07:37:45.808820 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qrw7j" podUID="251ad51e-c383-4684-bfdb-2b9ce8098cc6" Jan 31 07:37:45 crc kubenswrapper[4826]: E0131 07:37:45.808897 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:37:45 crc kubenswrapper[4826]: I0131 07:37:45.817006 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 09:04:34.835151002 +0000 UTC Jan 31 07:37:45 crc kubenswrapper[4826]: I0131 07:37:45.866551 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:45 crc kubenswrapper[4826]: I0131 07:37:45.866598 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:45 crc kubenswrapper[4826]: I0131 07:37:45.866607 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:45 crc kubenswrapper[4826]: I0131 07:37:45.866644 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:45 crc kubenswrapper[4826]: I0131 07:37:45.866656 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:45Z","lastTransitionTime":"2026-01-31T07:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:45 crc kubenswrapper[4826]: I0131 07:37:45.969853 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:45 crc kubenswrapper[4826]: I0131 07:37:45.969929 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:45 crc kubenswrapper[4826]: I0131 07:37:45.969948 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:45 crc kubenswrapper[4826]: I0131 07:37:45.970011 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:45 crc kubenswrapper[4826]: I0131 07:37:45.970034 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:45Z","lastTransitionTime":"2026-01-31T07:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.073172 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.073242 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.073266 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.073295 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.073318 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:46Z","lastTransitionTime":"2026-01-31T07:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.175809 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.175874 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.175897 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.175926 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.175946 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:46Z","lastTransitionTime":"2026-01-31T07:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.279416 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.279491 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.279509 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.279531 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.279549 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:46Z","lastTransitionTime":"2026-01-31T07:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.382188 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.382259 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.382342 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.382419 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.382447 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:46Z","lastTransitionTime":"2026-01-31T07:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.485348 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.485442 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.485459 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.485482 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.485501 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:46Z","lastTransitionTime":"2026-01-31T07:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.588043 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.588081 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.588091 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.588104 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.588113 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:46Z","lastTransitionTime":"2026-01-31T07:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.690738 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.690807 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.690832 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.690862 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.690884 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:46Z","lastTransitionTime":"2026-01-31T07:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.793618 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.793778 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.793818 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.793852 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.793874 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:46Z","lastTransitionTime":"2026-01-31T07:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.808573 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.808612 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:37:46 crc kubenswrapper[4826]: E0131 07:37:46.808778 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:37:46 crc kubenswrapper[4826]: E0131 07:37:46.809019 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.817146 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 21:35:43.46700886 +0000 UTC Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.896466 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.896524 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.896541 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.896567 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.896585 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:46Z","lastTransitionTime":"2026-01-31T07:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.942485 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.942560 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.942586 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.942615 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.942635 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:46Z","lastTransitionTime":"2026-01-31T07:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:46 crc kubenswrapper[4826]: E0131 07:37:46.963669 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"410cfc83-ff74-4210-b833-727c4d6db644\\\",\\\"systemUUID\\\":\\\"5dc4a50b-5ade-4352-ba95-1ca9483f1f64\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:46Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.969517 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.969596 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.969623 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.969653 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.969677 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:46Z","lastTransitionTime":"2026-01-31T07:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:46 crc kubenswrapper[4826]: E0131 07:37:46.987821 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"410cfc83-ff74-4210-b833-727c4d6db644\\\",\\\"systemUUID\\\":\\\"5dc4a50b-5ade-4352-ba95-1ca9483f1f64\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:46Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.992823 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.992913 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.992929 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.992951 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:46 crc kubenswrapper[4826]: I0131 07:37:46.992994 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:46Z","lastTransitionTime":"2026-01-31T07:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:47 crc kubenswrapper[4826]: E0131 07:37:47.008316 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"410cfc83-ff74-4210-b833-727c4d6db644\\\",\\\"systemUUID\\\":\\\"5dc4a50b-5ade-4352-ba95-1ca9483f1f64\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:47Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.012221 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.012259 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.012269 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.012286 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.012298 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:47Z","lastTransitionTime":"2026-01-31T07:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:47 crc kubenswrapper[4826]: E0131 07:37:47.027206 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"410cfc83-ff74-4210-b833-727c4d6db644\\\",\\\"systemUUID\\\":\\\"5dc4a50b-5ade-4352-ba95-1ca9483f1f64\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:47Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.032494 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.032574 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.032599 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.032635 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.032663 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:47Z","lastTransitionTime":"2026-01-31T07:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:47 crc kubenswrapper[4826]: E0131 07:37:47.047582 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"410cfc83-ff74-4210-b833-727c4d6db644\\\",\\\"systemUUID\\\":\\\"5dc4a50b-5ade-4352-ba95-1ca9483f1f64\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:47Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:47 crc kubenswrapper[4826]: E0131 07:37:47.047749 4826 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.049905 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.049942 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.049956 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.050007 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.050022 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:47Z","lastTransitionTime":"2026-01-31T07:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.153372 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.153447 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.153466 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.153493 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.153510 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:47Z","lastTransitionTime":"2026-01-31T07:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.256142 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.256198 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.256214 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.256234 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.256250 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:47Z","lastTransitionTime":"2026-01-31T07:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.358926 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.359032 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.359060 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.359132 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.359153 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:47Z","lastTransitionTime":"2026-01-31T07:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.462039 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.462106 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.462124 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.462150 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.462167 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:47Z","lastTransitionTime":"2026-01-31T07:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.565439 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.565519 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.565616 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.565651 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.565671 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:47Z","lastTransitionTime":"2026-01-31T07:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.668395 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.668456 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.668474 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.668499 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.668517 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:47Z","lastTransitionTime":"2026-01-31T07:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.772191 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.772244 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.772262 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.772285 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.772302 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:47Z","lastTransitionTime":"2026-01-31T07:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.808619 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:37:47 crc kubenswrapper[4826]: E0131 07:37:47.808820 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrw7j" podUID="251ad51e-c383-4684-bfdb-2b9ce8098cc6" Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.809440 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:37:47 crc kubenswrapper[4826]: E0131 07:37:47.809645 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.817867 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 07:36:22.264296404 +0000 UTC Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.875410 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.875470 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.875482 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.875500 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.875511 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:47Z","lastTransitionTime":"2026-01-31T07:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.977953 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.978004 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.978013 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.978026 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:47 crc kubenswrapper[4826]: I0131 07:37:47.978037 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:47Z","lastTransitionTime":"2026-01-31T07:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.014723 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/251ad51e-c383-4684-bfdb-2b9ce8098cc6-metrics-certs\") pod \"network-metrics-daemon-qrw7j\" (UID: \"251ad51e-c383-4684-bfdb-2b9ce8098cc6\") " pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:37:48 crc kubenswrapper[4826]: E0131 07:37:48.014964 4826 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 07:37:48 crc kubenswrapper[4826]: E0131 07:37:48.015134 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/251ad51e-c383-4684-bfdb-2b9ce8098cc6-metrics-certs podName:251ad51e-c383-4684-bfdb-2b9ce8098cc6 nodeName:}" failed. No retries permitted until 2026-01-31 07:38:52.01510488 +0000 UTC m=+163.868991279 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/251ad51e-c383-4684-bfdb-2b9ce8098cc6-metrics-certs") pod "network-metrics-daemon-qrw7j" (UID: "251ad51e-c383-4684-bfdb-2b9ce8098cc6") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.081608 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.081669 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.081701 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.081727 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.081746 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:48Z","lastTransitionTime":"2026-01-31T07:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.184511 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.184579 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.184602 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.184635 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.184659 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:48Z","lastTransitionTime":"2026-01-31T07:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.287619 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.287686 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.287703 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.287727 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.287749 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:48Z","lastTransitionTime":"2026-01-31T07:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.390736 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.390819 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.390842 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.390884 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.390907 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:48Z","lastTransitionTime":"2026-01-31T07:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.493340 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.493405 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.493425 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.493448 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.493467 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:48Z","lastTransitionTime":"2026-01-31T07:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.595990 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.596048 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.596060 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.596080 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.596094 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:48Z","lastTransitionTime":"2026-01-31T07:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.698284 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.698345 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.698357 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.698378 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.698391 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:48Z","lastTransitionTime":"2026-01-31T07:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.801136 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.801197 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.801216 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.801242 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.801259 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:48Z","lastTransitionTime":"2026-01-31T07:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.808625 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.809092 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:37:48 crc kubenswrapper[4826]: E0131 07:37:48.809261 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:37:48 crc kubenswrapper[4826]: E0131 07:37:48.809383 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.809674 4826 scope.go:117] "RemoveContainer" containerID="d893aaee284ab112e56f303b51894196d333364ced205c5f2b1d55f1c21b5f33" Jan 31 07:37:48 crc kubenswrapper[4826]: E0131 07:37:48.809924 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qvwnb_openshift-ovn-kubernetes(5db04412-b62f-417c-91ac-776767d6102f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" podUID="5db04412-b62f-417c-91ac-776767d6102f" Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.818684 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 07:20:11.138940864 +0000 UTC Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.825495 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37289ae623d633b8d9c41697e0022bac0ef40df6da836d43df3bb36e83a7a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:48Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.839273 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d1814f450832909bd5bf1c4749feef46b0919e997cb1d241c988f4ec81051d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e440dc1f7846af93bbe78f17e92d44d2f72ee0fc28c4c3b746ee3f67f0cd899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:48Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.856143 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed10f53b-565a-4d14-a1d8-feabc15f08ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d04378db699fb51e197d67da8fbc728904d522724c2a02158c546497905d106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03515f366b860c28601b042a9a45c022c9a25aa15fab97b3e3698d3d7c1eb37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27tlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8v6ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:48Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.871868 4826 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-wtbb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b672fd90-a70c-4f27-b711-e58f269efccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f290a2cfcec4d14d2b2c21f7c56558675b8b7171bf9d1895d89f632cb35b815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dc6a9e1063a189fc3331ef1bc5bd05ac611295089d1da241ca07c529e63ab7e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:37:17Z\\\",\\\"message\\\":\\\"2026-01-31T07:36:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c53c4ff3-1503-4424-b047-c864bb103fcc\\\\n2026-01-31T07:36:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c53c4ff3-1503-4424-b047-c864bb103fcc to /host/opt/cni/bin/\\\\n2026-01-31T07:36:32Z [verbose] multus-daemon started\\\\n2026-01-31T07:36:32Z [verbose] Readiness Indicator file check\\\\n2026-01-31T07:37:17Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wtbb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:48Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.890751 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0580356-3fa0-4446-b20b-afc4164435c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e817cf54f4d9940c02491ebbe29ded4c0e8b3832c315b04eba99a9d36d267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b99d4cba5d569a977b7d05a7190498a4e34fefbf7dc44db8148ba398de0f5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://adf31702fe035db3a963262acd0c20802f8fd7230cf5c01cb578fa88179bd129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://161a31c4d5a9b58bef37d7177019e6eec0b53c9
56877c212337834672f4ac5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1796ef962408ccd32578b856d0339f68286f421ffbb915e0ae46c7a5989d1958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c61fec63f5addbf0dc12061e902690061810742fc2b0155e5cb65bf6543388b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08905802d5ea6db4c70bfaba57a35ece633528db95e9b463d5b4901be7071a36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58eff42e5867a805ab4a08fd36cfeae9aee2355f82e8c56355319e958253660\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:48Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.904326 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.904381 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.904392 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.904409 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.904419 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:48Z","lastTransitionTime":"2026-01-31T07:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.907583 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"661bc240-bc34-44a0-bd2f-72708c4a5dc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc26e2f01400aeeb0956ae2aef52df0251350f193805892e534081b4cac248e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa95fe2bb0adc0053648f26282e011bad0ea1327773688c954b8ffbf222d6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4124eac7bdc757ef7c1f68b28fb1f54a56cda06e9b781a026277efe04172f8e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e986869735ddfd85e88828d0123e35bbc507c1d31efba8c420632539ff4a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:48Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.923485 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d2b49940601c069c38a05027bd5e801dd377d4aed46a502c192ca9941468a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:48Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.939037 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5fm7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e6f9c24-a18c-49d3-9d30-c6c0af2e7a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baebdb6a236b3414ff5aac56d2f2c04503b7d59c18aa78ef295484087a866b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c632689f0b6d6395406d8257f0700864af3f247bf73af9c165acd06e660409db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58e9dacc744c0fca0149f482b0d74687641c77561831c69ddb4dd8eef33a9fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85bd61fe7138b6ef1ced8d0131580f28ba4b7876fbd31edb630d9db008ec697f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\
\\":\\\"cri-o://aaf105a2d337bc7912487fc6e2e0ebfc20e74a55d9c1832fe9f77ea4ad4cbce7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2dd19b04a968d653572a2a7ec5e05a919977f4659758eaf89ac1305a409476e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf9c0eb924928bb2a7752b55e6ccb0fccc514359751bdf295f4f88820ac0bddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf9c0eb924928bb2a7752b55e6ccb0fccc514359751bdf295f4f88820ac0bddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpws5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5fm7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:48Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.952391 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qrw7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"251ad51e-c383-4684-bfdb-2b9ce8098cc6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv8tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv8tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qrw7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:48Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.963059 4826 
status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4a77305-8126-4747-b0fb-6ac1e27be524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6744ae097d36ec0c3998da84dd5d0b9a274604c91b97d324f317381c9ba7f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db00cb3b7ea8c50c1725ebdf0d9eab1e7e1f96fbfe471ec206e102e74cefd82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db00cb3b7ea8c50c1725ebdf0d9eab1e7e1f96fbfe471ec206e102e74cefd82f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:48Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.974200 4826 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:48Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.986331 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7xbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32b69655-b895-4001-8f50-17a9d73056f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6e04fe556789c17beb1c110cda356f63730016f0d8f00d968ec9af1a10f658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6xcl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15eef89d03854f4fb45924748ccf1c12133be03efd45374daa2cceafd673136a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6xcl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h7xbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:48Z is after 2025-08-24T17:21:41Z" Jan 31 
07:37:48 crc kubenswrapper[4826]: I0131 07:37:48.998320 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63a72bdc-ae2c-4ce3-bad4-877f01e2b370\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b561e0c482987fdc1c08682d00bf4c933f12313569408446d6c8a603c3556ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3caf57a9650c133c55f50d69726a83d40040bc7f173d536444d89f99854942ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30b86aff0d4af7948ead36a9ed2f4fa0fb5888de8ecdfa6eb6ffc1867030babe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf2c3303222b7cdaf3b5aa2c1847ff9e097579edeb4f7333079c340cc95ef46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975a5a5c09f940e9a57b6c6df02fdf725740fb21d70046ad5bbe3f24dec7a0bd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"iserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 07:36:22.700868 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 07:36:22.705249 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2348209780/tls.crt::/tmp/serving-cert-2348209780/tls.key\\\\\\\"\\\\nI0131 07:36:28.522043 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 07:36:28.528635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 07:36:28.528685 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 07:36:28.528725 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 07:36:28.528736 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 07:36:28.539744 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 07:36:28.539766 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539773 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 07:36:28.539779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 07:36:28.539783 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 07:36:28.539787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 07:36:28.539791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 07:36:28.539944 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 07:36:28.545260 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0131 07:36:28.545358 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd3634cc3552a315ee1ef8114db096d456b7a97a92e68ab406ceccdf1eb57f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:48Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.007410 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.007449 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.007460 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.007475 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.007486 4826 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:49Z","lastTransitionTime":"2026-01-31T07:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.009193 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0a4082a-cdda-4613-8e8d-bd97c3820759\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edb15aeef1103cdf26c3e72bac394815e4ea864cccc8732f40a601af2fcd34b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d04dbf9901e3bd43ac93338643a45135bb89f2d03a5a3d1236289aee90a6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a09b3371dd08b059fbfd5a105f314ae92bdcc4804cd36d0709b4afae48842b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b6360543a81dea5540a5f02edc908d3b76daf5c4df5f8220215fb75f9504c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b6360543a81dea5540a5f02edc908d3b76daf5c4df5f8220215fb75f9504c5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:49Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.021846 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:49Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.033893 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:49Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.044041 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tp2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d1fbcfd-bce9-4f1c-bc64-d48b979a95d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91dfef0e05500b5bc7d60a1905667d2ddf67a56af60dc72a55d1b29223d6586a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5x55d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tp2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T07:37:49Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.064221 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5db04412-b62f-417c-91ac-776767d6102f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d893aaee284ab112e56f303b51894196d333364ced205c5f2b1d55f1c21b5f33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d893aaee284ab112e56f303b51894196d333364ced205c5f2b1d55f1c21b5f33\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T07:37:31Z\\\",\\\"message\\\":\\\"ace.go:16] APB queuing policies: map[] for namespace: openshift-apiserver\\\\nI0131 07:37:31.704647 6894 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-cluster-storage-operator\\\\nI0131 07:37:31.704652 6894 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-etcd\\\\nI0131 07:37:31.704658 6894 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-kni-infra\\\\nI0131 07:37:31.703999 6894 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type *factory.egressIPNamespace\\\\nI0131 07:37:31.704695 6894 nad_controller.go:166] [zone-nad-controller NAD controller]: shutting down\\\\nI0131 07:37:31.704485 6894 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 07:37:31.705056 6894 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 07:37:31.705120 6894 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 07:37:31.705086 6894 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 07:37:31.705142 6894 factory.go:656] Stopping watch factory\\\\nI0131 07:37:31.705172 6894 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 07:37:31.705198 6894 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T07:37:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qvwnb_openshift-ovn-kubernetes(5db04412-b62f-417c-91ac-776767d6102f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T07:36:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T07:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvlrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvwnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:49Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.075578 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6lwnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b53fa37-2f8b-49d4-bd96-2bfe008beba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T07:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4800822b9eb7af18af4eaf5699b582b98e900e6a645e17ec2a5a09757777ad75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T07:36:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm9x7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T07:36:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6lwnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:49Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.109578 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.109629 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.109641 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.109659 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.109671 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:49Z","lastTransitionTime":"2026-01-31T07:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.213180 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.213241 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.213256 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.213282 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.213298 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:49Z","lastTransitionTime":"2026-01-31T07:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.316214 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.316261 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.316273 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.316289 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.316300 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:49Z","lastTransitionTime":"2026-01-31T07:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.418161 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.418225 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.418242 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.418265 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.418283 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:49Z","lastTransitionTime":"2026-01-31T07:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.521842 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.521915 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.521939 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.522262 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.522304 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:49Z","lastTransitionTime":"2026-01-31T07:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.625482 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.625612 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.625638 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.625670 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.625693 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:49Z","lastTransitionTime":"2026-01-31T07:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.728453 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.728529 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.728547 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.728573 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.728591 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:49Z","lastTransitionTime":"2026-01-31T07:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.808603 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.808666 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:37:49 crc kubenswrapper[4826]: E0131 07:37:49.808778 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qrw7j" podUID="251ad51e-c383-4684-bfdb-2b9ce8098cc6" Jan 31 07:37:49 crc kubenswrapper[4826]: E0131 07:37:49.808920 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.818742 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 15:37:15.632073388 +0000 UTC Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.830504 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.830557 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.830573 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.830590 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.830602 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:49Z","lastTransitionTime":"2026-01-31T07:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.933053 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.933123 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.933147 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.933176 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:49 crc kubenswrapper[4826]: I0131 07:37:49.933197 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:49Z","lastTransitionTime":"2026-01-31T07:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:50 crc kubenswrapper[4826]: I0131 07:37:50.035687 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:50 crc kubenswrapper[4826]: I0131 07:37:50.035753 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:50 crc kubenswrapper[4826]: I0131 07:37:50.035774 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:50 crc kubenswrapper[4826]: I0131 07:37:50.035799 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:50 crc kubenswrapper[4826]: I0131 07:37:50.035815 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:50Z","lastTransitionTime":"2026-01-31T07:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:50 crc kubenswrapper[4826]: I0131 07:37:50.138467 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:50 crc kubenswrapper[4826]: I0131 07:37:50.138533 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:50 crc kubenswrapper[4826]: I0131 07:37:50.138549 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:50 crc kubenswrapper[4826]: I0131 07:37:50.138570 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:50 crc kubenswrapper[4826]: I0131 07:37:50.138584 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:50Z","lastTransitionTime":"2026-01-31T07:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:50 crc kubenswrapper[4826]: I0131 07:37:50.240901 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:50 crc kubenswrapper[4826]: I0131 07:37:50.240943 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:50 crc kubenswrapper[4826]: I0131 07:37:50.240955 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:50 crc kubenswrapper[4826]: I0131 07:37:50.240983 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:50 crc kubenswrapper[4826]: I0131 07:37:50.240994 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:50Z","lastTransitionTime":"2026-01-31T07:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:50 crc kubenswrapper[4826]: I0131 07:37:50.344373 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:50 crc kubenswrapper[4826]: I0131 07:37:50.344492 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:50 crc kubenswrapper[4826]: I0131 07:37:50.344510 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:50 crc kubenswrapper[4826]: I0131 07:37:50.344534 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:50 crc kubenswrapper[4826]: I0131 07:37:50.344551 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:50Z","lastTransitionTime":"2026-01-31T07:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:50 crc kubenswrapper[4826]: I0131 07:37:50.447004 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:50 crc kubenswrapper[4826]: I0131 07:37:50.447074 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:50 crc kubenswrapper[4826]: I0131 07:37:50.447100 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:50 crc kubenswrapper[4826]: I0131 07:37:50.447133 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:50 crc kubenswrapper[4826]: I0131 07:37:50.447154 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:50Z","lastTransitionTime":"2026-01-31T07:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:50 crc kubenswrapper[4826]: I0131 07:37:50.549685 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:50 crc kubenswrapper[4826]: I0131 07:37:50.549762 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:50 crc kubenswrapper[4826]: I0131 07:37:50.549780 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:50 crc kubenswrapper[4826]: I0131 07:37:50.549805 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:50 crc kubenswrapper[4826]: I0131 07:37:50.549828 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:50Z","lastTransitionTime":"2026-01-31T07:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:50 crc kubenswrapper[4826]: I0131 07:37:50.652344 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:50 crc kubenswrapper[4826]: I0131 07:37:50.652385 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:50 crc kubenswrapper[4826]: I0131 07:37:50.652398 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:50 crc kubenswrapper[4826]: I0131 07:37:50.652414 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:50 crc kubenswrapper[4826]: I0131 07:37:50.652427 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:50Z","lastTransitionTime":"2026-01-31T07:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:50 crc kubenswrapper[4826]: I0131 07:37:50.755015 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:50 crc kubenswrapper[4826]: I0131 07:37:50.755060 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:50 crc kubenswrapper[4826]: I0131 07:37:50.755074 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:50 crc kubenswrapper[4826]: I0131 07:37:50.755093 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:50 crc kubenswrapper[4826]: I0131 07:37:50.755108 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:50Z","lastTransitionTime":"2026-01-31T07:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:50 crc kubenswrapper[4826]: I0131 07:37:50.809539 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:37:50 crc kubenswrapper[4826]: I0131 07:37:50.809618 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:37:50 crc kubenswrapper[4826]: E0131 07:37:50.809770 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:37:50 crc kubenswrapper[4826]: E0131 07:37:50.809933 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:37:50 crc kubenswrapper[4826]: I0131 07:37:50.819421 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 08:52:43.779111368 +0000 UTC Jan 31 07:37:50 crc kubenswrapper[4826]: I0131 07:37:50.878686 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:50 crc kubenswrapper[4826]: I0131 07:37:50.878755 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:50 crc kubenswrapper[4826]: I0131 07:37:50.878783 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:50 crc kubenswrapper[4826]: I0131 07:37:50.878814 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:50 crc kubenswrapper[4826]: I0131 07:37:50.878837 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:50Z","lastTransitionTime":"2026-01-31T07:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:50 crc kubenswrapper[4826]: I0131 07:37:50.982951 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:50 crc kubenswrapper[4826]: I0131 07:37:50.983039 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:50 crc kubenswrapper[4826]: I0131 07:37:50.983054 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:50 crc kubenswrapper[4826]: I0131 07:37:50.983077 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:50 crc kubenswrapper[4826]: I0131 07:37:50.983093 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:50Z","lastTransitionTime":"2026-01-31T07:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:51 crc kubenswrapper[4826]: I0131 07:37:51.086380 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:51 crc kubenswrapper[4826]: I0131 07:37:51.086437 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:51 crc kubenswrapper[4826]: I0131 07:37:51.086460 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:51 crc kubenswrapper[4826]: I0131 07:37:51.086491 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:51 crc kubenswrapper[4826]: I0131 07:37:51.086514 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:51Z","lastTransitionTime":"2026-01-31T07:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:51 crc kubenswrapper[4826]: I0131 07:37:51.189439 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:51 crc kubenswrapper[4826]: I0131 07:37:51.189491 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:51 crc kubenswrapper[4826]: I0131 07:37:51.189509 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:51 crc kubenswrapper[4826]: I0131 07:37:51.189535 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:51 crc kubenswrapper[4826]: I0131 07:37:51.189554 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:51Z","lastTransitionTime":"2026-01-31T07:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:51 crc kubenswrapper[4826]: I0131 07:37:51.292376 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:51 crc kubenswrapper[4826]: I0131 07:37:51.292474 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:51 crc kubenswrapper[4826]: I0131 07:37:51.292537 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:51 crc kubenswrapper[4826]: I0131 07:37:51.292565 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:51 crc kubenswrapper[4826]: I0131 07:37:51.292630 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:51Z","lastTransitionTime":"2026-01-31T07:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:51 crc kubenswrapper[4826]: I0131 07:37:51.395295 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:51 crc kubenswrapper[4826]: I0131 07:37:51.395370 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:51 crc kubenswrapper[4826]: I0131 07:37:51.395396 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:51 crc kubenswrapper[4826]: I0131 07:37:51.395424 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:51 crc kubenswrapper[4826]: I0131 07:37:51.395445 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:51Z","lastTransitionTime":"2026-01-31T07:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:51 crc kubenswrapper[4826]: I0131 07:37:51.499201 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:51 crc kubenswrapper[4826]: I0131 07:37:51.499353 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:51 crc kubenswrapper[4826]: I0131 07:37:51.499375 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:51 crc kubenswrapper[4826]: I0131 07:37:51.499403 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:51 crc kubenswrapper[4826]: I0131 07:37:51.499456 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:51Z","lastTransitionTime":"2026-01-31T07:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:51 crc kubenswrapper[4826]: I0131 07:37:51.603306 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:51 crc kubenswrapper[4826]: I0131 07:37:51.603384 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:51 crc kubenswrapper[4826]: I0131 07:37:51.603403 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:51 crc kubenswrapper[4826]: I0131 07:37:51.603432 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:51 crc kubenswrapper[4826]: I0131 07:37:51.603455 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:51Z","lastTransitionTime":"2026-01-31T07:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:51 crc kubenswrapper[4826]: I0131 07:37:51.706416 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:51 crc kubenswrapper[4826]: I0131 07:37:51.706478 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:51 crc kubenswrapper[4826]: I0131 07:37:51.706499 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:51 crc kubenswrapper[4826]: I0131 07:37:51.706528 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:51 crc kubenswrapper[4826]: I0131 07:37:51.706549 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:51Z","lastTransitionTime":"2026-01-31T07:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:51 crc kubenswrapper[4826]: I0131 07:37:51.808086 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:37:51 crc kubenswrapper[4826]: I0131 07:37:51.808107 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:37:51 crc kubenswrapper[4826]: E0131 07:37:51.808252 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:37:51 crc kubenswrapper[4826]: E0131 07:37:51.808459 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qrw7j" podUID="251ad51e-c383-4684-bfdb-2b9ce8098cc6" Jan 31 07:37:51 crc kubenswrapper[4826]: I0131 07:37:51.810621 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:51 crc kubenswrapper[4826]: I0131 07:37:51.810686 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:51 crc kubenswrapper[4826]: I0131 07:37:51.810709 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:51 crc kubenswrapper[4826]: I0131 07:37:51.810745 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:51 crc kubenswrapper[4826]: I0131 07:37:51.810767 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:51Z","lastTransitionTime":"2026-01-31T07:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:51 crc kubenswrapper[4826]: I0131 07:37:51.820170 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 11:45:23.321546796 +0000 UTC Jan 31 07:37:51 crc kubenswrapper[4826]: I0131 07:37:51.913881 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:51 crc kubenswrapper[4826]: I0131 07:37:51.913942 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:51 crc kubenswrapper[4826]: I0131 07:37:51.913958 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:51 crc kubenswrapper[4826]: I0131 07:37:51.914021 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:51 crc kubenswrapper[4826]: I0131 07:37:51.914040 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:51Z","lastTransitionTime":"2026-01-31T07:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:52 crc kubenswrapper[4826]: I0131 07:37:52.018046 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:52 crc kubenswrapper[4826]: I0131 07:37:52.018131 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:52 crc kubenswrapper[4826]: I0131 07:37:52.018144 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:52 crc kubenswrapper[4826]: I0131 07:37:52.018169 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:52 crc kubenswrapper[4826]: I0131 07:37:52.018184 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:52Z","lastTransitionTime":"2026-01-31T07:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:52 crc kubenswrapper[4826]: I0131 07:37:52.121229 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:52 crc kubenswrapper[4826]: I0131 07:37:52.121290 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:52 crc kubenswrapper[4826]: I0131 07:37:52.121306 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:52 crc kubenswrapper[4826]: I0131 07:37:52.121330 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:52 crc kubenswrapper[4826]: I0131 07:37:52.121350 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:52Z","lastTransitionTime":"2026-01-31T07:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:52 crc kubenswrapper[4826]: I0131 07:37:52.224643 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:52 crc kubenswrapper[4826]: I0131 07:37:52.224720 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:52 crc kubenswrapper[4826]: I0131 07:37:52.224740 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:52 crc kubenswrapper[4826]: I0131 07:37:52.224764 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:52 crc kubenswrapper[4826]: I0131 07:37:52.224781 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:52Z","lastTransitionTime":"2026-01-31T07:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:52 crc kubenswrapper[4826]: I0131 07:37:52.328260 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:52 crc kubenswrapper[4826]: I0131 07:37:52.328335 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:52 crc kubenswrapper[4826]: I0131 07:37:52.328351 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:52 crc kubenswrapper[4826]: I0131 07:37:52.328377 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:52 crc kubenswrapper[4826]: I0131 07:37:52.328394 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:52Z","lastTransitionTime":"2026-01-31T07:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:52 crc kubenswrapper[4826]: I0131 07:37:52.431555 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:52 crc kubenswrapper[4826]: I0131 07:37:52.431648 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:52 crc kubenswrapper[4826]: I0131 07:37:52.431672 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:52 crc kubenswrapper[4826]: I0131 07:37:52.431702 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:52 crc kubenswrapper[4826]: I0131 07:37:52.431723 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:52Z","lastTransitionTime":"2026-01-31T07:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:52 crc kubenswrapper[4826]: I0131 07:37:52.534984 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:52 crc kubenswrapper[4826]: I0131 07:37:52.535030 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:52 crc kubenswrapper[4826]: I0131 07:37:52.535041 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:52 crc kubenswrapper[4826]: I0131 07:37:52.535058 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:52 crc kubenswrapper[4826]: I0131 07:37:52.535070 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:52Z","lastTransitionTime":"2026-01-31T07:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:52 crc kubenswrapper[4826]: I0131 07:37:52.637806 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:52 crc kubenswrapper[4826]: I0131 07:37:52.637879 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:52 crc kubenswrapper[4826]: I0131 07:37:52.637902 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:52 crc kubenswrapper[4826]: I0131 07:37:52.637931 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:52 crc kubenswrapper[4826]: I0131 07:37:52.637956 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:52Z","lastTransitionTime":"2026-01-31T07:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:52 crc kubenswrapper[4826]: I0131 07:37:52.741499 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:52 crc kubenswrapper[4826]: I0131 07:37:52.741531 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:52 crc kubenswrapper[4826]: I0131 07:37:52.741539 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:52 crc kubenswrapper[4826]: I0131 07:37:52.741553 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:52 crc kubenswrapper[4826]: I0131 07:37:52.741578 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:52Z","lastTransitionTime":"2026-01-31T07:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:52 crc kubenswrapper[4826]: I0131 07:37:52.808449 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:37:52 crc kubenswrapper[4826]: I0131 07:37:52.808537 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:37:52 crc kubenswrapper[4826]: E0131 07:37:52.808612 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:37:52 crc kubenswrapper[4826]: E0131 07:37:52.808738 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:37:52 crc kubenswrapper[4826]: I0131 07:37:52.820264 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 08:09:34.116097316 +0000 UTC Jan 31 07:37:52 crc kubenswrapper[4826]: I0131 07:37:52.844786 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:52 crc kubenswrapper[4826]: I0131 07:37:52.844851 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:52 crc kubenswrapper[4826]: I0131 07:37:52.844873 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:52 crc kubenswrapper[4826]: I0131 07:37:52.844899 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:52 crc kubenswrapper[4826]: I0131 07:37:52.844919 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:52Z","lastTransitionTime":"2026-01-31T07:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:52 crc kubenswrapper[4826]: I0131 07:37:52.948535 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:52 crc kubenswrapper[4826]: I0131 07:37:52.948590 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:52 crc kubenswrapper[4826]: I0131 07:37:52.948611 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:52 crc kubenswrapper[4826]: I0131 07:37:52.948632 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:52 crc kubenswrapper[4826]: I0131 07:37:52.948650 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:52Z","lastTransitionTime":"2026-01-31T07:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:53 crc kubenswrapper[4826]: I0131 07:37:53.051117 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:53 crc kubenswrapper[4826]: I0131 07:37:53.051187 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:53 crc kubenswrapper[4826]: I0131 07:37:53.051209 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:53 crc kubenswrapper[4826]: I0131 07:37:53.051238 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:53 crc kubenswrapper[4826]: I0131 07:37:53.051260 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:53Z","lastTransitionTime":"2026-01-31T07:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:53 crc kubenswrapper[4826]: I0131 07:37:53.153649 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:53 crc kubenswrapper[4826]: I0131 07:37:53.153698 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:53 crc kubenswrapper[4826]: I0131 07:37:53.153710 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:53 crc kubenswrapper[4826]: I0131 07:37:53.153727 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:53 crc kubenswrapper[4826]: I0131 07:37:53.153741 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:53Z","lastTransitionTime":"2026-01-31T07:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:53 crc kubenswrapper[4826]: I0131 07:37:53.256495 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:53 crc kubenswrapper[4826]: I0131 07:37:53.256529 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:53 crc kubenswrapper[4826]: I0131 07:37:53.256537 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:53 crc kubenswrapper[4826]: I0131 07:37:53.256551 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:53 crc kubenswrapper[4826]: I0131 07:37:53.256560 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:53Z","lastTransitionTime":"2026-01-31T07:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:53 crc kubenswrapper[4826]: I0131 07:37:53.359754 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:53 crc kubenswrapper[4826]: I0131 07:37:53.359796 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:53 crc kubenswrapper[4826]: I0131 07:37:53.359804 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:53 crc kubenswrapper[4826]: I0131 07:37:53.359819 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:53 crc kubenswrapper[4826]: I0131 07:37:53.359829 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:53Z","lastTransitionTime":"2026-01-31T07:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:53 crc kubenswrapper[4826]: I0131 07:37:53.462127 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:53 crc kubenswrapper[4826]: I0131 07:37:53.462255 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:53 crc kubenswrapper[4826]: I0131 07:37:53.462280 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:53 crc kubenswrapper[4826]: I0131 07:37:53.462311 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:53 crc kubenswrapper[4826]: I0131 07:37:53.462331 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:53Z","lastTransitionTime":"2026-01-31T07:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:53 crc kubenswrapper[4826]: I0131 07:37:53.564948 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:53 crc kubenswrapper[4826]: I0131 07:37:53.565024 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:53 crc kubenswrapper[4826]: I0131 07:37:53.565034 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:53 crc kubenswrapper[4826]: I0131 07:37:53.565052 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:53 crc kubenswrapper[4826]: I0131 07:37:53.565065 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:53Z","lastTransitionTime":"2026-01-31T07:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:53 crc kubenswrapper[4826]: I0131 07:37:53.667030 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:53 crc kubenswrapper[4826]: I0131 07:37:53.667089 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:53 crc kubenswrapper[4826]: I0131 07:37:53.667107 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:53 crc kubenswrapper[4826]: I0131 07:37:53.667134 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:53 crc kubenswrapper[4826]: I0131 07:37:53.667157 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:53Z","lastTransitionTime":"2026-01-31T07:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:53 crc kubenswrapper[4826]: I0131 07:37:53.769434 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:53 crc kubenswrapper[4826]: I0131 07:37:53.769475 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:53 crc kubenswrapper[4826]: I0131 07:37:53.769484 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:53 crc kubenswrapper[4826]: I0131 07:37:53.769498 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:53 crc kubenswrapper[4826]: I0131 07:37:53.769508 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:53Z","lastTransitionTime":"2026-01-31T07:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:53 crc kubenswrapper[4826]: I0131 07:37:53.808673 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:37:53 crc kubenswrapper[4826]: I0131 07:37:53.808704 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:37:53 crc kubenswrapper[4826]: E0131 07:37:53.808785 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:37:53 crc kubenswrapper[4826]: E0131 07:37:53.808936 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrw7j" podUID="251ad51e-c383-4684-bfdb-2b9ce8098cc6" Jan 31 07:37:53 crc kubenswrapper[4826]: I0131 07:37:53.820807 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 14:15:43.305181703 +0000 UTC Jan 31 07:37:53 crc kubenswrapper[4826]: I0131 07:37:53.873008 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:53 crc kubenswrapper[4826]: I0131 07:37:53.873054 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:53 crc kubenswrapper[4826]: I0131 07:37:53.873090 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:53 crc kubenswrapper[4826]: I0131 07:37:53.873110 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:53 crc kubenswrapper[4826]: I0131 07:37:53.873125 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:53Z","lastTransitionTime":"2026-01-31T07:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:53 crc kubenswrapper[4826]: I0131 07:37:53.976398 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:53 crc kubenswrapper[4826]: I0131 07:37:53.976461 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:53 crc kubenswrapper[4826]: I0131 07:37:53.976488 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:53 crc kubenswrapper[4826]: I0131 07:37:53.976518 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:53 crc kubenswrapper[4826]: I0131 07:37:53.976542 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:53Z","lastTransitionTime":"2026-01-31T07:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:54 crc kubenswrapper[4826]: I0131 07:37:54.080336 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:54 crc kubenswrapper[4826]: I0131 07:37:54.080445 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:54 crc kubenswrapper[4826]: I0131 07:37:54.080472 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:54 crc kubenswrapper[4826]: I0131 07:37:54.080498 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:54 crc kubenswrapper[4826]: I0131 07:37:54.080516 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:54Z","lastTransitionTime":"2026-01-31T07:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:54 crc kubenswrapper[4826]: I0131 07:37:54.182780 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:54 crc kubenswrapper[4826]: I0131 07:37:54.182840 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:54 crc kubenswrapper[4826]: I0131 07:37:54.182852 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:54 crc kubenswrapper[4826]: I0131 07:37:54.182871 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:54 crc kubenswrapper[4826]: I0131 07:37:54.182883 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:54Z","lastTransitionTime":"2026-01-31T07:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:54 crc kubenswrapper[4826]: I0131 07:37:54.286030 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:54 crc kubenswrapper[4826]: I0131 07:37:54.286068 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:54 crc kubenswrapper[4826]: I0131 07:37:54.286078 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:54 crc kubenswrapper[4826]: I0131 07:37:54.286092 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:54 crc kubenswrapper[4826]: I0131 07:37:54.286102 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:54Z","lastTransitionTime":"2026-01-31T07:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:54 crc kubenswrapper[4826]: I0131 07:37:54.388445 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:54 crc kubenswrapper[4826]: I0131 07:37:54.388515 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:54 crc kubenswrapper[4826]: I0131 07:37:54.388538 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:54 crc kubenswrapper[4826]: I0131 07:37:54.388570 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:54 crc kubenswrapper[4826]: I0131 07:37:54.388594 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:54Z","lastTransitionTime":"2026-01-31T07:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:54 crc kubenswrapper[4826]: I0131 07:37:54.491849 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:54 crc kubenswrapper[4826]: I0131 07:37:54.491905 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:54 crc kubenswrapper[4826]: I0131 07:37:54.491922 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:54 crc kubenswrapper[4826]: I0131 07:37:54.491945 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:54 crc kubenswrapper[4826]: I0131 07:37:54.491962 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:54Z","lastTransitionTime":"2026-01-31T07:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:54 crc kubenswrapper[4826]: I0131 07:37:54.595387 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:54 crc kubenswrapper[4826]: I0131 07:37:54.595443 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:54 crc kubenswrapper[4826]: I0131 07:37:54.595464 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:54 crc kubenswrapper[4826]: I0131 07:37:54.595493 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:54 crc kubenswrapper[4826]: I0131 07:37:54.595513 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:54Z","lastTransitionTime":"2026-01-31T07:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:54 crc kubenswrapper[4826]: I0131 07:37:54.698389 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:54 crc kubenswrapper[4826]: I0131 07:37:54.698454 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:54 crc kubenswrapper[4826]: I0131 07:37:54.698472 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:54 crc kubenswrapper[4826]: I0131 07:37:54.698502 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:54 crc kubenswrapper[4826]: I0131 07:37:54.698520 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:54Z","lastTransitionTime":"2026-01-31T07:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:54 crc kubenswrapper[4826]: I0131 07:37:54.801819 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:54 crc kubenswrapper[4826]: I0131 07:37:54.802577 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:54 crc kubenswrapper[4826]: I0131 07:37:54.802609 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:54 crc kubenswrapper[4826]: I0131 07:37:54.802639 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:54 crc kubenswrapper[4826]: I0131 07:37:54.802659 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:54Z","lastTransitionTime":"2026-01-31T07:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:54 crc kubenswrapper[4826]: I0131 07:37:54.808675 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:37:54 crc kubenswrapper[4826]: E0131 07:37:54.808816 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:37:54 crc kubenswrapper[4826]: I0131 07:37:54.809032 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:37:54 crc kubenswrapper[4826]: E0131 07:37:54.809119 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:37:54 crc kubenswrapper[4826]: I0131 07:37:54.820930 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 19:23:02.938575642 +0000 UTC Jan 31 07:37:54 crc kubenswrapper[4826]: I0131 07:37:54.906694 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:54 crc kubenswrapper[4826]: I0131 07:37:54.906772 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:54 crc kubenswrapper[4826]: I0131 07:37:54.906799 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:54 crc kubenswrapper[4826]: I0131 07:37:54.906834 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:54 crc kubenswrapper[4826]: I0131 07:37:54.906860 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:54Z","lastTransitionTime":"2026-01-31T07:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:55 crc kubenswrapper[4826]: I0131 07:37:55.009622 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:55 crc kubenswrapper[4826]: I0131 07:37:55.009676 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:55 crc kubenswrapper[4826]: I0131 07:37:55.009694 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:55 crc kubenswrapper[4826]: I0131 07:37:55.009717 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:55 crc kubenswrapper[4826]: I0131 07:37:55.009733 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:55Z","lastTransitionTime":"2026-01-31T07:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:55 crc kubenswrapper[4826]: I0131 07:37:55.111859 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:55 crc kubenswrapper[4826]: I0131 07:37:55.111896 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:55 crc kubenswrapper[4826]: I0131 07:37:55.111904 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:55 crc kubenswrapper[4826]: I0131 07:37:55.111917 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:55 crc kubenswrapper[4826]: I0131 07:37:55.111929 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:55Z","lastTransitionTime":"2026-01-31T07:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:55 crc kubenswrapper[4826]: I0131 07:37:55.215282 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:55 crc kubenswrapper[4826]: I0131 07:37:55.215344 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:55 crc kubenswrapper[4826]: I0131 07:37:55.215360 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:55 crc kubenswrapper[4826]: I0131 07:37:55.215389 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:55 crc kubenswrapper[4826]: I0131 07:37:55.215412 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:55Z","lastTransitionTime":"2026-01-31T07:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:55 crc kubenswrapper[4826]: I0131 07:37:55.318862 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:55 crc kubenswrapper[4826]: I0131 07:37:55.318924 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:55 crc kubenswrapper[4826]: I0131 07:37:55.318946 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:55 crc kubenswrapper[4826]: I0131 07:37:55.319006 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:55 crc kubenswrapper[4826]: I0131 07:37:55.319030 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:55Z","lastTransitionTime":"2026-01-31T07:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:55 crc kubenswrapper[4826]: I0131 07:37:55.422042 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:55 crc kubenswrapper[4826]: I0131 07:37:55.422114 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:55 crc kubenswrapper[4826]: I0131 07:37:55.422137 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:55 crc kubenswrapper[4826]: I0131 07:37:55.422168 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:55 crc kubenswrapper[4826]: I0131 07:37:55.422193 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:55Z","lastTransitionTime":"2026-01-31T07:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:55 crc kubenswrapper[4826]: I0131 07:37:55.525577 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:55 crc kubenswrapper[4826]: I0131 07:37:55.525651 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:55 crc kubenswrapper[4826]: I0131 07:37:55.525674 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:55 crc kubenswrapper[4826]: I0131 07:37:55.525703 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:55 crc kubenswrapper[4826]: I0131 07:37:55.525728 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:55Z","lastTransitionTime":"2026-01-31T07:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:55 crc kubenswrapper[4826]: I0131 07:37:55.629730 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:55 crc kubenswrapper[4826]: I0131 07:37:55.629808 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:55 crc kubenswrapper[4826]: I0131 07:37:55.629842 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:55 crc kubenswrapper[4826]: I0131 07:37:55.629875 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:55 crc kubenswrapper[4826]: I0131 07:37:55.629888 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:55Z","lastTransitionTime":"2026-01-31T07:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:55 crc kubenswrapper[4826]: I0131 07:37:55.731573 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:55 crc kubenswrapper[4826]: I0131 07:37:55.731615 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:55 crc kubenswrapper[4826]: I0131 07:37:55.731627 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:55 crc kubenswrapper[4826]: I0131 07:37:55.731644 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:55 crc kubenswrapper[4826]: I0131 07:37:55.731656 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:55Z","lastTransitionTime":"2026-01-31T07:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:55 crc kubenswrapper[4826]: I0131 07:37:55.808470 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:37:55 crc kubenswrapper[4826]: I0131 07:37:55.808525 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:37:55 crc kubenswrapper[4826]: E0131 07:37:55.808615 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrw7j" podUID="251ad51e-c383-4684-bfdb-2b9ce8098cc6" Jan 31 07:37:55 crc kubenswrapper[4826]: E0131 07:37:55.808677 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:37:55 crc kubenswrapper[4826]: I0131 07:37:55.821687 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 12:31:49.126402353 +0000 UTC Jan 31 07:37:55 crc kubenswrapper[4826]: I0131 07:37:55.833140 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:55 crc kubenswrapper[4826]: I0131 07:37:55.833177 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:55 crc kubenswrapper[4826]: I0131 07:37:55.833188 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:55 crc kubenswrapper[4826]: I0131 07:37:55.833205 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:55 crc kubenswrapper[4826]: I0131 07:37:55.833219 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:55Z","lastTransitionTime":"2026-01-31T07:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:55 crc kubenswrapper[4826]: I0131 07:37:55.935712 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:55 crc kubenswrapper[4826]: I0131 07:37:55.935777 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:55 crc kubenswrapper[4826]: I0131 07:37:55.935796 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:55 crc kubenswrapper[4826]: I0131 07:37:55.935820 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:55 crc kubenswrapper[4826]: I0131 07:37:55.935841 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:55Z","lastTransitionTime":"2026-01-31T07:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:56 crc kubenswrapper[4826]: I0131 07:37:56.040206 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:56 crc kubenswrapper[4826]: I0131 07:37:56.040266 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:56 crc kubenswrapper[4826]: I0131 07:37:56.040282 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:56 crc kubenswrapper[4826]: I0131 07:37:56.040303 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:56 crc kubenswrapper[4826]: I0131 07:37:56.040322 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:56Z","lastTransitionTime":"2026-01-31T07:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:56 crc kubenswrapper[4826]: I0131 07:37:56.143610 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:56 crc kubenswrapper[4826]: I0131 07:37:56.143648 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:56 crc kubenswrapper[4826]: I0131 07:37:56.143656 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:56 crc kubenswrapper[4826]: I0131 07:37:56.143671 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:56 crc kubenswrapper[4826]: I0131 07:37:56.143680 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:56Z","lastTransitionTime":"2026-01-31T07:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:56 crc kubenswrapper[4826]: I0131 07:37:56.245928 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:56 crc kubenswrapper[4826]: I0131 07:37:56.245995 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:56 crc kubenswrapper[4826]: I0131 07:37:56.246011 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:56 crc kubenswrapper[4826]: I0131 07:37:56.246030 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:56 crc kubenswrapper[4826]: I0131 07:37:56.246043 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:56Z","lastTransitionTime":"2026-01-31T07:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:56 crc kubenswrapper[4826]: I0131 07:37:56.349121 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:56 crc kubenswrapper[4826]: I0131 07:37:56.349162 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:56 crc kubenswrapper[4826]: I0131 07:37:56.349171 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:56 crc kubenswrapper[4826]: I0131 07:37:56.349186 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:56 crc kubenswrapper[4826]: I0131 07:37:56.349197 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:56Z","lastTransitionTime":"2026-01-31T07:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:56 crc kubenswrapper[4826]: I0131 07:37:56.454710 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:56 crc kubenswrapper[4826]: I0131 07:37:56.454763 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:56 crc kubenswrapper[4826]: I0131 07:37:56.454775 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:56 crc kubenswrapper[4826]: I0131 07:37:56.454795 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:56 crc kubenswrapper[4826]: I0131 07:37:56.454810 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:56Z","lastTransitionTime":"2026-01-31T07:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:56 crc kubenswrapper[4826]: I0131 07:37:56.557391 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:56 crc kubenswrapper[4826]: I0131 07:37:56.557447 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:56 crc kubenswrapper[4826]: I0131 07:37:56.557473 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:56 crc kubenswrapper[4826]: I0131 07:37:56.557501 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:56 crc kubenswrapper[4826]: I0131 07:37:56.557522 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:56Z","lastTransitionTime":"2026-01-31T07:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:56 crc kubenswrapper[4826]: I0131 07:37:56.661183 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:56 crc kubenswrapper[4826]: I0131 07:37:56.661242 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:56 crc kubenswrapper[4826]: I0131 07:37:56.661258 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:56 crc kubenswrapper[4826]: I0131 07:37:56.661284 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:56 crc kubenswrapper[4826]: I0131 07:37:56.661301 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:56Z","lastTransitionTime":"2026-01-31T07:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:56 crc kubenswrapper[4826]: I0131 07:37:56.763442 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:56 crc kubenswrapper[4826]: I0131 07:37:56.763489 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:56 crc kubenswrapper[4826]: I0131 07:37:56.763501 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:56 crc kubenswrapper[4826]: I0131 07:37:56.763518 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:56 crc kubenswrapper[4826]: I0131 07:37:56.763529 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:56Z","lastTransitionTime":"2026-01-31T07:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:56 crc kubenswrapper[4826]: I0131 07:37:56.808267 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:37:56 crc kubenswrapper[4826]: I0131 07:37:56.808305 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:37:56 crc kubenswrapper[4826]: E0131 07:37:56.808515 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:37:56 crc kubenswrapper[4826]: E0131 07:37:56.808542 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:37:56 crc kubenswrapper[4826]: I0131 07:37:56.822752 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 03:34:33.822646172 +0000 UTC Jan 31 07:37:56 crc kubenswrapper[4826]: I0131 07:37:56.866158 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:56 crc kubenswrapper[4826]: I0131 07:37:56.866234 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:56 crc kubenswrapper[4826]: I0131 07:37:56.866259 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:56 crc kubenswrapper[4826]: I0131 07:37:56.866288 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:56 crc kubenswrapper[4826]: I0131 07:37:56.866308 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:56Z","lastTransitionTime":"2026-01-31T07:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:56 crc kubenswrapper[4826]: I0131 07:37:56.969831 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:56 crc kubenswrapper[4826]: I0131 07:37:56.969865 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:56 crc kubenswrapper[4826]: I0131 07:37:56.969875 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:56 crc kubenswrapper[4826]: I0131 07:37:56.969891 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:56 crc kubenswrapper[4826]: I0131 07:37:56.969901 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:56Z","lastTransitionTime":"2026-01-31T07:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.072371 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.072426 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.072441 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.072462 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.072477 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:57Z","lastTransitionTime":"2026-01-31T07:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.174557 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.174605 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.174617 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.174637 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.174649 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:57Z","lastTransitionTime":"2026-01-31T07:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.225230 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.225287 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.225304 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.225327 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.225342 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:57Z","lastTransitionTime":"2026-01-31T07:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:57 crc kubenswrapper[4826]: E0131 07:37:57.242393 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"410cfc83-ff74-4210-b833-727c4d6db644\\\",\\\"systemUUID\\\":\\\"5dc4a50b-5ade-4352-ba95-1ca9483f1f64\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:57Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.247392 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.247442 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.247454 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.247469 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.247480 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:57Z","lastTransitionTime":"2026-01-31T07:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:57 crc kubenswrapper[4826]: E0131 07:37:57.260788 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"410cfc83-ff74-4210-b833-727c4d6db644\\\",\\\"systemUUID\\\":\\\"5dc4a50b-5ade-4352-ba95-1ca9483f1f64\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:57Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.264873 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.264953 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.264981 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.264998 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.265024 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:57Z","lastTransitionTime":"2026-01-31T07:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:57 crc kubenswrapper[4826]: E0131 07:37:57.279063 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"410cfc83-ff74-4210-b833-727c4d6db644\\\",\\\"systemUUID\\\":\\\"5dc4a50b-5ade-4352-ba95-1ca9483f1f64\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:57Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.284150 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.284194 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.284205 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.284219 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.284228 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:57Z","lastTransitionTime":"2026-01-31T07:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:57 crc kubenswrapper[4826]: E0131 07:37:57.296401 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"410cfc83-ff74-4210-b833-727c4d6db644\\\",\\\"systemUUID\\\":\\\"5dc4a50b-5ade-4352-ba95-1ca9483f1f64\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:57Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.300901 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.300946 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.300959 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.301012 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.301062 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:57Z","lastTransitionTime":"2026-01-31T07:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:57 crc kubenswrapper[4826]: E0131 07:37:57.318390 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T07:37:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T07:37:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"410cfc83-ff74-4210-b833-727c4d6db644\\\",\\\"systemUUID\\\":\\\"5dc4a50b-5ade-4352-ba95-1ca9483f1f64\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T07:37:57Z is after 2025-08-24T17:21:41Z" Jan 31 07:37:57 crc kubenswrapper[4826]: E0131 07:37:57.318608 4826 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.320772 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.320832 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.320847 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.320870 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.320891 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:57Z","lastTransitionTime":"2026-01-31T07:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.422859 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.422894 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.422902 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.422915 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.422927 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:57Z","lastTransitionTime":"2026-01-31T07:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.525801 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.525861 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.525879 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.525903 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.525923 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:57Z","lastTransitionTime":"2026-01-31T07:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.628055 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.628144 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.628157 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.628176 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.628190 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:57Z","lastTransitionTime":"2026-01-31T07:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.731404 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.731471 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.731494 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.731524 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.731545 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:57Z","lastTransitionTime":"2026-01-31T07:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.808072 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.808245 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:37:57 crc kubenswrapper[4826]: E0131 07:37:57.808350 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:37:57 crc kubenswrapper[4826]: E0131 07:37:57.808444 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrw7j" podUID="251ad51e-c383-4684-bfdb-2b9ce8098cc6" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.823281 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 20:30:19.050235807 +0000 UTC Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.834692 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.834739 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.834753 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.834769 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.834781 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:57Z","lastTransitionTime":"2026-01-31T07:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.938041 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.938079 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.938092 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.938110 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:57 crc kubenswrapper[4826]: I0131 07:37:57.938123 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:57Z","lastTransitionTime":"2026-01-31T07:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.041997 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.042072 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.042096 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.042126 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.042146 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:58Z","lastTransitionTime":"2026-01-31T07:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.144896 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.144958 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.145007 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.145037 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.145059 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:58Z","lastTransitionTime":"2026-01-31T07:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.248592 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.249338 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.249373 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.249402 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.249423 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:58Z","lastTransitionTime":"2026-01-31T07:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.352735 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.352787 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.352804 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.352825 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.352842 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:58Z","lastTransitionTime":"2026-01-31T07:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.455870 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.455945 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.455997 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.456029 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.456056 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:58Z","lastTransitionTime":"2026-01-31T07:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.571567 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.571613 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.571629 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.571650 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.571666 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:58Z","lastTransitionTime":"2026-01-31T07:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.674272 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.674342 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.674377 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.674404 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.674425 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:58Z","lastTransitionTime":"2026-01-31T07:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.776764 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.776862 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.776884 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.776911 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.776930 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:58Z","lastTransitionTime":"2026-01-31T07:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.808179 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.808596 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:37:58 crc kubenswrapper[4826]: E0131 07:37:58.808783 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:37:58 crc kubenswrapper[4826]: E0131 07:37:58.808907 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.823960 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 21:06:57.037796376 +0000 UTC Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.852753 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=87.852730521 podStartE2EDuration="1m27.852730521s" podCreationTimestamp="2026-01-31 07:36:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:37:58.852643379 +0000 UTC m=+110.706529748" watchObservedRunningTime="2026-01-31 07:37:58.852730521 +0000 UTC m=+110.706616910" Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.871064 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=84.871034345 podStartE2EDuration="1m24.871034345s" podCreationTimestamp="2026-01-31 07:36:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:37:58.870771197 +0000 UTC m=+110.724657556" watchObservedRunningTime="2026-01-31 07:37:58.871034345 +0000 UTC m=+110.724920744" Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.879842 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.879899 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.879917 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.879940 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.879959 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:58Z","lastTransitionTime":"2026-01-31T07:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.955470 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-5fm7w" podStartSLOduration=89.955433147 podStartE2EDuration="1m29.955433147s" podCreationTimestamp="2026-01-31 07:36:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:37:58.936590249 +0000 UTC m=+110.790476648" watchObservedRunningTime="2026-01-31 07:37:58.955433147 +0000 UTC m=+110.809319556" Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.970220 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=33.970199609 podStartE2EDuration="33.970199609s" podCreationTimestamp="2026-01-31 07:37:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:37:58.969760207 +0000 UTC m=+110.823646576" watchObservedRunningTime="2026-01-31 07:37:58.970199609 +0000 UTC m=+110.824085978" Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.982470 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.982724 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.982842 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.982963 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:58 crc kubenswrapper[4826]: I0131 07:37:58.983110 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:58Z","lastTransitionTime":"2026-01-31T07:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:59 crc kubenswrapper[4826]: I0131 07:37:59.006676 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7xbj" podStartSLOduration=89.006649141 podStartE2EDuration="1m29.006649141s" podCreationTimestamp="2026-01-31 07:36:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:37:59.005989423 +0000 UTC m=+110.859875782" watchObservedRunningTime="2026-01-31 07:37:59.006649141 +0000 UTC m=+110.860535540" Jan 31 07:37:59 crc kubenswrapper[4826]: I0131 07:37:59.027762 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=91.027738374 podStartE2EDuration="1m31.027738374s" podCreationTimestamp="2026-01-31 07:36:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:37:59.027726014 +0000 UTC m=+110.881612373" watchObservedRunningTime="2026-01-31 07:37:59.027738374 +0000 UTC m=+110.881624743" Jan 31 07:37:59 crc kubenswrapper[4826]: I0131 07:37:59.055084 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=61.055052445 podStartE2EDuration="1m1.055052445s" podCreationTimestamp="2026-01-31 07:36:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:37:59.053654165 +0000 UTC m=+110.907540584" watchObservedRunningTime="2026-01-31 07:37:59.055052445 +0000 UTC m=+110.908938844" Jan 31 07:37:59 crc kubenswrapper[4826]: I0131 07:37:59.086267 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:59 crc kubenswrapper[4826]: I0131 07:37:59.086305 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:59 crc kubenswrapper[4826]: I0131 07:37:59.086313 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:59 crc kubenswrapper[4826]: I0131 07:37:59.086330 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:59 crc kubenswrapper[4826]: I0131 07:37:59.086339 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:59Z","lastTransitionTime":"2026-01-31T07:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:59 crc kubenswrapper[4826]: I0131 07:37:59.143597 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-8tp2c" podStartSLOduration=90.143579596 podStartE2EDuration="1m30.143579596s" podCreationTimestamp="2026-01-31 07:36:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:37:59.121401432 +0000 UTC m=+110.975287801" watchObservedRunningTime="2026-01-31 07:37:59.143579596 +0000 UTC m=+110.997465955" Jan 31 07:37:59 crc kubenswrapper[4826]: I0131 07:37:59.154537 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-6lwnf" podStartSLOduration=90.154515688 podStartE2EDuration="1m30.154515688s" podCreationTimestamp="2026-01-31 07:36:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:37:59.153913291 +0000 UTC m=+111.007799650" watchObservedRunningTime="2026-01-31 07:37:59.154515688 +0000 UTC m=+111.008402047" Jan 31 07:37:59 crc kubenswrapper[4826]: I0131 07:37:59.188885 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:59 crc kubenswrapper[4826]: I0131 07:37:59.189199 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:59 crc kubenswrapper[4826]: I0131 07:37:59.189290 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:59 crc kubenswrapper[4826]: I0131 07:37:59.189405 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:59 crc kubenswrapper[4826]: I0131 07:37:59.189522 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:59Z","lastTransitionTime":"2026-01-31T07:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:59 crc kubenswrapper[4826]: I0131 07:37:59.197918 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podStartSLOduration=90.197896929 podStartE2EDuration="1m30.197896929s" podCreationTimestamp="2026-01-31 07:36:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:37:59.197325052 +0000 UTC m=+111.051211421" watchObservedRunningTime="2026-01-31 07:37:59.197896929 +0000 UTC m=+111.051783298" Jan 31 07:37:59 crc kubenswrapper[4826]: I0131 07:37:59.291770 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:59 crc kubenswrapper[4826]: I0131 07:37:59.291808 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:59 crc kubenswrapper[4826]: I0131 07:37:59.291816 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:59 crc kubenswrapper[4826]: I0131 07:37:59.291831 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:59 crc kubenswrapper[4826]: I0131 07:37:59.291840 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:59Z","lastTransitionTime":"2026-01-31T07:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:59 crc kubenswrapper[4826]: I0131 07:37:59.395262 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:59 crc kubenswrapper[4826]: I0131 07:37:59.395303 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:59 crc kubenswrapper[4826]: I0131 07:37:59.395314 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:59 crc kubenswrapper[4826]: I0131 07:37:59.395332 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:59 crc kubenswrapper[4826]: I0131 07:37:59.395345 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:59Z","lastTransitionTime":"2026-01-31T07:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:59 crc kubenswrapper[4826]: I0131 07:37:59.498269 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:59 crc kubenswrapper[4826]: I0131 07:37:59.498333 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:59 crc kubenswrapper[4826]: I0131 07:37:59.498353 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:59 crc kubenswrapper[4826]: I0131 07:37:59.498378 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:59 crc kubenswrapper[4826]: I0131 07:37:59.498395 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:59Z","lastTransitionTime":"2026-01-31T07:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:59 crc kubenswrapper[4826]: I0131 07:37:59.601649 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:59 crc kubenswrapper[4826]: I0131 07:37:59.601698 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:59 crc kubenswrapper[4826]: I0131 07:37:59.601715 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:59 crc kubenswrapper[4826]: I0131 07:37:59.601738 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:59 crc kubenswrapper[4826]: I0131 07:37:59.601755 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:59Z","lastTransitionTime":"2026-01-31T07:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:59 crc kubenswrapper[4826]: I0131 07:37:59.704908 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:59 crc kubenswrapper[4826]: I0131 07:37:59.705009 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:59 crc kubenswrapper[4826]: I0131 07:37:59.705054 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:59 crc kubenswrapper[4826]: I0131 07:37:59.705093 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:59 crc kubenswrapper[4826]: I0131 07:37:59.705112 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:59Z","lastTransitionTime":"2026-01-31T07:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:37:59 crc kubenswrapper[4826]: I0131 07:37:59.807536 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:59 crc kubenswrapper[4826]: I0131 07:37:59.807577 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:59 crc kubenswrapper[4826]: I0131 07:37:59.807586 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:59 crc kubenswrapper[4826]: I0131 07:37:59.807601 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:59 crc kubenswrapper[4826]: I0131 07:37:59.807612 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:59Z","lastTransitionTime":"2026-01-31T07:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:37:59 crc kubenswrapper[4826]: I0131 07:37:59.807821 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:37:59 crc kubenswrapper[4826]: I0131 07:37:59.807918 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:37:59 crc kubenswrapper[4826]: E0131 07:37:59.808045 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrw7j" podUID="251ad51e-c383-4684-bfdb-2b9ce8098cc6" Jan 31 07:37:59 crc kubenswrapper[4826]: E0131 07:37:59.808452 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:37:59 crc kubenswrapper[4826]: I0131 07:37:59.824898 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 11:40:06.190579135 +0000 UTC Jan 31 07:37:59 crc kubenswrapper[4826]: I0131 07:37:59.910434 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:37:59 crc kubenswrapper[4826]: I0131 07:37:59.910501 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:37:59 crc kubenswrapper[4826]: I0131 07:37:59.910523 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:37:59 crc kubenswrapper[4826]: I0131 07:37:59.910550 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:37:59 crc kubenswrapper[4826]: I0131 07:37:59.910574 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:37:59Z","lastTransitionTime":"2026-01-31T07:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:38:00 crc kubenswrapper[4826]: I0131 07:38:00.013248 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:00 crc kubenswrapper[4826]: I0131 07:38:00.013310 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:00 crc kubenswrapper[4826]: I0131 07:38:00.013334 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:00 crc kubenswrapper[4826]: I0131 07:38:00.013368 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:00 crc kubenswrapper[4826]: I0131 07:38:00.013394 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:00Z","lastTransitionTime":"2026-01-31T07:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:38:00 crc kubenswrapper[4826]: I0131 07:38:00.116534 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:00 crc kubenswrapper[4826]: I0131 07:38:00.116609 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:00 crc kubenswrapper[4826]: I0131 07:38:00.116633 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:00 crc kubenswrapper[4826]: I0131 07:38:00.116665 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:00 crc kubenswrapper[4826]: I0131 07:38:00.116688 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:00Z","lastTransitionTime":"2026-01-31T07:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:38:00 crc kubenswrapper[4826]: I0131 07:38:00.219609 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:00 crc kubenswrapper[4826]: I0131 07:38:00.219651 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:00 crc kubenswrapper[4826]: I0131 07:38:00.219662 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:00 crc kubenswrapper[4826]: I0131 07:38:00.219680 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:00 crc kubenswrapper[4826]: I0131 07:38:00.219690 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:00Z","lastTransitionTime":"2026-01-31T07:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:38:00 crc kubenswrapper[4826]: I0131 07:38:00.323075 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:00 crc kubenswrapper[4826]: I0131 07:38:00.323361 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:00 crc kubenswrapper[4826]: I0131 07:38:00.323435 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:00 crc kubenswrapper[4826]: I0131 07:38:00.323464 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:00 crc kubenswrapper[4826]: I0131 07:38:00.323483 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:00Z","lastTransitionTime":"2026-01-31T07:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:38:00 crc kubenswrapper[4826]: I0131 07:38:00.426221 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:00 crc kubenswrapper[4826]: I0131 07:38:00.426264 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:00 crc kubenswrapper[4826]: I0131 07:38:00.426279 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:00 crc kubenswrapper[4826]: I0131 07:38:00.426304 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:00 crc kubenswrapper[4826]: I0131 07:38:00.426323 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:00Z","lastTransitionTime":"2026-01-31T07:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:38:00 crc kubenswrapper[4826]: I0131 07:38:00.528734 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:00 crc kubenswrapper[4826]: I0131 07:38:00.528807 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:00 crc kubenswrapper[4826]: I0131 07:38:00.528872 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:00 crc kubenswrapper[4826]: I0131 07:38:00.528910 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:00 crc kubenswrapper[4826]: I0131 07:38:00.528943 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:00Z","lastTransitionTime":"2026-01-31T07:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:38:00 crc kubenswrapper[4826]: I0131 07:38:00.636261 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:00 crc kubenswrapper[4826]: I0131 07:38:00.636334 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:00 crc kubenswrapper[4826]: I0131 07:38:00.636356 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:00 crc kubenswrapper[4826]: I0131 07:38:00.636386 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:00 crc kubenswrapper[4826]: I0131 07:38:00.636577 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:00Z","lastTransitionTime":"2026-01-31T07:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:38:00 crc kubenswrapper[4826]: I0131 07:38:00.739949 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:00 crc kubenswrapper[4826]: I0131 07:38:00.740030 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:00 crc kubenswrapper[4826]: I0131 07:38:00.740047 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:00 crc kubenswrapper[4826]: I0131 07:38:00.740070 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:00 crc kubenswrapper[4826]: I0131 07:38:00.740087 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:00Z","lastTransitionTime":"2026-01-31T07:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:38:00 crc kubenswrapper[4826]: I0131 07:38:00.809261 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:38:00 crc kubenswrapper[4826]: E0131 07:38:00.809443 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:38:00 crc kubenswrapper[4826]: I0131 07:38:00.809522 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:38:00 crc kubenswrapper[4826]: E0131 07:38:00.809704 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:38:00 crc kubenswrapper[4826]: I0131 07:38:00.825036 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 05:38:24.154050843 +0000 UTC Jan 31 07:38:00 crc kubenswrapper[4826]: I0131 07:38:00.842837 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:00 crc kubenswrapper[4826]: I0131 07:38:00.842896 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:00 crc kubenswrapper[4826]: I0131 07:38:00.842914 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:00 crc kubenswrapper[4826]: I0131 07:38:00.842943 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:00 crc kubenswrapper[4826]: I0131 07:38:00.842964 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:00Z","lastTransitionTime":"2026-01-31T07:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:38:00 crc kubenswrapper[4826]: I0131 07:38:00.946420 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:00 crc kubenswrapper[4826]: I0131 07:38:00.946474 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:00 crc kubenswrapper[4826]: I0131 07:38:00.946490 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:00 crc kubenswrapper[4826]: I0131 07:38:00.946530 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:00 crc kubenswrapper[4826]: I0131 07:38:00.946550 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:00Z","lastTransitionTime":"2026-01-31T07:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:38:01 crc kubenswrapper[4826]: I0131 07:38:01.050191 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:01 crc kubenswrapper[4826]: I0131 07:38:01.050262 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:01 crc kubenswrapper[4826]: I0131 07:38:01.050283 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:01 crc kubenswrapper[4826]: I0131 07:38:01.050309 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:01 crc kubenswrapper[4826]: I0131 07:38:01.050328 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:01Z","lastTransitionTime":"2026-01-31T07:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:38:01 crc kubenswrapper[4826]: I0131 07:38:01.152726 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:01 crc kubenswrapper[4826]: I0131 07:38:01.152800 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:01 crc kubenswrapper[4826]: I0131 07:38:01.152826 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:01 crc kubenswrapper[4826]: I0131 07:38:01.152855 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:01 crc kubenswrapper[4826]: I0131 07:38:01.152877 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:01Z","lastTransitionTime":"2026-01-31T07:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:38:01 crc kubenswrapper[4826]: I0131 07:38:01.255781 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:01 crc kubenswrapper[4826]: I0131 07:38:01.255853 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:01 crc kubenswrapper[4826]: I0131 07:38:01.255876 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:01 crc kubenswrapper[4826]: I0131 07:38:01.255905 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:01 crc kubenswrapper[4826]: I0131 07:38:01.255930 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:01Z","lastTransitionTime":"2026-01-31T07:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:38:01 crc kubenswrapper[4826]: I0131 07:38:01.359467 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:01 crc kubenswrapper[4826]: I0131 07:38:01.359584 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:01 crc kubenswrapper[4826]: I0131 07:38:01.359610 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:01 crc kubenswrapper[4826]: I0131 07:38:01.359768 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:01 crc kubenswrapper[4826]: I0131 07:38:01.359802 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:01Z","lastTransitionTime":"2026-01-31T07:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:38:01 crc kubenswrapper[4826]: I0131 07:38:01.462086 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:01 crc kubenswrapper[4826]: I0131 07:38:01.462161 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:01 crc kubenswrapper[4826]: I0131 07:38:01.462184 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:01 crc kubenswrapper[4826]: I0131 07:38:01.462213 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:01 crc kubenswrapper[4826]: I0131 07:38:01.462234 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:01Z","lastTransitionTime":"2026-01-31T07:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:38:01 crc kubenswrapper[4826]: I0131 07:38:01.564189 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:01 crc kubenswrapper[4826]: I0131 07:38:01.564236 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:01 crc kubenswrapper[4826]: I0131 07:38:01.564246 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:01 crc kubenswrapper[4826]: I0131 07:38:01.564260 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:01 crc kubenswrapper[4826]: I0131 07:38:01.564270 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:01Z","lastTransitionTime":"2026-01-31T07:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:38:01 crc kubenswrapper[4826]: I0131 07:38:01.666233 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:01 crc kubenswrapper[4826]: I0131 07:38:01.666305 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:01 crc kubenswrapper[4826]: I0131 07:38:01.666328 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:01 crc kubenswrapper[4826]: I0131 07:38:01.666366 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:01 crc kubenswrapper[4826]: I0131 07:38:01.666389 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:01Z","lastTransitionTime":"2026-01-31T07:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:38:01 crc kubenswrapper[4826]: I0131 07:38:01.769273 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:01 crc kubenswrapper[4826]: I0131 07:38:01.769331 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:01 crc kubenswrapper[4826]: I0131 07:38:01.769348 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:01 crc kubenswrapper[4826]: I0131 07:38:01.769373 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:01 crc kubenswrapper[4826]: I0131 07:38:01.769391 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:01Z","lastTransitionTime":"2026-01-31T07:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:38:01 crc kubenswrapper[4826]: I0131 07:38:01.808717 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:38:01 crc kubenswrapper[4826]: I0131 07:38:01.808835 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:38:01 crc kubenswrapper[4826]: E0131 07:38:01.808919 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:38:01 crc kubenswrapper[4826]: E0131 07:38:01.809025 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrw7j" podUID="251ad51e-c383-4684-bfdb-2b9ce8098cc6" Jan 31 07:38:01 crc kubenswrapper[4826]: I0131 07:38:01.809757 4826 scope.go:117] "RemoveContainer" containerID="d893aaee284ab112e56f303b51894196d333364ced205c5f2b1d55f1c21b5f33" Jan 31 07:38:01 crc kubenswrapper[4826]: E0131 07:38:01.809953 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qvwnb_openshift-ovn-kubernetes(5db04412-b62f-417c-91ac-776767d6102f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" podUID="5db04412-b62f-417c-91ac-776767d6102f" Jan 31 07:38:01 crc kubenswrapper[4826]: I0131 07:38:01.826094 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 12:32:41.187784514 +0000 UTC Jan 31 07:38:01 crc kubenswrapper[4826]: I0131 07:38:01.872113 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:01 crc kubenswrapper[4826]: I0131 07:38:01.872156 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:01 crc kubenswrapper[4826]: I0131 07:38:01.872169 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:01 crc kubenswrapper[4826]: I0131 07:38:01.872185 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:01 crc kubenswrapper[4826]: I0131 07:38:01.872200 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:01Z","lastTransitionTime":"2026-01-31T07:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:38:01 crc kubenswrapper[4826]: I0131 07:38:01.973768 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:01 crc kubenswrapper[4826]: I0131 07:38:01.973818 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:01 crc kubenswrapper[4826]: I0131 07:38:01.973830 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:01 crc kubenswrapper[4826]: I0131 07:38:01.973847 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:01 crc kubenswrapper[4826]: I0131 07:38:01.973858 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:01Z","lastTransitionTime":"2026-01-31T07:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:38:02 crc kubenswrapper[4826]: I0131 07:38:02.076831 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:02 crc kubenswrapper[4826]: I0131 07:38:02.076896 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:02 crc kubenswrapper[4826]: I0131 07:38:02.076914 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:02 crc kubenswrapper[4826]: I0131 07:38:02.077011 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:02 crc kubenswrapper[4826]: I0131 07:38:02.077055 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:02Z","lastTransitionTime":"2026-01-31T07:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:38:02 crc kubenswrapper[4826]: I0131 07:38:02.180471 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:02 crc kubenswrapper[4826]: I0131 07:38:02.180528 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:02 crc kubenswrapper[4826]: I0131 07:38:02.180539 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:02 crc kubenswrapper[4826]: I0131 07:38:02.180557 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:02 crc kubenswrapper[4826]: I0131 07:38:02.180568 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:02Z","lastTransitionTime":"2026-01-31T07:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:38:02 crc kubenswrapper[4826]: I0131 07:38:02.283240 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:02 crc kubenswrapper[4826]: I0131 07:38:02.283291 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:02 crc kubenswrapper[4826]: I0131 07:38:02.283302 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:02 crc kubenswrapper[4826]: I0131 07:38:02.283318 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:02 crc kubenswrapper[4826]: I0131 07:38:02.283329 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:02Z","lastTransitionTime":"2026-01-31T07:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:38:02 crc kubenswrapper[4826]: I0131 07:38:02.386357 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:02 crc kubenswrapper[4826]: I0131 07:38:02.386419 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:02 crc kubenswrapper[4826]: I0131 07:38:02.386437 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:02 crc kubenswrapper[4826]: I0131 07:38:02.386461 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:02 crc kubenswrapper[4826]: I0131 07:38:02.386480 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:02Z","lastTransitionTime":"2026-01-31T07:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:38:02 crc kubenswrapper[4826]: I0131 07:38:02.491338 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:02 crc kubenswrapper[4826]: I0131 07:38:02.491396 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:02 crc kubenswrapper[4826]: I0131 07:38:02.491419 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:02 crc kubenswrapper[4826]: I0131 07:38:02.491448 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:02 crc kubenswrapper[4826]: I0131 07:38:02.491465 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:02Z","lastTransitionTime":"2026-01-31T07:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:38:02 crc kubenswrapper[4826]: I0131 07:38:02.594318 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:02 crc kubenswrapper[4826]: I0131 07:38:02.594381 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:02 crc kubenswrapper[4826]: I0131 07:38:02.594392 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:02 crc kubenswrapper[4826]: I0131 07:38:02.594407 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:02 crc kubenswrapper[4826]: I0131 07:38:02.594417 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:02Z","lastTransitionTime":"2026-01-31T07:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:38:02 crc kubenswrapper[4826]: I0131 07:38:02.697700 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:02 crc kubenswrapper[4826]: I0131 07:38:02.697784 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:02 crc kubenswrapper[4826]: I0131 07:38:02.697804 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:02 crc kubenswrapper[4826]: I0131 07:38:02.697833 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:02 crc kubenswrapper[4826]: I0131 07:38:02.697851 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:02Z","lastTransitionTime":"2026-01-31T07:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:38:02 crc kubenswrapper[4826]: I0131 07:38:02.800908 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:02 crc kubenswrapper[4826]: I0131 07:38:02.800940 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:02 crc kubenswrapper[4826]: I0131 07:38:02.800950 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:02 crc kubenswrapper[4826]: I0131 07:38:02.800999 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:02 crc kubenswrapper[4826]: I0131 07:38:02.801009 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:02Z","lastTransitionTime":"2026-01-31T07:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:38:02 crc kubenswrapper[4826]: I0131 07:38:02.808452 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:38:02 crc kubenswrapper[4826]: I0131 07:38:02.808469 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:38:02 crc kubenswrapper[4826]: E0131 07:38:02.808676 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:38:02 crc kubenswrapper[4826]: E0131 07:38:02.808777 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:38:02 crc kubenswrapper[4826]: I0131 07:38:02.826842 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 17:14:21.386287016 +0000 UTC Jan 31 07:38:02 crc kubenswrapper[4826]: I0131 07:38:02.903963 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:02 crc kubenswrapper[4826]: I0131 07:38:02.904059 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:02 crc kubenswrapper[4826]: I0131 07:38:02.904073 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:02 crc kubenswrapper[4826]: I0131 07:38:02.904092 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:02 crc kubenswrapper[4826]: I0131 07:38:02.904105 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:02Z","lastTransitionTime":"2026-01-31T07:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:38:03 crc kubenswrapper[4826]: I0131 07:38:03.006264 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:03 crc kubenswrapper[4826]: I0131 07:38:03.006398 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:03 crc kubenswrapper[4826]: I0131 07:38:03.006416 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:03 crc kubenswrapper[4826]: I0131 07:38:03.006445 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:03 crc kubenswrapper[4826]: I0131 07:38:03.006462 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:03Z","lastTransitionTime":"2026-01-31T07:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:38:03 crc kubenswrapper[4826]: I0131 07:38:03.108491 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:03 crc kubenswrapper[4826]: I0131 07:38:03.108526 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:03 crc kubenswrapper[4826]: I0131 07:38:03.108537 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:03 crc kubenswrapper[4826]: I0131 07:38:03.108551 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:03 crc kubenswrapper[4826]: I0131 07:38:03.108563 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:03Z","lastTransitionTime":"2026-01-31T07:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:38:03 crc kubenswrapper[4826]: I0131 07:38:03.211741 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:03 crc kubenswrapper[4826]: I0131 07:38:03.211813 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:03 crc kubenswrapper[4826]: I0131 07:38:03.211828 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:03 crc kubenswrapper[4826]: I0131 07:38:03.211851 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:03 crc kubenswrapper[4826]: I0131 07:38:03.211865 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:03Z","lastTransitionTime":"2026-01-31T07:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:38:03 crc kubenswrapper[4826]: I0131 07:38:03.315204 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:03 crc kubenswrapper[4826]: I0131 07:38:03.315274 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:03 crc kubenswrapper[4826]: I0131 07:38:03.315287 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:03 crc kubenswrapper[4826]: I0131 07:38:03.315304 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:03 crc kubenswrapper[4826]: I0131 07:38:03.315316 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:03Z","lastTransitionTime":"2026-01-31T07:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:38:03 crc kubenswrapper[4826]: I0131 07:38:03.418002 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:03 crc kubenswrapper[4826]: I0131 07:38:03.418098 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:03 crc kubenswrapper[4826]: I0131 07:38:03.418160 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:03 crc kubenswrapper[4826]: I0131 07:38:03.418191 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:03 crc kubenswrapper[4826]: I0131 07:38:03.418208 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:03Z","lastTransitionTime":"2026-01-31T07:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:38:03 crc kubenswrapper[4826]: I0131 07:38:03.521312 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:03 crc kubenswrapper[4826]: I0131 07:38:03.521367 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:03 crc kubenswrapper[4826]: I0131 07:38:03.521383 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:03 crc kubenswrapper[4826]: I0131 07:38:03.521407 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:03 crc kubenswrapper[4826]: I0131 07:38:03.521426 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:03Z","lastTransitionTime":"2026-01-31T07:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:38:03 crc kubenswrapper[4826]: I0131 07:38:03.625429 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:03 crc kubenswrapper[4826]: I0131 07:38:03.625503 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:03 crc kubenswrapper[4826]: I0131 07:38:03.625581 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:03 crc kubenswrapper[4826]: I0131 07:38:03.625617 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:03 crc kubenswrapper[4826]: I0131 07:38:03.625641 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:03Z","lastTransitionTime":"2026-01-31T07:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:38:03 crc kubenswrapper[4826]: I0131 07:38:03.728564 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:03 crc kubenswrapper[4826]: I0131 07:38:03.728629 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:03 crc kubenswrapper[4826]: I0131 07:38:03.728648 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:03 crc kubenswrapper[4826]: I0131 07:38:03.728673 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:03 crc kubenswrapper[4826]: I0131 07:38:03.728691 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:03Z","lastTransitionTime":"2026-01-31T07:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:38:03 crc kubenswrapper[4826]: I0131 07:38:03.808891 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:38:03 crc kubenswrapper[4826]: E0131 07:38:03.809607 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrw7j" podUID="251ad51e-c383-4684-bfdb-2b9ce8098cc6" Jan 31 07:38:03 crc kubenswrapper[4826]: I0131 07:38:03.809204 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:38:03 crc kubenswrapper[4826]: E0131 07:38:03.809941 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:38:03 crc kubenswrapper[4826]: I0131 07:38:03.827557 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 10:46:21.236403603 +0000 UTC Jan 31 07:38:03 crc kubenswrapper[4826]: I0131 07:38:03.832712 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:03 crc kubenswrapper[4826]: I0131 07:38:03.832756 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:03 crc kubenswrapper[4826]: I0131 07:38:03.832772 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:03 crc kubenswrapper[4826]: I0131 07:38:03.832794 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:03 crc kubenswrapper[4826]: I0131 07:38:03.832811 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:03Z","lastTransitionTime":"2026-01-31T07:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:38:03 crc kubenswrapper[4826]: I0131 07:38:03.936152 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:03 crc kubenswrapper[4826]: I0131 07:38:03.936194 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:03 crc kubenswrapper[4826]: I0131 07:38:03.936203 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:03 crc kubenswrapper[4826]: I0131 07:38:03.936219 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:03 crc kubenswrapper[4826]: I0131 07:38:03.936230 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:03Z","lastTransitionTime":"2026-01-31T07:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.038768 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.038826 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.038839 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.038857 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.038870 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:04Z","lastTransitionTime":"2026-01-31T07:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.142657 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.142727 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.142753 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.142783 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.142804 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:04Z","lastTransitionTime":"2026-01-31T07:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.246744 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.246801 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.246821 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.246847 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.246889 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:04Z","lastTransitionTime":"2026-01-31T07:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.350388 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.350488 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.350509 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.350539 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.350562 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:04Z","lastTransitionTime":"2026-01-31T07:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.453394 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.453442 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.453453 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.453469 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.453481 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:04Z","lastTransitionTime":"2026-01-31T07:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.468144 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wtbb9_b672fd90-a70c-4f27-b711-e58f269efccd/kube-multus/1.log" Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.468490 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wtbb9_b672fd90-a70c-4f27-b711-e58f269efccd/kube-multus/0.log" Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.468548 4826 generic.go:334] "Generic (PLEG): container finished" podID="b672fd90-a70c-4f27-b711-e58f269efccd" containerID="0f290a2cfcec4d14d2b2c21f7c56558675b8b7171bf9d1895d89f632cb35b815" exitCode=1 Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.468599 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wtbb9" event={"ID":"b672fd90-a70c-4f27-b711-e58f269efccd","Type":"ContainerDied","Data":"0f290a2cfcec4d14d2b2c21f7c56558675b8b7171bf9d1895d89f632cb35b815"} Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.468642 4826 scope.go:117] "RemoveContainer" containerID="3dc6a9e1063a189fc3331ef1bc5bd05ac611295089d1da241ca07c529e63ab7e" Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.469775 4826 scope.go:117] "RemoveContainer" containerID="0f290a2cfcec4d14d2b2c21f7c56558675b8b7171bf9d1895d89f632cb35b815" Jan 31 07:38:04 crc kubenswrapper[4826]: E0131 07:38:04.470183 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-wtbb9_openshift-multus(b672fd90-a70c-4f27-b711-e58f269efccd)\"" pod="openshift-multus/multus-wtbb9" podUID="b672fd90-a70c-4f27-b711-e58f269efccd" Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.556419 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.556702 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.556874 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.557078 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.557259 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:04Z","lastTransitionTime":"2026-01-31T07:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.660826 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.660900 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.660922 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.660950 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.661019 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:04Z","lastTransitionTime":"2026-01-31T07:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.764276 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.764348 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.764374 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.764401 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.764417 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:04Z","lastTransitionTime":"2026-01-31T07:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.808069 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:38:04 crc kubenswrapper[4826]: E0131 07:38:04.808247 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.808347 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:38:04 crc kubenswrapper[4826]: E0131 07:38:04.808552 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.828530 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 07:52:32.950008172 +0000 UTC Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.867481 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.867535 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.867552 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.867574 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.867592 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:04Z","lastTransitionTime":"2026-01-31T07:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.970778 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.970865 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.970881 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.970905 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:04 crc kubenswrapper[4826]: I0131 07:38:04.970923 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:04Z","lastTransitionTime":"2026-01-31T07:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:38:05 crc kubenswrapper[4826]: I0131 07:38:05.074076 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:05 crc kubenswrapper[4826]: I0131 07:38:05.074131 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:05 crc kubenswrapper[4826]: I0131 07:38:05.074147 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:05 crc kubenswrapper[4826]: I0131 07:38:05.074170 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:05 crc kubenswrapper[4826]: I0131 07:38:05.074186 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:05Z","lastTransitionTime":"2026-01-31T07:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:38:05 crc kubenswrapper[4826]: I0131 07:38:05.177377 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:05 crc kubenswrapper[4826]: I0131 07:38:05.177695 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:05 crc kubenswrapper[4826]: I0131 07:38:05.177870 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:05 crc kubenswrapper[4826]: I0131 07:38:05.178081 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:05 crc kubenswrapper[4826]: I0131 07:38:05.178209 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:05Z","lastTransitionTime":"2026-01-31T07:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:38:05 crc kubenswrapper[4826]: I0131 07:38:05.281013 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:05 crc kubenswrapper[4826]: I0131 07:38:05.281274 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:05 crc kubenswrapper[4826]: I0131 07:38:05.281334 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:05 crc kubenswrapper[4826]: I0131 07:38:05.281397 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:05 crc kubenswrapper[4826]: I0131 07:38:05.281455 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:05Z","lastTransitionTime":"2026-01-31T07:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:38:05 crc kubenswrapper[4826]: I0131 07:38:05.384327 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:05 crc kubenswrapper[4826]: I0131 07:38:05.384764 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:05 crc kubenswrapper[4826]: I0131 07:38:05.385018 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:05 crc kubenswrapper[4826]: I0131 07:38:05.385219 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:05 crc kubenswrapper[4826]: I0131 07:38:05.385360 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:05Z","lastTransitionTime":"2026-01-31T07:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:38:05 crc kubenswrapper[4826]: I0131 07:38:05.474804 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wtbb9_b672fd90-a70c-4f27-b711-e58f269efccd/kube-multus/1.log" Jan 31 07:38:05 crc kubenswrapper[4826]: I0131 07:38:05.487752 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:05 crc kubenswrapper[4826]: I0131 07:38:05.487800 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:05 crc kubenswrapper[4826]: I0131 07:38:05.487818 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:05 crc kubenswrapper[4826]: I0131 07:38:05.487840 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:05 crc kubenswrapper[4826]: I0131 07:38:05.487861 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:05Z","lastTransitionTime":"2026-01-31T07:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:38:05 crc kubenswrapper[4826]: I0131 07:38:05.590645 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:05 crc kubenswrapper[4826]: I0131 07:38:05.590715 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:05 crc kubenswrapper[4826]: I0131 07:38:05.590732 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:05 crc kubenswrapper[4826]: I0131 07:38:05.590758 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:05 crc kubenswrapper[4826]: I0131 07:38:05.590777 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:05Z","lastTransitionTime":"2026-01-31T07:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:38:05 crc kubenswrapper[4826]: I0131 07:38:05.693701 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:05 crc kubenswrapper[4826]: I0131 07:38:05.693758 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:05 crc kubenswrapper[4826]: I0131 07:38:05.693769 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:05 crc kubenswrapper[4826]: I0131 07:38:05.693783 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:05 crc kubenswrapper[4826]: I0131 07:38:05.693792 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:05Z","lastTransitionTime":"2026-01-31T07:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:38:05 crc kubenswrapper[4826]: I0131 07:38:05.796643 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:05 crc kubenswrapper[4826]: I0131 07:38:05.797087 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:05 crc kubenswrapper[4826]: I0131 07:38:05.797239 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:05 crc kubenswrapper[4826]: I0131 07:38:05.797387 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:05 crc kubenswrapper[4826]: I0131 07:38:05.797517 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:05Z","lastTransitionTime":"2026-01-31T07:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:38:05 crc kubenswrapper[4826]: I0131 07:38:05.808287 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:38:05 crc kubenswrapper[4826]: I0131 07:38:05.808363 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:38:05 crc kubenswrapper[4826]: E0131 07:38:05.808842 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrw7j" podUID="251ad51e-c383-4684-bfdb-2b9ce8098cc6" Jan 31 07:38:05 crc kubenswrapper[4826]: E0131 07:38:05.808680 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:38:05 crc kubenswrapper[4826]: I0131 07:38:05.828606 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 21:57:33.032707348 +0000 UTC Jan 31 07:38:05 crc kubenswrapper[4826]: I0131 07:38:05.904584 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:05 crc kubenswrapper[4826]: I0131 07:38:05.905048 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:05 crc kubenswrapper[4826]: I0131 07:38:05.905207 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:05 crc kubenswrapper[4826]: I0131 07:38:05.905362 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:05 crc kubenswrapper[4826]: I0131 07:38:05.905488 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:05Z","lastTransitionTime":"2026-01-31T07:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:38:06 crc kubenswrapper[4826]: I0131 07:38:06.009094 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:06 crc kubenswrapper[4826]: I0131 07:38:06.009151 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:06 crc kubenswrapper[4826]: I0131 07:38:06.009171 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:06 crc kubenswrapper[4826]: I0131 07:38:06.009200 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:06 crc kubenswrapper[4826]: I0131 07:38:06.009224 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:06Z","lastTransitionTime":"2026-01-31T07:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:38:06 crc kubenswrapper[4826]: I0131 07:38:06.111888 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:06 crc kubenswrapper[4826]: I0131 07:38:06.112014 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:06 crc kubenswrapper[4826]: I0131 07:38:06.112034 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:06 crc kubenswrapper[4826]: I0131 07:38:06.112058 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:06 crc kubenswrapper[4826]: I0131 07:38:06.112075 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:06Z","lastTransitionTime":"2026-01-31T07:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:38:06 crc kubenswrapper[4826]: I0131 07:38:06.215508 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:06 crc kubenswrapper[4826]: I0131 07:38:06.215554 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:06 crc kubenswrapper[4826]: I0131 07:38:06.215566 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:06 crc kubenswrapper[4826]: I0131 07:38:06.215582 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:06 crc kubenswrapper[4826]: I0131 07:38:06.215594 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:06Z","lastTransitionTime":"2026-01-31T07:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:38:06 crc kubenswrapper[4826]: I0131 07:38:06.318245 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:06 crc kubenswrapper[4826]: I0131 07:38:06.318288 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:06 crc kubenswrapper[4826]: I0131 07:38:06.318299 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:06 crc kubenswrapper[4826]: I0131 07:38:06.318317 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:06 crc kubenswrapper[4826]: I0131 07:38:06.318328 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:06Z","lastTransitionTime":"2026-01-31T07:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:38:06 crc kubenswrapper[4826]: I0131 07:38:06.420760 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:06 crc kubenswrapper[4826]: I0131 07:38:06.420817 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:06 crc kubenswrapper[4826]: I0131 07:38:06.420829 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:06 crc kubenswrapper[4826]: I0131 07:38:06.420847 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:06 crc kubenswrapper[4826]: I0131 07:38:06.420859 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:06Z","lastTransitionTime":"2026-01-31T07:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:38:06 crc kubenswrapper[4826]: I0131 07:38:06.523550 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:06 crc kubenswrapper[4826]: I0131 07:38:06.523605 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:06 crc kubenswrapper[4826]: I0131 07:38:06.523622 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:06 crc kubenswrapper[4826]: I0131 07:38:06.523644 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:06 crc kubenswrapper[4826]: I0131 07:38:06.523660 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:06Z","lastTransitionTime":"2026-01-31T07:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:38:06 crc kubenswrapper[4826]: I0131 07:38:06.626497 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:06 crc kubenswrapper[4826]: I0131 07:38:06.626553 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:06 crc kubenswrapper[4826]: I0131 07:38:06.626572 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:06 crc kubenswrapper[4826]: I0131 07:38:06.626592 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:06 crc kubenswrapper[4826]: I0131 07:38:06.626605 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:06Z","lastTransitionTime":"2026-01-31T07:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:38:06 crc kubenswrapper[4826]: I0131 07:38:06.729690 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:06 crc kubenswrapper[4826]: I0131 07:38:06.729747 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:06 crc kubenswrapper[4826]: I0131 07:38:06.729763 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:06 crc kubenswrapper[4826]: I0131 07:38:06.729786 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:06 crc kubenswrapper[4826]: I0131 07:38:06.729805 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:06Z","lastTransitionTime":"2026-01-31T07:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:38:06 crc kubenswrapper[4826]: I0131 07:38:06.808320 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:38:06 crc kubenswrapper[4826]: E0131 07:38:06.808496 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:38:06 crc kubenswrapper[4826]: I0131 07:38:06.808773 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:38:06 crc kubenswrapper[4826]: E0131 07:38:06.808903 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:38:06 crc kubenswrapper[4826]: I0131 07:38:06.830334 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 18:10:59.663804378 +0000 UTC Jan 31 07:38:06 crc kubenswrapper[4826]: I0131 07:38:06.832160 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:06 crc kubenswrapper[4826]: I0131 07:38:06.832197 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:06 crc kubenswrapper[4826]: I0131 07:38:06.832208 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:06 crc kubenswrapper[4826]: I0131 07:38:06.832225 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:06 crc kubenswrapper[4826]: I0131 07:38:06.832238 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:06Z","lastTransitionTime":"2026-01-31T07:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:38:06 crc kubenswrapper[4826]: I0131 07:38:06.936012 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:06 crc kubenswrapper[4826]: I0131 07:38:06.936106 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:06 crc kubenswrapper[4826]: I0131 07:38:06.936132 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:06 crc kubenswrapper[4826]: I0131 07:38:06.936164 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:06 crc kubenswrapper[4826]: I0131 07:38:06.936188 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:06Z","lastTransitionTime":"2026-01-31T07:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.039371 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.039430 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.039446 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.039467 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.039482 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:07Z","lastTransitionTime":"2026-01-31T07:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.142546 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.142727 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.142761 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.142780 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.142791 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:07Z","lastTransitionTime":"2026-01-31T07:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.245815 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.245905 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.245923 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.245949 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.246002 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:07Z","lastTransitionTime":"2026-01-31T07:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.351227 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.351286 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.351304 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.351331 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.351347 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:07Z","lastTransitionTime":"2026-01-31T07:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.455589 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.455676 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.455701 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.455738 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.455763 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:07Z","lastTransitionTime":"2026-01-31T07:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.559237 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.559313 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.559332 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.559362 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.559383 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:07Z","lastTransitionTime":"2026-01-31T07:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.577094 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.577155 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.577180 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.577208 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.577226 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T07:38:07Z","lastTransitionTime":"2026-01-31T07:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.649092 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-dpklk"] Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.650713 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dpklk" Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.655074 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.655304 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.655477 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.658545 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.733673 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e908068-9838-4185-975b-b705fa9a9e3a-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-dpklk\" (UID: \"2e908068-9838-4185-975b-b705fa9a9e3a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dpklk" Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.733778 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2e908068-9838-4185-975b-b705fa9a9e3a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-dpklk\" (UID: \"2e908068-9838-4185-975b-b705fa9a9e3a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dpklk" Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.733855 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e908068-9838-4185-975b-b705fa9a9e3a-kube-api-access\") pod 
\"cluster-version-operator-5c965bbfc6-dpklk\" (UID: \"2e908068-9838-4185-975b-b705fa9a9e3a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dpklk" Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.733949 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2e908068-9838-4185-975b-b705fa9a9e3a-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-dpklk\" (UID: \"2e908068-9838-4185-975b-b705fa9a9e3a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dpklk" Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.734176 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/2e908068-9838-4185-975b-b705fa9a9e3a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-dpklk\" (UID: \"2e908068-9838-4185-975b-b705fa9a9e3a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dpklk" Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.808804 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.808948 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:38:07 crc kubenswrapper[4826]: E0131 07:38:07.809074 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrw7j" podUID="251ad51e-c383-4684-bfdb-2b9ce8098cc6" Jan 31 07:38:07 crc kubenswrapper[4826]: E0131 07:38:07.809245 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.831261 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 13:22:09.913009455 +0000 UTC Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.831331 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.835067 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2e908068-9838-4185-975b-b705fa9a9e3a-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-dpklk\" (UID: \"2e908068-9838-4185-975b-b705fa9a9e3a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dpklk" Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.835179 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/2e908068-9838-4185-975b-b705fa9a9e3a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-dpklk\" (UID: \"2e908068-9838-4185-975b-b705fa9a9e3a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dpklk" Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.835227 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e908068-9838-4185-975b-b705fa9a9e3a-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-dpklk\" (UID: \"2e908068-9838-4185-975b-b705fa9a9e3a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dpklk" Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.835262 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2e908068-9838-4185-975b-b705fa9a9e3a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-dpklk\" (UID: \"2e908068-9838-4185-975b-b705fa9a9e3a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dpklk" Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.835309 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e908068-9838-4185-975b-b705fa9a9e3a-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-dpklk\" (UID: \"2e908068-9838-4185-975b-b705fa9a9e3a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dpklk" Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.835964 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/2e908068-9838-4185-975b-b705fa9a9e3a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-dpklk\" (UID: \"2e908068-9838-4185-975b-b705fa9a9e3a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dpklk" Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.836079 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2e908068-9838-4185-975b-b705fa9a9e3a-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-dpklk\" (UID: \"2e908068-9838-4185-975b-b705fa9a9e3a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dpklk" Jan 31 07:38:07 crc 
kubenswrapper[4826]: I0131 07:38:07.838186 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2e908068-9838-4185-975b-b705fa9a9e3a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-dpklk\" (UID: \"2e908068-9838-4185-975b-b705fa9a9e3a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dpklk" Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.844379 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e908068-9838-4185-975b-b705fa9a9e3a-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-dpklk\" (UID: \"2e908068-9838-4185-975b-b705fa9a9e3a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dpklk" Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.845321 4826 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.858276 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e908068-9838-4185-975b-b705fa9a9e3a-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-dpklk\" (UID: \"2e908068-9838-4185-975b-b705fa9a9e3a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dpklk" Jan 31 07:38:07 crc kubenswrapper[4826]: I0131 07:38:07.979720 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dpklk" Jan 31 07:38:08 crc kubenswrapper[4826]: I0131 07:38:08.488344 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dpklk" event={"ID":"2e908068-9838-4185-975b-b705fa9a9e3a","Type":"ContainerStarted","Data":"131cc9d6d4953faaf87747702dd70d75e2cc1b8aa3f414b6189d9d807fe8b388"} Jan 31 07:38:08 crc kubenswrapper[4826]: I0131 07:38:08.488445 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dpklk" event={"ID":"2e908068-9838-4185-975b-b705fa9a9e3a","Type":"ContainerStarted","Data":"3e7750c3c581f934c16e442ccc788c0e489dc49b26d39c82392a65e03f2d2e9d"} Jan 31 07:38:08 crc kubenswrapper[4826]: I0131 07:38:08.514229 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dpklk" podStartSLOduration=99.514202662 podStartE2EDuration="1m39.514202662s" podCreationTimestamp="2026-01-31 07:36:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:38:08.509820846 +0000 UTC m=+120.363707245" watchObservedRunningTime="2026-01-31 07:38:08.514202662 +0000 UTC m=+120.368089021" Jan 31 07:38:08 crc kubenswrapper[4826]: E0131 07:38:08.801900 4826 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 31 07:38:08 crc kubenswrapper[4826]: I0131 07:38:08.809215 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:38:08 crc kubenswrapper[4826]: E0131 07:38:08.811568 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:38:08 crc kubenswrapper[4826]: I0131 07:38:08.811622 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:38:08 crc kubenswrapper[4826]: E0131 07:38:08.811868 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:38:08 crc kubenswrapper[4826]: E0131 07:38:08.912005 4826 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 31 07:38:09 crc kubenswrapper[4826]: I0131 07:38:09.808786 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:38:09 crc kubenswrapper[4826]: I0131 07:38:09.808810 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:38:09 crc kubenswrapper[4826]: E0131 07:38:09.808951 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrw7j" podUID="251ad51e-c383-4684-bfdb-2b9ce8098cc6" Jan 31 07:38:09 crc kubenswrapper[4826]: E0131 07:38:09.809053 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:38:10 crc kubenswrapper[4826]: I0131 07:38:10.808395 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:38:10 crc kubenswrapper[4826]: I0131 07:38:10.808415 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:38:10 crc kubenswrapper[4826]: E0131 07:38:10.808629 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:38:10 crc kubenswrapper[4826]: E0131 07:38:10.808691 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:38:11 crc kubenswrapper[4826]: I0131 07:38:11.808458 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:38:11 crc kubenswrapper[4826]: I0131 07:38:11.808473 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:38:11 crc kubenswrapper[4826]: E0131 07:38:11.808758 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:38:11 crc kubenswrapper[4826]: E0131 07:38:11.808873 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrw7j" podUID="251ad51e-c383-4684-bfdb-2b9ce8098cc6" Jan 31 07:38:12 crc kubenswrapper[4826]: I0131 07:38:12.808350 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:38:12 crc kubenswrapper[4826]: I0131 07:38:12.808566 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:38:12 crc kubenswrapper[4826]: E0131 07:38:12.808637 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:38:12 crc kubenswrapper[4826]: E0131 07:38:12.808829 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:38:13 crc kubenswrapper[4826]: I0131 07:38:13.808142 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:38:13 crc kubenswrapper[4826]: I0131 07:38:13.808233 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:38:13 crc kubenswrapper[4826]: E0131 07:38:13.808343 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrw7j" podUID="251ad51e-c383-4684-bfdb-2b9ce8098cc6" Jan 31 07:38:13 crc kubenswrapper[4826]: E0131 07:38:13.808474 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:38:13 crc kubenswrapper[4826]: E0131 07:38:13.913327 4826 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 31 07:38:14 crc kubenswrapper[4826]: I0131 07:38:14.809370 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:38:14 crc kubenswrapper[4826]: I0131 07:38:14.809380 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:38:14 crc kubenswrapper[4826]: E0131 07:38:14.809644 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:38:14 crc kubenswrapper[4826]: I0131 07:38:14.809395 4826 scope.go:117] "RemoveContainer" containerID="d893aaee284ab112e56f303b51894196d333364ced205c5f2b1d55f1c21b5f33" Jan 31 07:38:14 crc kubenswrapper[4826]: E0131 07:38:14.809725 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:38:15 crc kubenswrapper[4826]: I0131 07:38:15.521936 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qvwnb_5db04412-b62f-417c-91ac-776767d6102f/ovnkube-controller/3.log" Jan 31 07:38:15 crc kubenswrapper[4826]: I0131 07:38:15.525921 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" event={"ID":"5db04412-b62f-417c-91ac-776767d6102f","Type":"ContainerStarted","Data":"b577d5f45da60ff25d1b43d36415de2516a94565e631bca63635001220e0bdf3"} Jan 31 07:38:15 crc kubenswrapper[4826]: I0131 07:38:15.526496 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:38:15 crc kubenswrapper[4826]: I0131 07:38:15.564346 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" podStartSLOduration=105.564321042 podStartE2EDuration="1m45.564321042s" podCreationTimestamp="2026-01-31 07:36:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:38:15.563940751 +0000 UTC m=+127.417827160" watchObservedRunningTime="2026-01-31 07:38:15.564321042 +0000 UTC m=+127.418207401" Jan 31 07:38:15 crc kubenswrapper[4826]: I0131 07:38:15.704582 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-qrw7j"] Jan 31 07:38:15 crc kubenswrapper[4826]: I0131 07:38:15.704813 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:38:15 crc kubenswrapper[4826]: E0131 07:38:15.705047 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrw7j" podUID="251ad51e-c383-4684-bfdb-2b9ce8098cc6" Jan 31 07:38:15 crc kubenswrapper[4826]: I0131 07:38:15.808205 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:38:15 crc kubenswrapper[4826]: E0131 07:38:15.808413 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:38:16 crc kubenswrapper[4826]: I0131 07:38:16.808331 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:38:16 crc kubenswrapper[4826]: I0131 07:38:16.808406 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:38:16 crc kubenswrapper[4826]: E0131 07:38:16.808630 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:38:16 crc kubenswrapper[4826]: E0131 07:38:16.808784 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:38:17 crc kubenswrapper[4826]: I0131 07:38:17.808426 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:38:17 crc kubenswrapper[4826]: I0131 07:38:17.808501 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:38:17 crc kubenswrapper[4826]: E0131 07:38:17.808840 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:38:17 crc kubenswrapper[4826]: E0131 07:38:17.809065 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrw7j" podUID="251ad51e-c383-4684-bfdb-2b9ce8098cc6" Jan 31 07:38:18 crc kubenswrapper[4826]: I0131 07:38:18.808861 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:38:18 crc kubenswrapper[4826]: I0131 07:38:18.808905 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:38:18 crc kubenswrapper[4826]: I0131 07:38:18.810126 4826 scope.go:117] "RemoveContainer" containerID="0f290a2cfcec4d14d2b2c21f7c56558675b8b7171bf9d1895d89f632cb35b815" Jan 31 07:38:18 crc kubenswrapper[4826]: E0131 07:38:18.810134 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:38:18 crc kubenswrapper[4826]: E0131 07:38:18.810411 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:38:18 crc kubenswrapper[4826]: E0131 07:38:18.914160 4826 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 31 07:38:19 crc kubenswrapper[4826]: I0131 07:38:19.545560 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wtbb9_b672fd90-a70c-4f27-b711-e58f269efccd/kube-multus/1.log" Jan 31 07:38:19 crc kubenswrapper[4826]: I0131 07:38:19.545645 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wtbb9" event={"ID":"b672fd90-a70c-4f27-b711-e58f269efccd","Type":"ContainerStarted","Data":"382e0d20cd9d87cb099d8ef02da0a340a24566c6c856081d606e0ba9fa917d2c"} Jan 31 07:38:19 crc kubenswrapper[4826]: I0131 07:38:19.580101 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-wtbb9" podStartSLOduration=110.58007815 podStartE2EDuration="1m50.58007815s" podCreationTimestamp="2026-01-31 07:36:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:37:59.211326963 +0000 UTC m=+111.065213322" watchObservedRunningTime="2026-01-31 07:38:19.58007815 +0000 UTC m=+131.433964549" Jan 31 07:38:19 crc kubenswrapper[4826]: I0131 07:38:19.808184 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:38:19 crc kubenswrapper[4826]: I0131 07:38:19.808323 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:38:19 crc kubenswrapper[4826]: E0131 07:38:19.808365 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:38:19 crc kubenswrapper[4826]: E0131 07:38:19.808614 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrw7j" podUID="251ad51e-c383-4684-bfdb-2b9ce8098cc6" Jan 31 07:38:20 crc kubenswrapper[4826]: I0131 07:38:20.808894 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:38:20 crc kubenswrapper[4826]: I0131 07:38:20.808953 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:38:20 crc kubenswrapper[4826]: E0131 07:38:20.809118 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:38:20 crc kubenswrapper[4826]: E0131 07:38:20.809242 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:38:21 crc kubenswrapper[4826]: I0131 07:38:21.807824 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:38:21 crc kubenswrapper[4826]: I0131 07:38:21.807905 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:38:21 crc kubenswrapper[4826]: E0131 07:38:21.807954 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrw7j" podUID="251ad51e-c383-4684-bfdb-2b9ce8098cc6" Jan 31 07:38:21 crc kubenswrapper[4826]: E0131 07:38:21.808116 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:38:22 crc kubenswrapper[4826]: I0131 07:38:22.808053 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:38:22 crc kubenswrapper[4826]: I0131 07:38:22.808100 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:38:22 crc kubenswrapper[4826]: E0131 07:38:22.808257 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 07:38:22 crc kubenswrapper[4826]: E0131 07:38:22.808425 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 07:38:23 crc kubenswrapper[4826]: I0131 07:38:23.808878 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:38:23 crc kubenswrapper[4826]: I0131 07:38:23.808921 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:38:23 crc kubenswrapper[4826]: E0131 07:38:23.809784 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrw7j" podUID="251ad51e-c383-4684-bfdb-2b9ce8098cc6" Jan 31 07:38:23 crc kubenswrapper[4826]: E0131 07:38:23.809856 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 07:38:24 crc kubenswrapper[4826]: I0131 07:38:24.807954 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:38:24 crc kubenswrapper[4826]: I0131 07:38:24.808020 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:38:24 crc kubenswrapper[4826]: I0131 07:38:24.810821 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 31 07:38:24 crc kubenswrapper[4826]: I0131 07:38:24.812342 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 31 07:38:25 crc kubenswrapper[4826]: I0131 07:38:25.808513 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:38:25 crc kubenswrapper[4826]: I0131 07:38:25.808863 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:38:25 crc kubenswrapper[4826]: I0131 07:38:25.811581 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 31 07:38:25 crc kubenswrapper[4826]: I0131 07:38:25.811642 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 31 07:38:25 crc kubenswrapper[4826]: I0131 07:38:25.813488 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 31 07:38:25 crc kubenswrapper[4826]: I0131 07:38:25.816997 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 31 07:38:27 crc kubenswrapper[4826]: I0131 07:38:27.982553 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.034421 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9x5mr"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.035330 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9x5mr" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.041603 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-v9kf9"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.042239 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-v9kf9" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.044008 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.046041 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvkpm"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.046762 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvkpm" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.048601 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.049268 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 07:38:28 crc kubenswrapper[4826]: W0131 07:38:28.049295 4826 reflector.go:561] object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj": failed to list *v1.Secret: secrets "authentication-operator-dockercfg-mz9bj" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object Jan 31 07:38:28 crc kubenswrapper[4826]: E0131 07:38:28.049364 4826 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-mz9bj\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"authentication-operator-dockercfg-mz9bj\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 07:38:28 crc kubenswrapper[4826]: W0131 07:38:28.049468 4826 reflector.go:561] object-"openshift-authentication-operator"/"trusted-ca-bundle": failed to list *v1.ConfigMap: configmaps "trusted-ca-bundle" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object Jan 31 07:38:28 crc kubenswrapper[4826]: E0131 07:38:28.049499 4826 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"trusted-ca-bundle\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 07:38:28 crc kubenswrapper[4826]: W0131 07:38:28.049599 4826 reflector.go:561] object-"openshift-authentication-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object Jan 31 07:38:28 crc kubenswrapper[4826]: W0131 07:38:28.049634 4826 reflector.go:561] object-"openshift-authentication-operator"/"service-ca-bundle": failed to list *v1.ConfigMap: configmaps "service-ca-bundle" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object Jan 31 07:38:28 crc kubenswrapper[4826]: E0131 07:38:28.049700 4826 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"service-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"service-ca-bundle\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace 
\"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 07:38:28 crc kubenswrapper[4826]: E0131 07:38:28.049633 4826 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.049757 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.054820 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-ngf5x"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.056145 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-ngf5x" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.063728 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.064302 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.064382 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.064510 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.065915 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.066058 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-9qt4f"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.066438 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.067207 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9qt4f" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.067352 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.067537 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g94hs"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.067858 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.068485 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g94hs" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.070175 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.070412 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.070626 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.070827 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.071178 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.073046 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-xfxnd"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.073838 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xfxnd" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.074699 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-fbprj"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.075738 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-fbprj" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.083478 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.083549 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.083913 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.086207 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-cgwgw"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.086994 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.090139 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8vz7r"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.090775 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w9vbd"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.091429 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-bp2zc"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.091447 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.091832 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.092144 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.092487 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.092802 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.092876 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8vz7r" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.107161 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-bp2zc" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.108043 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w9vbd" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.113233 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.113701 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.115184 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.115402 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.117358 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.118453 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-2w6c4"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.123443 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.123897 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-nk42r"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.124148 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.124815 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.125163 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.125430 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.125620 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.125957 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2w6c4" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.126027 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.126050 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.126506 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.126682 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.132483 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.156502 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h49wc"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.157374 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h49wc" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.157823 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.158094 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nk42r" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.158113 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.158359 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.159729 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.160586 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.160748 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.160874 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.161038 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.161180 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.161325 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 31 07:38:28 crc 
kubenswrapper[4826]: I0131 07:38:28.161452 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.161592 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.161721 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.161847 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.161959 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.162242 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.162362 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.162478 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.162566 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.162738 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.162822 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.164594 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.164868 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.166052 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.166232 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-khrtn"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.166609 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.166785 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.166877 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.166961 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 31 
07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.166986 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-khrtn" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.167079 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.167576 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.167692 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.167786 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.167897 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.172077 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.173542 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-7bmfc"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.174127 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-7bmfc" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.174205 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kmkrm"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.174931 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.176144 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-5ns9w"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.176955 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-5ns9w" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.185068 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fk5r\" (UniqueName: \"kubernetes.io/projected/84e4d149-e92b-4d41-8fdd-0c831d554a94-kube-api-access-2fk5r\") pod \"cluster-samples-operator-665b6dd947-w9vbd\" (UID: \"84e4d149-e92b-4d41-8fdd-0c831d554a94\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w9vbd" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.185118 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2b64f74-7aa6-459c-be6d-f6f8966b456f-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-g94hs\" (UID: \"f2b64f74-7aa6-459c-be6d-f6f8966b456f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g94hs" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.185141 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7d6dffb0-9e73-4f02-ab48-f04555936387-machine-approver-tls\") pod \"machine-approver-56656f9798-xfxnd\" (UID: \"7d6dffb0-9e73-4f02-ab48-f04555936387\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xfxnd" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.185162 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/69a0b5c3-55a0-4fe5-866d-660e663e5112-audit-policies\") pod \"apiserver-7bbb656c7d-9qt4f\" (UID: \"69a0b5c3-55a0-4fe5-866d-660e663e5112\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9qt4f" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.185205 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/73f5be70-5ac7-4585-b5a3-e75c5f766822-node-pullsecrets\") pod \"apiserver-76f77b778f-fbprj\" (UID: \"73f5be70-5ac7-4585-b5a3-e75c5f766822\") " pod="openshift-apiserver/apiserver-76f77b778f-fbprj" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.185224 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dst4\" (UniqueName: \"kubernetes.io/projected/138c4519-aea9-40d6-a633-afe9fe199a6d-kube-api-access-9dst4\") pod \"route-controller-manager-6576b87f9c-xvkpm\" (UID: \"138c4519-aea9-40d6-a633-afe9fe199a6d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvkpm" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.185240 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9c2l\" (UniqueName: \"kubernetes.io/projected/e71acdff-33d6-4052-907b-8e38ac391f58-kube-api-access-t9c2l\") pod \"downloads-7954f5f757-bp2zc\" (UID: \"e71acdff-33d6-4052-907b-8e38ac391f58\") " pod="openshift-console/downloads-7954f5f757-bp2zc" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.185261 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bab94136-ad0f-4379-a76a-5acb66335175-config\") pod \"authentication-operator-69f744f599-v9kf9\" (UID: 
\"bab94136-ad0f-4379-a76a-5acb66335175\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v9kf9" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.185283 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bab94136-ad0f-4379-a76a-5acb66335175-service-ca-bundle\") pod \"authentication-operator-69f744f599-v9kf9\" (UID: \"bab94136-ad0f-4379-a76a-5acb66335175\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v9kf9" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.185303 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhsnj\" (UniqueName: \"kubernetes.io/projected/bab94136-ad0f-4379-a76a-5acb66335175-kube-api-access-lhsnj\") pod \"authentication-operator-69f744f599-v9kf9\" (UID: \"bab94136-ad0f-4379-a76a-5acb66335175\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v9kf9" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.185327 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-cgwgw\" (UID: \"f8a24898-167c-483a-9d54-7412fb063199\") " pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.185351 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/138c4519-aea9-40d6-a633-afe9fe199a6d-serving-cert\") pod \"route-controller-manager-6576b87f9c-xvkpm\" (UID: \"138c4519-aea9-40d6-a633-afe9fe199a6d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvkpm" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.185371 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/73f5be70-5ac7-4585-b5a3-e75c5f766822-audit\") pod \"apiserver-76f77b778f-fbprj\" (UID: \"73f5be70-5ac7-4585-b5a3-e75c5f766822\") " pod="openshift-apiserver/apiserver-76f77b778f-fbprj" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.185392 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-cgwgw\" (UID: \"f8a24898-167c-483a-9d54-7412fb063199\") " pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.185411 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/73f5be70-5ac7-4585-b5a3-e75c5f766822-audit-dir\") pod \"apiserver-76f77b778f-fbprj\" (UID: \"73f5be70-5ac7-4585-b5a3-e75c5f766822\") " pod="openshift-apiserver/apiserver-76f77b778f-fbprj" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.185432 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-system-trusted-ca-bundle\") pod 
\"oauth-openshift-558db77b4-cgwgw\" (UID: \"f8a24898-167c-483a-9d54-7412fb063199\") " pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.185455 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69a0b5c3-55a0-4fe5-866d-660e663e5112-serving-cert\") pod \"apiserver-7bbb656c7d-9qt4f\" (UID: \"69a0b5c3-55a0-4fe5-866d-660e663e5112\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9qt4f" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.185474 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ba35eb2-6b7b-45bc-827a-b7c3b5266073-serving-cert\") pod \"controller-manager-879f6c89f-9x5mr\" (UID: \"2ba35eb2-6b7b-45bc-827a-b7c3b5266073\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9x5mr" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.185496 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13fd6a2b-6076-4bd6-8bc2-466b802bdde4-config\") pod \"machine-api-operator-5694c8668f-ngf5x\" (UID: \"13fd6a2b-6076-4bd6-8bc2-466b802bdde4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ngf5x" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.185520 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2v2t\" (UniqueName: \"kubernetes.io/projected/f2b64f74-7aa6-459c-be6d-f6f8966b456f-kube-api-access-p2v2t\") pod \"openshift-apiserver-operator-796bbdcf4f-g94hs\" (UID: \"f2b64f74-7aa6-459c-be6d-f6f8966b456f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g94hs" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.185540 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bab94136-ad0f-4379-a76a-5acb66335175-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-v9kf9\" (UID: \"bab94136-ad0f-4379-a76a-5acb66335175\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v9kf9" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.185560 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/73f5be70-5ac7-4585-b5a3-e75c5f766822-encryption-config\") pod \"apiserver-76f77b778f-fbprj\" (UID: \"73f5be70-5ac7-4585-b5a3-e75c5f766822\") " pod="openshift-apiserver/apiserver-76f77b778f-fbprj" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.185578 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2ba35eb2-6b7b-45bc-827a-b7c3b5266073-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-9x5mr\" (UID: \"2ba35eb2-6b7b-45bc-827a-b7c3b5266073\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9x5mr" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.185596 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/69a0b5c3-55a0-4fe5-866d-660e663e5112-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-9qt4f\" (UID: 
\"69a0b5c3-55a0-4fe5-866d-660e663e5112\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9qt4f" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.185615 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4566v\" (UniqueName: \"kubernetes.io/projected/7d6dffb0-9e73-4f02-ab48-f04555936387-kube-api-access-4566v\") pod \"machine-approver-56656f9798-xfxnd\" (UID: \"7d6dffb0-9e73-4f02-ab48-f04555936387\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xfxnd" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.185631 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69a0b5c3-55a0-4fe5-866d-660e663e5112-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-9qt4f\" (UID: \"69a0b5c3-55a0-4fe5-866d-660e663e5112\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9qt4f" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.185650 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73f5be70-5ac7-4585-b5a3-e75c5f766822-serving-cert\") pod \"apiserver-76f77b778f-fbprj\" (UID: \"73f5be70-5ac7-4585-b5a3-e75c5f766822\") " pod="openshift-apiserver/apiserver-76f77b778f-fbprj" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.185670 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j8n2\" (UniqueName: \"kubernetes.io/projected/f8a24898-167c-483a-9d54-7412fb063199-kube-api-access-4j8n2\") pod \"oauth-openshift-558db77b4-cgwgw\" (UID: \"f8a24898-167c-483a-9d54-7412fb063199\") " pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.185693 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7d6dffb0-9e73-4f02-ab48-f04555936387-auth-proxy-config\") pod \"machine-approver-56656f9798-xfxnd\" (UID: \"7d6dffb0-9e73-4f02-ab48-f04555936387\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xfxnd" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.185715 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bab94136-ad0f-4379-a76a-5acb66335175-serving-cert\") pod \"authentication-operator-69f744f599-v9kf9\" (UID: \"bab94136-ad0f-4379-a76a-5acb66335175\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v9kf9" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.185736 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/73f5be70-5ac7-4585-b5a3-e75c5f766822-image-import-ca\") pod \"apiserver-76f77b778f-fbprj\" (UID: \"73f5be70-5ac7-4585-b5a3-e75c5f766822\") " pod="openshift-apiserver/apiserver-76f77b778f-fbprj" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.185760 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f8a24898-167c-483a-9d54-7412fb063199-audit-policies\") pod \"oauth-openshift-558db77b4-cgwgw\" (UID: \"f8a24898-167c-483a-9d54-7412fb063199\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.185786 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-cgwgw\" (UID: \"f8a24898-167c-483a-9d54-7412fb063199\") " pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.185808 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/84e4d149-e92b-4d41-8fdd-0c831d554a94-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-w9vbd\" (UID: \"84e4d149-e92b-4d41-8fdd-0c831d554a94\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w9vbd" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.185829 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73f5be70-5ac7-4585-b5a3-e75c5f766822-trusted-ca-bundle\") pod \"apiserver-76f77b778f-fbprj\" (UID: \"73f5be70-5ac7-4585-b5a3-e75c5f766822\") " pod="openshift-apiserver/apiserver-76f77b778f-fbprj" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.185850 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/138c4519-aea9-40d6-a633-afe9fe199a6d-config\") pod \"route-controller-manager-6576b87f9c-xvkpm\" (UID: \"138c4519-aea9-40d6-a633-afe9fe199a6d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvkpm" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.185955 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2b64f74-7aa6-459c-be6d-f6f8966b456f-config\") pod \"openshift-apiserver-operator-796bbdcf4f-g94hs\" (UID: \"f2b64f74-7aa6-459c-be6d-f6f8966b456f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g94hs" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.186008 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/13fd6a2b-6076-4bd6-8bc2-466b802bdde4-images\") pod \"machine-api-operator-5694c8668f-ngf5x\" (UID: \"13fd6a2b-6076-4bd6-8bc2-466b802bdde4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ngf5x" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.186032 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/13fd6a2b-6076-4bd6-8bc2-466b802bdde4-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-ngf5x\" (UID: \"13fd6a2b-6076-4bd6-8bc2-466b802bdde4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ngf5x" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.186067 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-cgwgw\" (UID: 
\"f8a24898-167c-483a-9d54-7412fb063199\") " pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.186116 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2z8t\" (UniqueName: \"kubernetes.io/projected/73f5be70-5ac7-4585-b5a3-e75c5f766822-kube-api-access-r2z8t\") pod \"apiserver-76f77b778f-fbprj\" (UID: \"73f5be70-5ac7-4585-b5a3-e75c5f766822\") " pod="openshift-apiserver/apiserver-76f77b778f-fbprj" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.186145 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f8a24898-167c-483a-9d54-7412fb063199-audit-dir\") pod \"oauth-openshift-558db77b4-cgwgw\" (UID: \"f8a24898-167c-483a-9d54-7412fb063199\") " pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.186164 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-cgwgw\" (UID: \"f8a24898-167c-483a-9d54-7412fb063199\") " pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.186233 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52nkr\" (UniqueName: \"kubernetes.io/projected/2ba35eb2-6b7b-45bc-827a-b7c3b5266073-kube-api-access-52nkr\") pod \"controller-manager-879f6c89f-9x5mr\" (UID: \"2ba35eb2-6b7b-45bc-827a-b7c3b5266073\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9x5mr" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.186254 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/69a0b5c3-55a0-4fe5-866d-660e663e5112-audit-dir\") pod \"apiserver-7bbb656c7d-9qt4f\" (UID: \"69a0b5c3-55a0-4fe5-866d-660e663e5112\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9qt4f" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.186287 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-cgwgw\" (UID: \"f8a24898-167c-483a-9d54-7412fb063199\") " pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.186320 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-cgwgw\" (UID: \"f8a24898-167c-483a-9d54-7412fb063199\") " pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.186337 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/138c4519-aea9-40d6-a633-afe9fe199a6d-client-ca\") pod \"route-controller-manager-6576b87f9c-xvkpm\" (UID: 
\"138c4519-aea9-40d6-a633-afe9fe199a6d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvkpm" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.186357 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ba35eb2-6b7b-45bc-827a-b7c3b5266073-config\") pod \"controller-manager-879f6c89f-9x5mr\" (UID: \"2ba35eb2-6b7b-45bc-827a-b7c3b5266073\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9x5mr" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.186575 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/73f5be70-5ac7-4585-b5a3-e75c5f766822-etcd-client\") pod \"apiserver-76f77b778f-fbprj\" (UID: \"73f5be70-5ac7-4585-b5a3-e75c5f766822\") " pod="openshift-apiserver/apiserver-76f77b778f-fbprj" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.186591 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/73f5be70-5ac7-4585-b5a3-e75c5f766822-etcd-serving-ca\") pod \"apiserver-76f77b778f-fbprj\" (UID: \"73f5be70-5ac7-4585-b5a3-e75c5f766822\") " pod="openshift-apiserver/apiserver-76f77b778f-fbprj" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.186607 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-cgwgw\" (UID: \"f8a24898-167c-483a-9d54-7412fb063199\") " pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.186628 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-cgwgw\" (UID: \"f8a24898-167c-483a-9d54-7412fb063199\") " pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.186649 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/69a0b5c3-55a0-4fe5-866d-660e663e5112-etcd-client\") pod \"apiserver-7bbb656c7d-9qt4f\" (UID: \"69a0b5c3-55a0-4fe5-866d-660e663e5112\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9qt4f" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.186705 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ec09e55-a5b7-4aed-af1b-d36fc00592d3-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-8vz7r\" (UID: \"0ec09e55-a5b7-4aed-af1b-d36fc00592d3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8vz7r" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.186729 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-cgwgw\" (UID: 
\"f8a24898-167c-483a-9d54-7412fb063199\") " pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.186762 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/69a0b5c3-55a0-4fe5-866d-660e663e5112-encryption-config\") pod \"apiserver-7bbb656c7d-9qt4f\" (UID: \"69a0b5c3-55a0-4fe5-866d-660e663e5112\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9qt4f" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.186792 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f5l9\" (UniqueName: \"kubernetes.io/projected/13fd6a2b-6076-4bd6-8bc2-466b802bdde4-kube-api-access-8f5l9\") pod \"machine-api-operator-5694c8668f-ngf5x\" (UID: \"13fd6a2b-6076-4bd6-8bc2-466b802bdde4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ngf5x" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.186818 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d6dffb0-9e73-4f02-ab48-f04555936387-config\") pod \"machine-approver-56656f9798-xfxnd\" (UID: \"7d6dffb0-9e73-4f02-ab48-f04555936387\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xfxnd" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.186838 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ba35eb2-6b7b-45bc-827a-b7c3b5266073-client-ca\") pod \"controller-manager-879f6c89f-9x5mr\" (UID: \"2ba35eb2-6b7b-45bc-827a-b7c3b5266073\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9x5mr" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.186886 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ec09e55-a5b7-4aed-af1b-d36fc00592d3-config\") pod \"kube-apiserver-operator-766d6c64bb-8vz7r\" (UID: \"0ec09e55-a5b7-4aed-af1b-d36fc00592d3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8vz7r" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.186909 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swpc5\" (UniqueName: \"kubernetes.io/projected/69a0b5c3-55a0-4fe5-866d-660e663e5112-kube-api-access-swpc5\") pod \"apiserver-7bbb656c7d-9qt4f\" (UID: \"69a0b5c3-55a0-4fe5-866d-660e663e5112\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9qt4f" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.186947 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73f5be70-5ac7-4585-b5a3-e75c5f766822-config\") pod \"apiserver-76f77b778f-fbprj\" (UID: \"73f5be70-5ac7-4585-b5a3-e75c5f766822\") " pod="openshift-apiserver/apiserver-76f77b778f-fbprj" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.186980 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ec09e55-a5b7-4aed-af1b-d36fc00592d3-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-8vz7r\" (UID: \"0ec09e55-a5b7-4aed-af1b-d36fc00592d3\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8vz7r" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.192755 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.194138 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.194349 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.198706 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-bl2g9"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.199273 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6tmkk"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.199636 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-hkw8j"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.200031 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-hkw8j" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.200293 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-bl2g9" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.200428 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6tmkk" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.203726 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9x5mr"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.203775 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-zfk49"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.204747 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zfk49" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.206601 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.232667 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.233219 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.235431 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.242252 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-ngf5x"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.245366 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.245908 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.250456 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.250184 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.250993 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.251193 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.251356 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.251537 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.251708 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.251847 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.252021 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.252140 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.252289 4826 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.251238 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.252568 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.252947 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.252154 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-45lkw"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.253444 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.252396 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.253684 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.253457 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.254064 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4mdx8"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.254587 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4mdx8" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.254890 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-45lkw" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.280350 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.280585 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rrdbv"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.281261 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qwd92"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.281591 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-976l6"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.281978 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-976l6" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.282312 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rrdbv" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.282447 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qwd92" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.289647 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/69a0b5c3-55a0-4fe5-866d-660e663e5112-encryption-config\") pod \"apiserver-7bbb656c7d-9qt4f\" (UID: \"69a0b5c3-55a0-4fe5-866d-660e663e5112\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9qt4f" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.289683 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4e48db4e-7a7e-4956-8a51-08cac7cfbf7c-etcd-ca\") pod \"etcd-operator-b45778765-bl2g9\" (UID: \"4e48db4e-7a7e-4956-8a51-08cac7cfbf7c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bl2g9" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.289708 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8f5l9\" (UniqueName: \"kubernetes.io/projected/13fd6a2b-6076-4bd6-8bc2-466b802bdde4-kube-api-access-8f5l9\") pod \"machine-api-operator-5694c8668f-ngf5x\" (UID: \"13fd6a2b-6076-4bd6-8bc2-466b802bdde4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ngf5x" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.289731 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d6dffb0-9e73-4f02-ab48-f04555936387-config\") pod \"machine-approver-56656f9798-xfxnd\" (UID: \"7d6dffb0-9e73-4f02-ab48-f04555936387\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xfxnd" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.289749 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ba35eb2-6b7b-45bc-827a-b7c3b5266073-client-ca\") pod \"controller-manager-879f6c89f-9x5mr\" (UID: \"2ba35eb2-6b7b-45bc-827a-b7c3b5266073\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9x5mr" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.289766 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ec09e55-a5b7-4aed-af1b-d36fc00592d3-config\") pod \"kube-apiserver-operator-766d6c64bb-8vz7r\" (UID: \"0ec09e55-a5b7-4aed-af1b-d36fc00592d3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8vz7r" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.289783 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swpc5\" (UniqueName: \"kubernetes.io/projected/69a0b5c3-55a0-4fe5-866d-660e663e5112-kube-api-access-swpc5\") pod \"apiserver-7bbb656c7d-9qt4f\" (UID: \"69a0b5c3-55a0-4fe5-866d-660e663e5112\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9qt4f" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.289801 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/255c8988-162e-4ae1-982f-e45cde006077-default-certificate\") pod \"router-default-5444994796-7bmfc\" (UID: \"255c8988-162e-4ae1-982f-e45cde006077\") " pod="openshift-ingress/router-default-5444994796-7bmfc" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.289832 4826 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73f5be70-5ac7-4585-b5a3-e75c5f766822-config\") pod \"apiserver-76f77b778f-fbprj\" (UID: \"73f5be70-5ac7-4585-b5a3-e75c5f766822\") " pod="openshift-apiserver/apiserver-76f77b778f-fbprj" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.289851 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ec09e55-a5b7-4aed-af1b-d36fc00592d3-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-8vz7r\" (UID: \"0ec09e55-a5b7-4aed-af1b-d36fc00592d3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8vz7r" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.289869 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7d6dffb0-9e73-4f02-ab48-f04555936387-machine-approver-tls\") pod \"machine-approver-56656f9798-xfxnd\" (UID: \"7d6dffb0-9e73-4f02-ab48-f04555936387\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xfxnd" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.289900 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/69a0b5c3-55a0-4fe5-866d-660e663e5112-audit-policies\") pod \"apiserver-7bbb656c7d-9qt4f\" (UID: \"69a0b5c3-55a0-4fe5-866d-660e663e5112\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9qt4f" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.289918 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fk5r\" (UniqueName: \"kubernetes.io/projected/84e4d149-e92b-4d41-8fdd-0c831d554a94-kube-api-access-2fk5r\") pod \"cluster-samples-operator-665b6dd947-w9vbd\" (UID: \"84e4d149-e92b-4d41-8fdd-0c831d554a94\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w9vbd" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.289941 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2b64f74-7aa6-459c-be6d-f6f8966b456f-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-g94hs\" (UID: \"f2b64f74-7aa6-459c-be6d-f6f8966b456f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g94hs" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.289978 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dst4\" (UniqueName: \"kubernetes.io/projected/138c4519-aea9-40d6-a633-afe9fe199a6d-kube-api-access-9dst4\") pod \"route-controller-manager-6576b87f9c-xvkpm\" (UID: \"138c4519-aea9-40d6-a633-afe9fe199a6d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvkpm" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.290008 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9c2l\" (UniqueName: \"kubernetes.io/projected/e71acdff-33d6-4052-907b-8e38ac391f58-kube-api-access-t9c2l\") pod \"downloads-7954f5f757-bp2zc\" (UID: \"e71acdff-33d6-4052-907b-8e38ac391f58\") " pod="openshift-console/downloads-7954f5f757-bp2zc" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.290043 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/73f5be70-5ac7-4585-b5a3-e75c5f766822-node-pullsecrets\") pod 
\"apiserver-76f77b778f-fbprj\" (UID: \"73f5be70-5ac7-4585-b5a3-e75c5f766822\") " pod="openshift-apiserver/apiserver-76f77b778f-fbprj" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.290061 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhsnj\" (UniqueName: \"kubernetes.io/projected/bab94136-ad0f-4379-a76a-5acb66335175-kube-api-access-lhsnj\") pod \"authentication-operator-69f744f599-v9kf9\" (UID: \"bab94136-ad0f-4379-a76a-5acb66335175\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v9kf9" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.290079 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bab94136-ad0f-4379-a76a-5acb66335175-config\") pod \"authentication-operator-69f744f599-v9kf9\" (UID: \"bab94136-ad0f-4379-a76a-5acb66335175\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v9kf9" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.290112 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bab94136-ad0f-4379-a76a-5acb66335175-service-ca-bundle\") pod \"authentication-operator-69f744f599-v9kf9\" (UID: \"bab94136-ad0f-4379-a76a-5acb66335175\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v9kf9" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.290136 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e8f916b8-cc8c-4ef0-8dad-6b34e15f7d05-bound-sa-token\") pod \"ingress-operator-5b745b69d9-nk42r\" (UID: \"e8f916b8-cc8c-4ef0-8dad-6b34e15f7d05\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nk42r" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.290154 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/255c8988-162e-4ae1-982f-e45cde006077-stats-auth\") pod \"router-default-5444994796-7bmfc\" (UID: \"255c8988-162e-4ae1-982f-e45cde006077\") " pod="openshift-ingress/router-default-5444994796-7bmfc" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.290173 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/138c4519-aea9-40d6-a633-afe9fe199a6d-serving-cert\") pod \"route-controller-manager-6576b87f9c-xvkpm\" (UID: \"138c4519-aea9-40d6-a633-afe9fe199a6d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvkpm" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.290193 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-cgwgw\" (UID: \"f8a24898-167c-483a-9d54-7412fb063199\") " pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.290212 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/73f5be70-5ac7-4585-b5a3-e75c5f766822-audit\") pod \"apiserver-76f77b778f-fbprj\" (UID: \"73f5be70-5ac7-4585-b5a3-e75c5f766822\") " pod="openshift-apiserver/apiserver-76f77b778f-fbprj" Jan 31 
07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.290229 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/255c8988-162e-4ae1-982f-e45cde006077-service-ca-bundle\") pod \"router-default-5444994796-7bmfc\" (UID: \"255c8988-162e-4ae1-982f-e45cde006077\") " pod="openshift-ingress/router-default-5444994796-7bmfc" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.290283 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-cgwgw\" (UID: \"f8a24898-167c-483a-9d54-7412fb063199\") " pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.290302 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e8f916b8-cc8c-4ef0-8dad-6b34e15f7d05-trusted-ca\") pod \"ingress-operator-5b745b69d9-nk42r\" (UID: \"e8f916b8-cc8c-4ef0-8dad-6b34e15f7d05\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nk42r" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.290324 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-cgwgw\" (UID: \"f8a24898-167c-483a-9d54-7412fb063199\") " pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.290345 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69a0b5c3-55a0-4fe5-866d-660e663e5112-serving-cert\") pod \"apiserver-7bbb656c7d-9qt4f\" (UID: \"69a0b5c3-55a0-4fe5-866d-660e663e5112\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9qt4f" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.290362 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/73f5be70-5ac7-4585-b5a3-e75c5f766822-audit-dir\") pod \"apiserver-76f77b778f-fbprj\" (UID: \"73f5be70-5ac7-4585-b5a3-e75c5f766822\") " pod="openshift-apiserver/apiserver-76f77b778f-fbprj" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.290380 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ba35eb2-6b7b-45bc-827a-b7c3b5266073-serving-cert\") pod \"controller-manager-879f6c89f-9x5mr\" (UID: \"2ba35eb2-6b7b-45bc-827a-b7c3b5266073\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9x5mr" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.290401 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e8f916b8-cc8c-4ef0-8dad-6b34e15f7d05-metrics-tls\") pod \"ingress-operator-5b745b69d9-nk42r\" (UID: \"e8f916b8-cc8c-4ef0-8dad-6b34e15f7d05\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nk42r" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.290421 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/bab94136-ad0f-4379-a76a-5acb66335175-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-v9kf9\" (UID: \"bab94136-ad0f-4379-a76a-5acb66335175\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v9kf9" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.290440 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb462\" (UniqueName: \"kubernetes.io/projected/e8f916b8-cc8c-4ef0-8dad-6b34e15f7d05-kube-api-access-fb462\") pod \"ingress-operator-5b745b69d9-nk42r\" (UID: \"e8f916b8-cc8c-4ef0-8dad-6b34e15f7d05\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nk42r" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.290459 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13fd6a2b-6076-4bd6-8bc2-466b802bdde4-config\") pod \"machine-api-operator-5694c8668f-ngf5x\" (UID: \"13fd6a2b-6076-4bd6-8bc2-466b802bdde4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ngf5x" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.290477 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2v2t\" (UniqueName: \"kubernetes.io/projected/f2b64f74-7aa6-459c-be6d-f6f8966b456f-kube-api-access-p2v2t\") pod \"openshift-apiserver-operator-796bbdcf4f-g94hs\" (UID: \"f2b64f74-7aa6-459c-be6d-f6f8966b456f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g94hs" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.290495 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/69a0b5c3-55a0-4fe5-866d-660e663e5112-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-9qt4f\" (UID: \"69a0b5c3-55a0-4fe5-866d-660e663e5112\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9qt4f" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.290511 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/73f5be70-5ac7-4585-b5a3-e75c5f766822-encryption-config\") pod \"apiserver-76f77b778f-fbprj\" (UID: \"73f5be70-5ac7-4585-b5a3-e75c5f766822\") " pod="openshift-apiserver/apiserver-76f77b778f-fbprj" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.290532 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2ba35eb2-6b7b-45bc-827a-b7c3b5266073-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-9x5mr\" (UID: \"2ba35eb2-6b7b-45bc-827a-b7c3b5266073\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9x5mr" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.290550 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4566v\" (UniqueName: \"kubernetes.io/projected/7d6dffb0-9e73-4f02-ab48-f04555936387-kube-api-access-4566v\") pod \"machine-approver-56656f9798-xfxnd\" (UID: \"7d6dffb0-9e73-4f02-ab48-f04555936387\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xfxnd" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.290566 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69a0b5c3-55a0-4fe5-866d-660e663e5112-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-9qt4f\" (UID: 
\"69a0b5c3-55a0-4fe5-866d-660e663e5112\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9qt4f" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.290585 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bab94136-ad0f-4379-a76a-5acb66335175-serving-cert\") pod \"authentication-operator-69f744f599-v9kf9\" (UID: \"bab94136-ad0f-4379-a76a-5acb66335175\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v9kf9" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.290604 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73f5be70-5ac7-4585-b5a3-e75c5f766822-serving-cert\") pod \"apiserver-76f77b778f-fbprj\" (UID: \"73f5be70-5ac7-4585-b5a3-e75c5f766822\") " pod="openshift-apiserver/apiserver-76f77b778f-fbprj" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.290624 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4j8n2\" (UniqueName: \"kubernetes.io/projected/f8a24898-167c-483a-9d54-7412fb063199-kube-api-access-4j8n2\") pod \"oauth-openshift-558db77b4-cgwgw\" (UID: \"f8a24898-167c-483a-9d54-7412fb063199\") " pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.290642 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7d6dffb0-9e73-4f02-ab48-f04555936387-auth-proxy-config\") pod \"machine-approver-56656f9798-xfxnd\" (UID: \"7d6dffb0-9e73-4f02-ab48-f04555936387\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xfxnd" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.290660 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/73f5be70-5ac7-4585-b5a3-e75c5f766822-image-import-ca\") pod \"apiserver-76f77b778f-fbprj\" (UID: \"73f5be70-5ac7-4585-b5a3-e75c5f766822\") " pod="openshift-apiserver/apiserver-76f77b778f-fbprj" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.290679 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f8a24898-167c-483a-9d54-7412fb063199-audit-policies\") pod \"oauth-openshift-558db77b4-cgwgw\" (UID: \"f8a24898-167c-483a-9d54-7412fb063199\") " pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.290703 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-cgwgw\" (UID: \"f8a24898-167c-483a-9d54-7412fb063199\") " pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.290721 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/84e4d149-e92b-4d41-8fdd-0c831d554a94-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-w9vbd\" (UID: \"84e4d149-e92b-4d41-8fdd-0c831d554a94\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w9vbd" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.290739 
4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73f5be70-5ac7-4585-b5a3-e75c5f766822-trusted-ca-bundle\") pod \"apiserver-76f77b778f-fbprj\" (UID: \"73f5be70-5ac7-4585-b5a3-e75c5f766822\") " pod="openshift-apiserver/apiserver-76f77b778f-fbprj" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.290756 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/138c4519-aea9-40d6-a633-afe9fe199a6d-config\") pod \"route-controller-manager-6576b87f9c-xvkpm\" (UID: \"138c4519-aea9-40d6-a633-afe9fe199a6d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvkpm" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.290773 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2b64f74-7aa6-459c-be6d-f6f8966b456f-config\") pod \"openshift-apiserver-operator-796bbdcf4f-g94hs\" (UID: \"f2b64f74-7aa6-459c-be6d-f6f8966b456f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g94hs" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.290795 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lqrq\" (UniqueName: \"kubernetes.io/projected/4e48db4e-7a7e-4956-8a51-08cac7cfbf7c-kube-api-access-2lqrq\") pod \"etcd-operator-b45778765-bl2g9\" (UID: \"4e48db4e-7a7e-4956-8a51-08cac7cfbf7c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bl2g9" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.290814 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/255c8988-162e-4ae1-982f-e45cde006077-metrics-certs\") pod \"router-default-5444994796-7bmfc\" (UID: \"255c8988-162e-4ae1-982f-e45cde006077\") " pod="openshift-ingress/router-default-5444994796-7bmfc" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.290832 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-cgwgw\" (UID: \"f8a24898-167c-483a-9d54-7412fb063199\") " pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.290848 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e48db4e-7a7e-4956-8a51-08cac7cfbf7c-serving-cert\") pod \"etcd-operator-b45778765-bl2g9\" (UID: \"4e48db4e-7a7e-4956-8a51-08cac7cfbf7c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bl2g9" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.290864 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/13fd6a2b-6076-4bd6-8bc2-466b802bdde4-images\") pod \"machine-api-operator-5694c8668f-ngf5x\" (UID: \"13fd6a2b-6076-4bd6-8bc2-466b802bdde4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ngf5x" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.290880 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/13fd6a2b-6076-4bd6-8bc2-466b802bdde4-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-ngf5x\" (UID: \"13fd6a2b-6076-4bd6-8bc2-466b802bdde4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ngf5x" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.290896 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4e48db4e-7a7e-4956-8a51-08cac7cfbf7c-etcd-client\") pod \"etcd-operator-b45778765-bl2g9\" (UID: \"4e48db4e-7a7e-4956-8a51-08cac7cfbf7c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bl2g9" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.290924 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2z8t\" (UniqueName: \"kubernetes.io/projected/73f5be70-5ac7-4585-b5a3-e75c5f766822-kube-api-access-r2z8t\") pod \"apiserver-76f77b778f-fbprj\" (UID: \"73f5be70-5ac7-4585-b5a3-e75c5f766822\") " pod="openshift-apiserver/apiserver-76f77b778f-fbprj" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.290942 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f8a24898-167c-483a-9d54-7412fb063199-audit-dir\") pod \"oauth-openshift-558db77b4-cgwgw\" (UID: \"f8a24898-167c-483a-9d54-7412fb063199\") " pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.290975 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-cgwgw\" (UID: \"f8a24898-167c-483a-9d54-7412fb063199\") " pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.290995 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4e48db4e-7a7e-4956-8a51-08cac7cfbf7c-etcd-service-ca\") pod \"etcd-operator-b45778765-bl2g9\" (UID: \"4e48db4e-7a7e-4956-8a51-08cac7cfbf7c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bl2g9" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.291022 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e48db4e-7a7e-4956-8a51-08cac7cfbf7c-config\") pod \"etcd-operator-b45778765-bl2g9\" (UID: \"4e48db4e-7a7e-4956-8a51-08cac7cfbf7c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bl2g9" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.291042 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52nkr\" (UniqueName: \"kubernetes.io/projected/2ba35eb2-6b7b-45bc-827a-b7c3b5266073-kube-api-access-52nkr\") pod \"controller-manager-879f6c89f-9x5mr\" (UID: \"2ba35eb2-6b7b-45bc-827a-b7c3b5266073\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9x5mr" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.291059 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/69a0b5c3-55a0-4fe5-866d-660e663e5112-audit-dir\") pod \"apiserver-7bbb656c7d-9qt4f\" (UID: \"69a0b5c3-55a0-4fe5-866d-660e663e5112\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9qt4f" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.291077 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkdgn\" (UniqueName: \"kubernetes.io/projected/255c8988-162e-4ae1-982f-e45cde006077-kube-api-access-wkdgn\") pod \"router-default-5444994796-7bmfc\" (UID: \"255c8988-162e-4ae1-982f-e45cde006077\") " pod="openshift-ingress/router-default-5444994796-7bmfc" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.291096 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-cgwgw\" (UID: \"f8a24898-167c-483a-9d54-7412fb063199\") " pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.291115 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-cgwgw\" (UID: \"f8a24898-167c-483a-9d54-7412fb063199\") " pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.291133 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/138c4519-aea9-40d6-a633-afe9fe199a6d-client-ca\") pod \"route-controller-manager-6576b87f9c-xvkpm\" (UID: \"138c4519-aea9-40d6-a633-afe9fe199a6d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvkpm" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.291151 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ba35eb2-6b7b-45bc-827a-b7c3b5266073-config\") pod \"controller-manager-879f6c89f-9x5mr\" (UID: \"2ba35eb2-6b7b-45bc-827a-b7c3b5266073\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9x5mr" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.291171 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-cgwgw\" (UID: \"f8a24898-167c-483a-9d54-7412fb063199\") " pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.291188 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/69a0b5c3-55a0-4fe5-866d-660e663e5112-etcd-client\") pod \"apiserver-7bbb656c7d-9qt4f\" (UID: \"69a0b5c3-55a0-4fe5-866d-660e663e5112\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9qt4f" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.291207 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/73f5be70-5ac7-4585-b5a3-e75c5f766822-etcd-client\") pod \"apiserver-76f77b778f-fbprj\" (UID: \"73f5be70-5ac7-4585-b5a3-e75c5f766822\") " pod="openshift-apiserver/apiserver-76f77b778f-fbprj" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.291223 4826 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/73f5be70-5ac7-4585-b5a3-e75c5f766822-etcd-serving-ca\") pod \"apiserver-76f77b778f-fbprj\" (UID: \"73f5be70-5ac7-4585-b5a3-e75c5f766822\") " pod="openshift-apiserver/apiserver-76f77b778f-fbprj" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.291239 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-cgwgw\" (UID: \"f8a24898-167c-483a-9d54-7412fb063199\") " pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.291256 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ec09e55-a5b7-4aed-af1b-d36fc00592d3-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-8vz7r\" (UID: \"0ec09e55-a5b7-4aed-af1b-d36fc00592d3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8vz7r" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.291273 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-cgwgw\" (UID: \"f8a24898-167c-483a-9d54-7412fb063199\") " pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.291458 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.293164 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bab94136-ad0f-4379-a76a-5acb66335175-config\") pod \"authentication-operator-69f744f599-v9kf9\" (UID: \"bab94136-ad0f-4379-a76a-5acb66335175\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v9kf9" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.296447 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.297462 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-cgwgw\" (UID: \"f8a24898-167c-483a-9d54-7412fb063199\") " pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.297653 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f8a24898-167c-483a-9d54-7412fb063199-audit-dir\") pod \"oauth-openshift-558db77b4-cgwgw\" (UID: \"f8a24898-167c-483a-9d54-7412fb063199\") " pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.301785 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-cgwgw\" (UID: 
\"f8a24898-167c-483a-9d54-7412fb063199\") " pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.302680 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/69a0b5c3-55a0-4fe5-866d-660e663e5112-encryption-config\") pod \"apiserver-7bbb656c7d-9qt4f\" (UID: \"69a0b5c3-55a0-4fe5-866d-660e663e5112\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9qt4f" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.302910 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73f5be70-5ac7-4585-b5a3-e75c5f766822-trusted-ca-bundle\") pod \"apiserver-76f77b778f-fbprj\" (UID: \"73f5be70-5ac7-4585-b5a3-e75c5f766822\") " pod="openshift-apiserver/apiserver-76f77b778f-fbprj" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.303734 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d6dffb0-9e73-4f02-ab48-f04555936387-config\") pod \"machine-approver-56656f9798-xfxnd\" (UID: \"7d6dffb0-9e73-4f02-ab48-f04555936387\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xfxnd" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.304420 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/138c4519-aea9-40d6-a633-afe9fe199a6d-config\") pod \"route-controller-manager-6576b87f9c-xvkpm\" (UID: \"138c4519-aea9-40d6-a633-afe9fe199a6d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvkpm" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.305201 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-cgwgw\" (UID: \"f8a24898-167c-483a-9d54-7412fb063199\") " pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.307408 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/138c4519-aea9-40d6-a633-afe9fe199a6d-client-ca\") pod \"route-controller-manager-6576b87f9c-xvkpm\" (UID: \"138c4519-aea9-40d6-a633-afe9fe199a6d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvkpm" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.307715 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.311787 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-cgwgw\" (UID: \"f8a24898-167c-483a-9d54-7412fb063199\") " pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.313238 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2b64f74-7aa6-459c-be6d-f6f8966b456f-config\") pod \"openshift-apiserver-operator-796bbdcf4f-g94hs\" (UID: \"f2b64f74-7aa6-459c-be6d-f6f8966b456f\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g94hs" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.313440 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/73f5be70-5ac7-4585-b5a3-e75c5f766822-audit\") pod \"apiserver-76f77b778f-fbprj\" (UID: \"73f5be70-5ac7-4585-b5a3-e75c5f766822\") " pod="openshift-apiserver/apiserver-76f77b778f-fbprj" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.313757 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ba35eb2-6b7b-45bc-827a-b7c3b5266073-config\") pod \"controller-manager-879f6c89f-9x5mr\" (UID: \"2ba35eb2-6b7b-45bc-827a-b7c3b5266073\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9x5mr" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.314278 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/69a0b5c3-55a0-4fe5-866d-660e663e5112-audit-policies\") pod \"apiserver-7bbb656c7d-9qt4f\" (UID: \"69a0b5c3-55a0-4fe5-866d-660e663e5112\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9qt4f" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.314677 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/73f5be70-5ac7-4585-b5a3-e75c5f766822-etcd-serving-ca\") pod \"apiserver-76f77b778f-fbprj\" (UID: \"73f5be70-5ac7-4585-b5a3-e75c5f766822\") " pod="openshift-apiserver/apiserver-76f77b778f-fbprj" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.314815 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69a0b5c3-55a0-4fe5-866d-660e663e5112-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-9qt4f\" (UID: \"69a0b5c3-55a0-4fe5-866d-660e663e5112\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9qt4f" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.315083 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ec09e55-a5b7-4aed-af1b-d36fc00592d3-config\") pod \"kube-apiserver-operator-766d6c64bb-8vz7r\" (UID: \"0ec09e55-a5b7-4aed-af1b-d36fc00592d3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8vz7r" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.315778 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73f5be70-5ac7-4585-b5a3-e75c5f766822-config\") pod \"apiserver-76f77b778f-fbprj\" (UID: \"73f5be70-5ac7-4585-b5a3-e75c5f766822\") " pod="openshift-apiserver/apiserver-76f77b778f-fbprj" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.316360 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/69a0b5c3-55a0-4fe5-866d-660e663e5112-audit-dir\") pod \"apiserver-7bbb656c7d-9qt4f\" (UID: \"69a0b5c3-55a0-4fe5-866d-660e663e5112\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9qt4f" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.319690 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/13fd6a2b-6076-4bd6-8bc2-466b802bdde4-images\") pod \"machine-api-operator-5694c8668f-ngf5x\" (UID: \"13fd6a2b-6076-4bd6-8bc2-466b802bdde4\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-ngf5x" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.320640 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7d6dffb0-9e73-4f02-ab48-f04555936387-auth-proxy-config\") pod \"machine-approver-56656f9798-xfxnd\" (UID: \"7d6dffb0-9e73-4f02-ab48-f04555936387\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xfxnd" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.321782 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/13fd6a2b-6076-4bd6-8bc2-466b802bdde4-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-ngf5x\" (UID: \"13fd6a2b-6076-4bd6-8bc2-466b802bdde4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ngf5x" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.321843 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ba35eb2-6b7b-45bc-827a-b7c3b5266073-client-ca\") pod \"controller-manager-879f6c89f-9x5mr\" (UID: \"2ba35eb2-6b7b-45bc-827a-b7c3b5266073\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9x5mr" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.321901 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-hmc8k"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.321929 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/73f5be70-5ac7-4585-b5a3-e75c5f766822-node-pullsecrets\") pod \"apiserver-76f77b778f-fbprj\" (UID: \"73f5be70-5ac7-4585-b5a3-e75c5f766822\") " pod="openshift-apiserver/apiserver-76f77b778f-fbprj" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.322054 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/73f5be70-5ac7-4585-b5a3-e75c5f766822-audit-dir\") pod \"apiserver-76f77b778f-fbprj\" (UID: \"73f5be70-5ac7-4585-b5a3-e75c5f766822\") " pod="openshift-apiserver/apiserver-76f77b778f-fbprj" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.322618 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.322647 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-xz6vc"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.323915 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/73f5be70-5ac7-4585-b5a3-e75c5f766822-image-import-ca\") pod \"apiserver-76f77b778f-fbprj\" (UID: \"73f5be70-5ac7-4585-b5a3-e75c5f766822\") " pod="openshift-apiserver/apiserver-76f77b778f-fbprj" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.324222 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f8a24898-167c-483a-9d54-7412fb063199-audit-policies\") pod \"oauth-openshift-558db77b4-cgwgw\" (UID: \"f8a24898-167c-483a-9d54-7412fb063199\") " pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.324838 4826 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-qsvfr"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.325098 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13fd6a2b-6076-4bd6-8bc2-466b802bdde4-config\") pod \"machine-api-operator-5694c8668f-ngf5x\" (UID: \"13fd6a2b-6076-4bd6-8bc2-466b802bdde4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ngf5x" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.325655 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/69a0b5c3-55a0-4fe5-866d-660e663e5112-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-9qt4f\" (UID: \"69a0b5c3-55a0-4fe5-866d-660e663e5112\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9qt4f" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.325886 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2ba35eb2-6b7b-45bc-827a-b7c3b5266073-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-9x5mr\" (UID: \"2ba35eb2-6b7b-45bc-827a-b7c3b5266073\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9x5mr" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.325916 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73f5be70-5ac7-4585-b5a3-e75c5f766822-serving-cert\") pod \"apiserver-76f77b778f-fbprj\" (UID: \"73f5be70-5ac7-4585-b5a3-e75c5f766822\") " pod="openshift-apiserver/apiserver-76f77b778f-fbprj" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.322728 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-hmc8k" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.326198 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-xdl4h"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.326465 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qsvfr" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.326469 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-cgwgw\" (UID: \"f8a24898-167c-483a-9d54-7412fb063199\") " pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.326649 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-xz6vc" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.326261 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-cgwgw\" (UID: \"f8a24898-167c-483a-9d54-7412fb063199\") " pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.327380 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-7bpv5"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.327546 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-cgwgw\" (UID: \"f8a24898-167c-483a-9d54-7412fb063199\") " pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.327732 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xdl4h" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.327763 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/84e4d149-e92b-4d41-8fdd-0c831d554a94-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-w9vbd\" (UID: \"84e4d149-e92b-4d41-8fdd-0c831d554a94\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w9vbd" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.328032 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/138c4519-aea9-40d6-a633-afe9fe199a6d-serving-cert\") pod \"route-controller-manager-6576b87f9c-xvkpm\" (UID: \"138c4519-aea9-40d6-a633-afe9fe199a6d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvkpm" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.328424 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/73f5be70-5ac7-4585-b5a3-e75c5f766822-encryption-config\") pod \"apiserver-76f77b778f-fbprj\" (UID: \"73f5be70-5ac7-4585-b5a3-e75c5f766822\") " pod="openshift-apiserver/apiserver-76f77b778f-fbprj" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.328583 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-cgwgw\" (UID: \"f8a24898-167c-483a-9d54-7412fb063199\") " pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.329222 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-95pcf"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.329527 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-7bpv5" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.330210 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-64lqw"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.330360 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-95pcf" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.331127 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-w4tzn"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.331190 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-64lqw" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.332365 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-v9kf9"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.332493 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-94k2w"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.332607 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-w4tzn" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.333258 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497410-bfxcj"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.333306 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-94k2w" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.334342 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g94hs"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.334456 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497410-bfxcj" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.335487 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-bp2zc"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.336471 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/69a0b5c3-55a0-4fe5-866d-660e663e5112-etcd-client\") pod \"apiserver-7bbb656c7d-9qt4f\" (UID: \"69a0b5c3-55a0-4fe5-866d-660e663e5112\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9qt4f" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.337491 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/73f5be70-5ac7-4585-b5a3-e75c5f766822-etcd-client\") pod \"apiserver-76f77b778f-fbprj\" (UID: \"73f5be70-5ac7-4585-b5a3-e75c5f766822\") " pod="openshift-apiserver/apiserver-76f77b778f-fbprj" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.337573 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvkpm"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.337902 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.338704 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-cgwgw"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.339672 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-p6c7j"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.340640 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ec09e55-a5b7-4aed-af1b-d36fc00592d3-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-8vz7r\" (UID: \"0ec09e55-a5b7-4aed-af1b-d36fc00592d3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8vz7r" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.340645 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-p6c7j" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.340644 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2b64f74-7aa6-459c-be6d-f6f8966b456f-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-g94hs\" (UID: \"f2b64f74-7aa6-459c-be6d-f6f8966b456f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g94hs" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.341282 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bab94136-ad0f-4379-a76a-5acb66335175-serving-cert\") pod \"authentication-operator-69f744f599-v9kf9\" (UID: \"bab94136-ad0f-4379-a76a-5acb66335175\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v9kf9" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.341513 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ba35eb2-6b7b-45bc-827a-b7c3b5266073-serving-cert\") pod \"controller-manager-879f6c89f-9x5mr\" (UID: \"2ba35eb2-6b7b-45bc-827a-b7c3b5266073\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9x5mr" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.341553 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-8dst8"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.341693 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7d6dffb0-9e73-4f02-ab48-f04555936387-machine-approver-tls\") pod \"machine-approver-56656f9798-xfxnd\" (UID: \"7d6dffb0-9e73-4f02-ab48-f04555936387\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xfxnd" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.342248 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-8dst8" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.342507 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-cgwgw\" (UID: \"f8a24898-167c-483a-9d54-7412fb063199\") " pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.344456 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-cgwgw\" (UID: \"f8a24898-167c-483a-9d54-7412fb063199\") " pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.345983 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69a0b5c3-55a0-4fe5-866d-660e663e5112-serving-cert\") pod \"apiserver-7bbb656c7d-9qt4f\" (UID: \"69a0b5c3-55a0-4fe5-866d-660e663e5112\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9qt4f" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.346262 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-cgwgw\" (UID: \"f8a24898-167c-483a-9d54-7412fb063199\") " pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.354955 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-nk42r"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.356945 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-fbprj"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.358752 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6tmkk"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.360293 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kmkrm"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.360832 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.361813 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-5ns9w"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.372085 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-hkw8j"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.373645 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-45lkw"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.379767 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.385522 4826 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-khrtn"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.387087 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-2w6c4"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.389959 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8vz7r"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.392707 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fb462\" (UniqueName: \"kubernetes.io/projected/e8f916b8-cc8c-4ef0-8dad-6b34e15f7d05-kube-api-access-fb462\") pod \"ingress-operator-5b745b69d9-nk42r\" (UID: \"e8f916b8-cc8c-4ef0-8dad-6b34e15f7d05\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nk42r" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.392753 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w9vbd"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.392814 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lqrq\" (UniqueName: \"kubernetes.io/projected/4e48db4e-7a7e-4956-8a51-08cac7cfbf7c-kube-api-access-2lqrq\") pod \"etcd-operator-b45778765-bl2g9\" (UID: \"4e48db4e-7a7e-4956-8a51-08cac7cfbf7c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bl2g9" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.392854 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/255c8988-162e-4ae1-982f-e45cde006077-metrics-certs\") pod \"router-default-5444994796-7bmfc\" (UID: \"255c8988-162e-4ae1-982f-e45cde006077\") " pod="openshift-ingress/router-default-5444994796-7bmfc" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.392882 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e48db4e-7a7e-4956-8a51-08cac7cfbf7c-serving-cert\") pod \"etcd-operator-b45778765-bl2g9\" (UID: \"4e48db4e-7a7e-4956-8a51-08cac7cfbf7c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bl2g9" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.392909 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4e48db4e-7a7e-4956-8a51-08cac7cfbf7c-etcd-client\") pod \"etcd-operator-b45778765-bl2g9\" (UID: \"4e48db4e-7a7e-4956-8a51-08cac7cfbf7c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bl2g9" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.392955 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4e48db4e-7a7e-4956-8a51-08cac7cfbf7c-etcd-service-ca\") pod \"etcd-operator-b45778765-bl2g9\" (UID: \"4e48db4e-7a7e-4956-8a51-08cac7cfbf7c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bl2g9" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.393057 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e48db4e-7a7e-4956-8a51-08cac7cfbf7c-config\") pod \"etcd-operator-b45778765-bl2g9\" (UID: \"4e48db4e-7a7e-4956-8a51-08cac7cfbf7c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bl2g9" Jan 31 
07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.393117 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkdgn\" (UniqueName: \"kubernetes.io/projected/255c8988-162e-4ae1-982f-e45cde006077-kube-api-access-wkdgn\") pod \"router-default-5444994796-7bmfc\" (UID: \"255c8988-162e-4ae1-982f-e45cde006077\") " pod="openshift-ingress/router-default-5444994796-7bmfc" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.393153 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4e48db4e-7a7e-4956-8a51-08cac7cfbf7c-etcd-ca\") pod \"etcd-operator-b45778765-bl2g9\" (UID: \"4e48db4e-7a7e-4956-8a51-08cac7cfbf7c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bl2g9" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.393202 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/255c8988-162e-4ae1-982f-e45cde006077-default-certificate\") pod \"router-default-5444994796-7bmfc\" (UID: \"255c8988-162e-4ae1-982f-e45cde006077\") " pod="openshift-ingress/router-default-5444994796-7bmfc" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.393324 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e8f916b8-cc8c-4ef0-8dad-6b34e15f7d05-bound-sa-token\") pod \"ingress-operator-5b745b69d9-nk42r\" (UID: \"e8f916b8-cc8c-4ef0-8dad-6b34e15f7d05\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nk42r" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.393350 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/255c8988-162e-4ae1-982f-e45cde006077-stats-auth\") pod \"router-default-5444994796-7bmfc\" (UID: \"255c8988-162e-4ae1-982f-e45cde006077\") " pod="openshift-ingress/router-default-5444994796-7bmfc" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.393378 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/255c8988-162e-4ae1-982f-e45cde006077-service-ca-bundle\") pod \"router-default-5444994796-7bmfc\" (UID: \"255c8988-162e-4ae1-982f-e45cde006077\") " pod="openshift-ingress/router-default-5444994796-7bmfc" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.393405 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e8f916b8-cc8c-4ef0-8dad-6b34e15f7d05-trusted-ca\") pod \"ingress-operator-5b745b69d9-nk42r\" (UID: \"e8f916b8-cc8c-4ef0-8dad-6b34e15f7d05\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nk42r" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.393433 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e8f916b8-cc8c-4ef0-8dad-6b34e15f7d05-metrics-tls\") pod \"ingress-operator-5b745b69d9-nk42r\" (UID: \"e8f916b8-cc8c-4ef0-8dad-6b34e15f7d05\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nk42r" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.394747 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h49wc"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.395163 4826 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-hmc8k"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.396040 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e8f916b8-cc8c-4ef0-8dad-6b34e15f7d05-trusted-ca\") pod \"ingress-operator-5b745b69d9-nk42r\" (UID: \"e8f916b8-cc8c-4ef0-8dad-6b34e15f7d05\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nk42r" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.396234 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-7bpv5"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.396773 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/255c8988-162e-4ae1-982f-e45cde006077-service-ca-bundle\") pod \"router-default-5444994796-7bmfc\" (UID: \"255c8988-162e-4ae1-982f-e45cde006077\") " pod="openshift-ingress/router-default-5444994796-7bmfc" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.396800 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/255c8988-162e-4ae1-982f-e45cde006077-metrics-certs\") pod \"router-default-5444994796-7bmfc\" (UID: \"255c8988-162e-4ae1-982f-e45cde006077\") " pod="openshift-ingress/router-default-5444994796-7bmfc" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.397636 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.399077 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-zfk49"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.399555 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e8f916b8-cc8c-4ef0-8dad-6b34e15f7d05-metrics-tls\") pod \"ingress-operator-5b745b69d9-nk42r\" (UID: \"e8f916b8-cc8c-4ef0-8dad-6b34e15f7d05\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nk42r" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.400040 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/255c8988-162e-4ae1-982f-e45cde006077-stats-auth\") pod \"router-default-5444994796-7bmfc\" (UID: \"255c8988-162e-4ae1-982f-e45cde006077\") " pod="openshift-ingress/router-default-5444994796-7bmfc" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.400330 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4mdx8"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.400500 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/255c8988-162e-4ae1-982f-e45cde006077-default-certificate\") pod \"router-default-5444994796-7bmfc\" (UID: \"255c8988-162e-4ae1-982f-e45cde006077\") " pod="openshift-ingress/router-default-5444994796-7bmfc" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.401395 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-qsvfr"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.402559 4826 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-ingress-canary/ingress-canary-lrcf5"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.403366 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-lrcf5" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.403833 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-9qt4f"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.405039 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qwd92"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.407722 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-w4tzn"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.410029 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-976l6"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.411530 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-p6c7j"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.413249 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rrdbv"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.414596 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-bl2g9"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.415820 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-95pcf"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.417225 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497410-bfxcj"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.418633 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-xz6vc"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.419998 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-lrcf5"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.421793 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-94k2w"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.422403 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-64lqw"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.423462 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.423484 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-xdl4h"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.424581 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-wl98f"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.425748 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-wl98f" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.425808 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-wl98f"] Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.437505 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.458224 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.479134 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.498498 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.518223 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.539290 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.558206 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.564194 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e48db4e-7a7e-4956-8a51-08cac7cfbf7c-config\") pod \"etcd-operator-b45778765-bl2g9\" (UID: \"4e48db4e-7a7e-4956-8a51-08cac7cfbf7c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bl2g9" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.578766 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.588122 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e48db4e-7a7e-4956-8a51-08cac7cfbf7c-serving-cert\") pod \"etcd-operator-b45778765-bl2g9\" (UID: \"4e48db4e-7a7e-4956-8a51-08cac7cfbf7c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bl2g9" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.599359 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.619466 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.628506 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4e48db4e-7a7e-4956-8a51-08cac7cfbf7c-etcd-client\") pod \"etcd-operator-b45778765-bl2g9\" (UID: \"4e48db4e-7a7e-4956-8a51-08cac7cfbf7c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bl2g9" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.638570 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.644802 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/4e48db4e-7a7e-4956-8a51-08cac7cfbf7c-etcd-ca\") pod \"etcd-operator-b45778765-bl2g9\" (UID: \"4e48db4e-7a7e-4956-8a51-08cac7cfbf7c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bl2g9" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.659724 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.678643 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.684402 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4e48db4e-7a7e-4956-8a51-08cac7cfbf7c-etcd-service-ca\") pod \"etcd-operator-b45778765-bl2g9\" (UID: \"4e48db4e-7a7e-4956-8a51-08cac7cfbf7c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bl2g9" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.698073 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.718824 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.738810 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.758900 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.778685 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.799878 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.859713 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.879101 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.899767 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.918738 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.939163 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.959320 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.980611 4826 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 31 07:38:28 crc kubenswrapper[4826]: I0131 07:38:28.998925 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.018217 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.038893 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.059025 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.077333 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.098936 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.117552 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.138430 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.159628 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.178085 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.199464 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.217815 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.238601 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.265654 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.293479 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2z8t\" (UniqueName: \"kubernetes.io/projected/73f5be70-5ac7-4585-b5a3-e75c5f766822-kube-api-access-r2z8t\") pod \"apiserver-76f77b778f-fbprj\" (UID: \"73f5be70-5ac7-4585-b5a3-e75c5f766822\") " pod="openshift-apiserver/apiserver-76f77b778f-fbprj" Jan 31 07:38:29 crc kubenswrapper[4826]: E0131 07:38:29.309563 4826 configmap.go:193] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 31 07:38:29 crc kubenswrapper[4826]: E0131 07:38:29.310083 4826 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/bab94136-ad0f-4379-a76a-5acb66335175-service-ca-bundle podName:bab94136-ad0f-4379-a76a-5acb66335175 nodeName:}" failed. No retries permitted until 2026-01-31 07:38:29.80999723 +0000 UTC m=+141.663883669 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/bab94136-ad0f-4379-a76a-5acb66335175-service-ca-bundle") pod "authentication-operator-69f744f599-v9kf9" (UID: "bab94136-ad0f-4379-a76a-5acb66335175") : failed to sync configmap cache: timed out waiting for the condition Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.316164 4826 request.go:700] Waited for 1.002108309s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa/token Jan 31 07:38:29 crc kubenswrapper[4826]: E0131 07:38:29.322210 4826 configmap.go:193] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 31 07:38:29 crc kubenswrapper[4826]: E0131 07:38:29.322626 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bab94136-ad0f-4379-a76a-5acb66335175-trusted-ca-bundle podName:bab94136-ad0f-4379-a76a-5acb66335175 nodeName:}" failed. No retries permitted until 2026-01-31 07:38:29.822586924 +0000 UTC m=+141.676473323 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/bab94136-ad0f-4379-a76a-5acb66335175-trusted-ca-bundle") pod "authentication-operator-69f744f599-v9kf9" (UID: "bab94136-ad0f-4379-a76a-5acb66335175") : failed to sync configmap cache: timed out waiting for the condition Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.327824 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f5l9\" (UniqueName: \"kubernetes.io/projected/13fd6a2b-6076-4bd6-8bc2-466b802bdde4-kube-api-access-8f5l9\") pod \"machine-api-operator-5694c8668f-ngf5x\" (UID: \"13fd6a2b-6076-4bd6-8bc2-466b802bdde4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ngf5x" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.334989 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52nkr\" (UniqueName: \"kubernetes.io/projected/2ba35eb2-6b7b-45bc-827a-b7c3b5266073-kube-api-access-52nkr\") pod \"controller-manager-879f6c89f-9x5mr\" (UID: \"2ba35eb2-6b7b-45bc-827a-b7c3b5266073\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9x5mr" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.358166 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4566v\" (UniqueName: \"kubernetes.io/projected/7d6dffb0-9e73-4f02-ab48-f04555936387-kube-api-access-4566v\") pod \"machine-approver-56656f9798-xfxnd\" (UID: \"7d6dffb0-9e73-4f02-ab48-f04555936387\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xfxnd" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.364920 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-ngf5x" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.394958 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swpc5\" (UniqueName: \"kubernetes.io/projected/69a0b5c3-55a0-4fe5-866d-660e663e5112-kube-api-access-swpc5\") pod \"apiserver-7bbb656c7d-9qt4f\" (UID: \"69a0b5c3-55a0-4fe5-866d-660e663e5112\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9qt4f" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.398018 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ec09e55-a5b7-4aed-af1b-d36fc00592d3-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-8vz7r\" (UID: \"0ec09e55-a5b7-4aed-af1b-d36fc00592d3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8vz7r" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.398496 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xfxnd" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.402068 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-fbprj" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.413219 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4j8n2\" (UniqueName: \"kubernetes.io/projected/f8a24898-167c-483a-9d54-7412fb063199-kube-api-access-4j8n2\") pod \"oauth-openshift-558db77b4-cgwgw\" (UID: \"f8a24898-167c-483a-9d54-7412fb063199\") " pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" Jan 31 07:38:29 crc kubenswrapper[4826]: W0131 07:38:29.418169 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d6dffb0_9e73_4f02_ab48_f04555936387.slice/crio-a0b42ba3456c69719278ef31b33a84c3e8119029a4e096b6760ae16c6fa10151 WatchSource:0}: Error finding container a0b42ba3456c69719278ef31b33a84c3e8119029a4e096b6760ae16c6fa10151: Status 404 returned error can't find the container with id a0b42ba3456c69719278ef31b33a84c3e8119029a4e096b6760ae16c6fa10151 Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.440486 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fk5r\" (UniqueName: \"kubernetes.io/projected/84e4d149-e92b-4d41-8fdd-0c831d554a94-kube-api-access-2fk5r\") pod \"cluster-samples-operator-665b6dd947-w9vbd\" (UID: \"84e4d149-e92b-4d41-8fdd-0c831d554a94\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w9vbd" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.453918 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.473426 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9c2l\" (UniqueName: \"kubernetes.io/projected/e71acdff-33d6-4052-907b-8e38ac391f58-kube-api-access-t9c2l\") pod \"downloads-7954f5f757-bp2zc\" (UID: \"e71acdff-33d6-4052-907b-8e38ac391f58\") " pod="openshift-console/downloads-7954f5f757-bp2zc" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.481039 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dst4\" (UniqueName: \"kubernetes.io/projected/138c4519-aea9-40d6-a633-afe9fe199a6d-kube-api-access-9dst4\") pod \"route-controller-manager-6576b87f9c-xvkpm\" (UID: \"138c4519-aea9-40d6-a633-afe9fe199a6d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvkpm" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.497390 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8vz7r" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.518334 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2v2t\" (UniqueName: \"kubernetes.io/projected/f2b64f74-7aa6-459c-be6d-f6f8966b456f-kube-api-access-p2v2t\") pod \"openshift-apiserver-operator-796bbdcf4f-g94hs\" (UID: \"f2b64f74-7aa6-459c-be6d-f6f8966b456f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g94hs" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.518749 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.540066 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.559908 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.573374 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9x5mr" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.578369 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-bp2zc" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.578520 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.588978 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w9vbd" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.593189 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvkpm" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.595406 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-ngf5x"] Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.603577 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.610469 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xfxnd" event={"ID":"7d6dffb0-9e73-4f02-ab48-f04555936387","Type":"ContainerStarted","Data":"a0b42ba3456c69719278ef31b33a84c3e8119029a4e096b6760ae16c6fa10151"} Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.623531 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.642889 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.648651 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-fbprj"] Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.659153 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 31 07:38:29 crc kubenswrapper[4826]: W0131 07:38:29.671592 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73f5be70_5ac7_4585_b5a3_e75c5f766822.slice/crio-e5fec1db5ed77b8734574c4662deea6c10ff5ac6c8d02163e01a4e7e42bf2893 WatchSource:0}: Error finding container e5fec1db5ed77b8734574c4662deea6c10ff5ac6c8d02163e01a4e7e42bf2893: Status 404 returned error can't find the container with id e5fec1db5ed77b8734574c4662deea6c10ff5ac6c8d02163e01a4e7e42bf2893 Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.675662 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9qt4f" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.678444 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.689771 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g94hs" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.697813 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.706125 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-cgwgw"] Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.731536 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.738462 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.759193 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.781074 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.798764 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.818129 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.819539 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bab94136-ad0f-4379-a76a-5acb66335175-service-ca-bundle\") pod \"authentication-operator-69f744f599-v9kf9\" (UID: \"bab94136-ad0f-4379-a76a-5acb66335175\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v9kf9" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.839637 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.873077 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.880394 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.903505 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.919208 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.920452 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bab94136-ad0f-4379-a76a-5acb66335175-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-v9kf9\" (UID: \"bab94136-ad0f-4379-a76a-5acb66335175\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v9kf9" Jan 31 07:38:29 crc 
kubenswrapper[4826]: I0131 07:38:29.924953 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvkpm"] Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.938039 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.957272 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.977974 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 31 07:38:29 crc kubenswrapper[4826]: I0131 07:38:29.997210 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.017748 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.037714 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.057231 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.077838 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.097332 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.117493 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.137282 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.165816 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.178296 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.193837 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-bp2zc"] Jan 31 07:38:30 crc kubenswrapper[4826]: W0131 07:38:30.195438 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod138c4519_aea9_40d6_a633_afe9fe199a6d.slice/crio-eea64576e652fab654ec0b77d67f397aa06219897a2a0be7051fe57f05feb904 WatchSource:0}: Error finding container eea64576e652fab654ec0b77d67f397aa06219897a2a0be7051fe57f05feb904: Status 404 returned error can't find the container with id eea64576e652fab654ec0b77d67f397aa06219897a2a0be7051fe57f05feb904 Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.199579 4826 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.205575 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w9vbd"] Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.209657 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9x5mr"] Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.210675 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8vz7r"] Jan 31 07:38:30 crc kubenswrapper[4826]: W0131 07:38:30.217023 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode71acdff_33d6_4052_907b_8e38ac391f58.slice/crio-85eb30845c7db18c0f30d35bc270fe296bdbab1da275b6cd4810d28486ca7cf7 WatchSource:0}: Error finding container 85eb30845c7db18c0f30d35bc270fe296bdbab1da275b6cd4810d28486ca7cf7: Status 404 returned error can't find the container with id 85eb30845c7db18c0f30d35bc270fe296bdbab1da275b6cd4810d28486ca7cf7 Jan 31 07:38:30 crc kubenswrapper[4826]: W0131 07:38:30.242074 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ba35eb2_6b7b_45bc_827a_b7c3b5266073.slice/crio-4683446f94428df96de00e6ad956e4b3d8e8defec318042d4e9e721904fe29fa WatchSource:0}: Error finding container 4683446f94428df96de00e6ad956e4b3d8e8defec318042d4e9e721904fe29fa: Status 404 returned error can't find the container with id 4683446f94428df96de00e6ad956e4b3d8e8defec318042d4e9e721904fe29fa Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.244703 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fb462\" (UniqueName: \"kubernetes.io/projected/e8f916b8-cc8c-4ef0-8dad-6b34e15f7d05-kube-api-access-fb462\") pod \"ingress-operator-5b745b69d9-nk42r\" (UID: \"e8f916b8-cc8c-4ef0-8dad-6b34e15f7d05\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nk42r" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.260497 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lqrq\" (UniqueName: \"kubernetes.io/projected/4e48db4e-7a7e-4956-8a51-08cac7cfbf7c-kube-api-access-2lqrq\") pod \"etcd-operator-b45778765-bl2g9\" (UID: \"4e48db4e-7a7e-4956-8a51-08cac7cfbf7c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bl2g9" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.261997 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-bl2g9" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.275172 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkdgn\" (UniqueName: \"kubernetes.io/projected/255c8988-162e-4ae1-982f-e45cde006077-kube-api-access-wkdgn\") pod \"router-default-5444994796-7bmfc\" (UID: \"255c8988-162e-4ae1-982f-e45cde006077\") " pod="openshift-ingress/router-default-5444994796-7bmfc" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.297567 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e8f916b8-cc8c-4ef0-8dad-6b34e15f7d05-bound-sa-token\") pod \"ingress-operator-5b745b69d9-nk42r\" (UID: \"e8f916b8-cc8c-4ef0-8dad-6b34e15f7d05\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nk42r" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.298628 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.321099 4826 request.go:700] Waited for 1.915801272s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-2llfx&limit=500&resourceVersion=0 Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.323398 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.332731 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-9qt4f"] Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.338951 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 31 07:38:30 crc kubenswrapper[4826]: W0131 07:38:30.350319 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69a0b5c3_55a0_4fe5_866d_660e663e5112.slice/crio-3d6962a47ecc6245e100e63566bb60acd476c527c0efeecdb9319612bf8820f2 WatchSource:0}: Error finding container 3d6962a47ecc6245e100e63566bb60acd476c527c0efeecdb9319612bf8820f2: Status 404 returned error can't find the container with id 3d6962a47ecc6245e100e63566bb60acd476c527c0efeecdb9319612bf8820f2 Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.358407 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.378485 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.398468 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.420087 4826 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.472638 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g94hs"] Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.485434 4826 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-bl2g9"] Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.486033 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.492482 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bab94136-ad0f-4379-a76a-5acb66335175-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-v9kf9\" (UID: \"bab94136-ad0f-4379-a76a-5acb66335175\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v9kf9" Jan 31 07:38:30 crc kubenswrapper[4826]: E0131 07:38:30.497412 4826 projected.go:288] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 31 07:38:30 crc kubenswrapper[4826]: E0131 07:38:30.497465 4826 projected.go:194] Error preparing data for projected volume kube-api-access-lhsnj for pod openshift-authentication-operator/authentication-operator-69f744f599-v9kf9: failed to sync configmap cache: timed out waiting for the condition Jan 31 07:38:30 crc kubenswrapper[4826]: E0131 07:38:30.497540 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bab94136-ad0f-4379-a76a-5acb66335175-kube-api-access-lhsnj podName:bab94136-ad0f-4379-a76a-5acb66335175 nodeName:}" failed. No retries permitted until 2026-01-31 07:38:30.997515905 +0000 UTC m=+142.851402264 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-lhsnj" (UniqueName: "kubernetes.io/projected/bab94136-ad0f-4379-a76a-5acb66335175-kube-api-access-lhsnj") pod "authentication-operator-69f744f599-v9kf9" (UID: "bab94136-ad0f-4379-a76a-5acb66335175") : failed to sync configmap cache: timed out waiting for the condition Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.498335 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 31 07:38:30 crc kubenswrapper[4826]: W0131 07:38:30.500400 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2b64f74_7aa6_459c_be6d_f6f8966b456f.slice/crio-7bce7234f7527332d350d082d70ee9b16eb8430a0fdf1b8d5e890bebc75e0670 WatchSource:0}: Error finding container 7bce7234f7527332d350d082d70ee9b16eb8430a0fdf1b8d5e890bebc75e0670: Status 404 returned error can't find the container with id 7bce7234f7527332d350d082d70ee9b16eb8430a0fdf1b8d5e890bebc75e0670 Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.516773 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nk42r" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.520890 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.528819 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28lqd\" (UniqueName: \"kubernetes.io/projected/1482e43a-84a4-42ed-a605-37cc519dd5ef-kube-api-access-28lqd\") pod \"console-f9d7485db-hkw8j\" (UID: \"1482e43a-84a4-42ed-a605-37cc519dd5ef\") " pod="openshift-console/console-f9d7485db-hkw8j" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.528886 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d37f6dbf-56d4-46b2-8808-31999002461b-registry-certificates\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.529061 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60c9c1ea-7744-4f77-b407-ff25f9c12f0f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-h49wc\" (UID: \"60c9c1ea-7744-4f77-b407-ff25f9c12f0f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h49wc" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.529168 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkncf\" (UniqueName: \"kubernetes.io/projected/26cae060-918d-4158-9f68-8d3a47dbd237-kube-api-access-xkncf\") pod \"machine-config-operator-74547568cd-zfk49\" (UID: \"26cae060-918d-4158-9f68-8d3a47dbd237\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zfk49" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.529261 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d37f6dbf-56d4-46b2-8808-31999002461b-trusted-ca\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.529280 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b15e4501-1c69-4a71-9ca0-440051530c26-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-6tmkk\" (UID: \"b15e4501-1c69-4a71-9ca0-440051530c26\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6tmkk" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.529297 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/26cae060-918d-4158-9f68-8d3a47dbd237-proxy-tls\") pod \"machine-config-operator-74547568cd-zfk49\" (UID: \"26cae060-918d-4158-9f68-8d3a47dbd237\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zfk49" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.529323 4826 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/54cc2edc-8881-4459-be91-a4d9536d6b7d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-khrtn\" (UID: \"54cc2edc-8881-4459-be91-a4d9536d6b7d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-khrtn" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.529363 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1482e43a-84a4-42ed-a605-37cc519dd5ef-console-config\") pod \"console-f9d7485db-hkw8j\" (UID: \"1482e43a-84a4-42ed-a605-37cc519dd5ef\") " pod="openshift-console/console-f9d7485db-hkw8j" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.529387 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1482e43a-84a4-42ed-a605-37cc519dd5ef-console-serving-cert\") pod \"console-f9d7485db-hkw8j\" (UID: \"1482e43a-84a4-42ed-a605-37cc519dd5ef\") " pod="openshift-console/console-f9d7485db-hkw8j" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.533348 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-7bmfc" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.533762 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b15e4501-1c69-4a71-9ca0-440051530c26-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-6tmkk\" (UID: \"b15e4501-1c69-4a71-9ca0-440051530c26\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6tmkk" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.533847 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/26cae060-918d-4158-9f68-8d3a47dbd237-images\") pod \"machine-config-operator-74547568cd-zfk49\" (UID: \"26cae060-918d-4158-9f68-8d3a47dbd237\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zfk49" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.533875 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b15e4501-1c69-4a71-9ca0-440051530c26-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-6tmkk\" (UID: \"b15e4501-1c69-4a71-9ca0-440051530c26\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6tmkk" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.533895 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60c9c1ea-7744-4f77-b407-ff25f9c12f0f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-h49wc\" (UID: \"60c9c1ea-7744-4f77-b407-ff25f9c12f0f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h49wc" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.533912 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1482e43a-84a4-42ed-a605-37cc519dd5ef-service-ca\") pod 
\"console-f9d7485db-hkw8j\" (UID: \"1482e43a-84a4-42ed-a605-37cc519dd5ef\") " pod="openshift-console/console-f9d7485db-hkw8j" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.533949 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d37f6dbf-56d4-46b2-8808-31999002461b-installation-pull-secrets\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.533980 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4kxd\" (UniqueName: \"kubernetes.io/projected/c702dc52-bad4-47e0-a9cf-5091595186a3-kube-api-access-g4kxd\") pod \"openshift-config-operator-7777fb866f-2w6c4\" (UID: \"c702dc52-bad4-47e0-a9cf-5091595186a3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2w6c4" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.534058 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d37f6dbf-56d4-46b2-8808-31999002461b-bound-sa-token\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.534076 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54cc2edc-8881-4459-be91-a4d9536d6b7d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-khrtn\" (UID: \"54cc2edc-8881-4459-be91-a4d9536d6b7d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-khrtn" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.534131 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.534149 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwlsr\" (UniqueName: \"kubernetes.io/projected/d37f6dbf-56d4-46b2-8808-31999002461b-kube-api-access-fwlsr\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.534167 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c702dc52-bad4-47e0-a9cf-5091595186a3-available-featuregates\") pod \"openshift-config-operator-7777fb866f-2w6c4\" (UID: \"c702dc52-bad4-47e0-a9cf-5091595186a3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2w6c4" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.534209 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/26cae060-918d-4158-9f68-8d3a47dbd237-auth-proxy-config\") pod \"machine-config-operator-74547568cd-zfk49\" (UID: \"26cae060-918d-4158-9f68-8d3a47dbd237\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zfk49" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.534226 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8j6z\" (UniqueName: \"kubernetes.io/projected/bb86efdd-32bd-4a38-b343-6231b1d805f9-kube-api-access-x8j6z\") pod \"dns-operator-744455d44c-5ns9w\" (UID: \"bb86efdd-32bd-4a38-b343-6231b1d805f9\") " pod="openshift-dns-operator/dns-operator-744455d44c-5ns9w" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.534260 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qgfd\" (UniqueName: \"kubernetes.io/projected/b15e4501-1c69-4a71-9ca0-440051530c26-kube-api-access-2qgfd\") pod \"cluster-image-registry-operator-dc59b4c8b-6tmkk\" (UID: \"b15e4501-1c69-4a71-9ca0-440051530c26\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6tmkk" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.534316 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bb86efdd-32bd-4a38-b343-6231b1d805f9-metrics-tls\") pod \"dns-operator-744455d44c-5ns9w\" (UID: \"bb86efdd-32bd-4a38-b343-6231b1d805f9\") " pod="openshift-dns-operator/dns-operator-744455d44c-5ns9w" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.534332 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1482e43a-84a4-42ed-a605-37cc519dd5ef-trusted-ca-bundle\") pod \"console-f9d7485db-hkw8j\" (UID: \"1482e43a-84a4-42ed-a605-37cc519dd5ef\") " pod="openshift-console/console-f9d7485db-hkw8j" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.534363 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60c9c1ea-7744-4f77-b407-ff25f9c12f0f-config\") pod \"kube-controller-manager-operator-78b949d7b-h49wc\" (UID: \"60c9c1ea-7744-4f77-b407-ff25f9c12f0f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h49wc" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.534379 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1482e43a-84a4-42ed-a605-37cc519dd5ef-console-oauth-config\") pod \"console-f9d7485db-hkw8j\" (UID: \"1482e43a-84a4-42ed-a605-37cc519dd5ef\") " pod="openshift-console/console-f9d7485db-hkw8j" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.534407 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54cc2edc-8881-4459-be91-a4d9536d6b7d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-khrtn\" (UID: \"54cc2edc-8881-4459-be91-a4d9536d6b7d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-khrtn" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.534425 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/1482e43a-84a4-42ed-a605-37cc519dd5ef-oauth-serving-cert\") pod \"console-f9d7485db-hkw8j\" (UID: \"1482e43a-84a4-42ed-a605-37cc519dd5ef\") " pod="openshift-console/console-f9d7485db-hkw8j" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.534444 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d37f6dbf-56d4-46b2-8808-31999002461b-ca-trust-extracted\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.534460 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c702dc52-bad4-47e0-a9cf-5091595186a3-serving-cert\") pod \"openshift-config-operator-7777fb866f-2w6c4\" (UID: \"c702dc52-bad4-47e0-a9cf-5091595186a3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2w6c4" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.534488 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d37f6dbf-56d4-46b2-8808-31999002461b-registry-tls\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:30 crc kubenswrapper[4826]: E0131 07:38:30.536913 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:38:31.036899264 +0000 UTC m=+142.890785623 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmkrm" (UID: "d37f6dbf-56d4-46b2-8808-31999002461b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.539518 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.541171 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bab94136-ad0f-4379-a76a-5acb66335175-service-ca-bundle\") pod \"authentication-operator-69f744f599-v9kf9\" (UID: \"bab94136-ad0f-4379-a76a-5acb66335175\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v9kf9" Jan 31 07:38:30 crc kubenswrapper[4826]: W0131 07:38:30.592561 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod255c8988_162e_4ae1_982f_e45cde006077.slice/crio-a26ef1ceab9b3c05c88ec85872ddd3339321979755ed8a65f6b269b691a1e218 WatchSource:0}: Error finding container a26ef1ceab9b3c05c88ec85872ddd3339321979755ed8a65f6b269b691a1e218: Status 404 returned error can't find the container with id a26ef1ceab9b3c05c88ec85872ddd3339321979755ed8a65f6b269b691a1e218 Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.619453 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9x5mr" event={"ID":"2ba35eb2-6b7b-45bc-827a-b7c3b5266073","Type":"ContainerStarted","Data":"58f5a857e9b00c52b31d7e28cbce2cce1678920e9484e4cdf390581d0bdcd2ce"} Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.620016 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9x5mr" event={"ID":"2ba35eb2-6b7b-45bc-827a-b7c3b5266073","Type":"ContainerStarted","Data":"4683446f94428df96de00e6ad956e4b3d8e8defec318042d4e9e721904fe29fa"} Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.620045 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-9x5mr" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.622892 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-bl2g9" event={"ID":"4e48db4e-7a7e-4956-8a51-08cac7cfbf7c","Type":"ContainerStarted","Data":"73dd56642081e06b4088d5d97286d831138fa020d0212dc148b0f20c3a84fb7f"} Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.623237 4826 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-9x5mr container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.623314 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-9x5mr" podUID="2ba35eb2-6b7b-45bc-827a-b7c3b5266073" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 
10.217.0.5:8443: connect: connection refused" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.625893 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w9vbd" event={"ID":"84e4d149-e92b-4d41-8fdd-0c831d554a94","Type":"ContainerStarted","Data":"eb58194a43cf20944886cce455a3959a676a10c2425e36b5e5a2a91091a40766"} Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.636705 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.636940 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/54cc2edc-8881-4459-be91-a4d9536d6b7d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-khrtn\" (UID: \"54cc2edc-8881-4459-be91-a4d9536d6b7d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-khrtn" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.637005 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/41e5655c-8535-4d1c-be9d-90c18f3fcc8b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-w4tzn\" (UID: \"41e5655c-8535-4d1c-be9d-90c18f3fcc8b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-w4tzn" Jan 31 07:38:30 crc kubenswrapper[4826]: E0131 07:38:30.637040 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:38:31.13700638 +0000 UTC m=+142.990892879 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.637090 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xn5w\" (UniqueName: \"kubernetes.io/projected/eb5f2299-431e-4001-a9f1-52049e2dce8d-kube-api-access-8xn5w\") pod \"dns-default-p6c7j\" (UID: \"eb5f2299-431e-4001-a9f1-52049e2dce8d\") " pod="openshift-dns/dns-default-p6c7j" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.637138 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1482e43a-84a4-42ed-a605-37cc519dd5ef-console-serving-cert\") pod \"console-f9d7485db-hkw8j\" (UID: \"1482e43a-84a4-42ed-a605-37cc519dd5ef\") " pod="openshift-console/console-f9d7485db-hkw8j" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.637169 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/abc32430-2c83-4554-a335-238833ce1a9f-registration-dir\") pod \"csi-hostpathplugin-wl98f\" (UID: \"abc32430-2c83-4554-a335-238833ce1a9f\") " pod="hostpath-provisioner/csi-hostpathplugin-wl98f" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.637196 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/baa36f8a-24fe-4315-b79a-7397d20938ad-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-95pcf\" (UID: \"baa36f8a-24fe-4315-b79a-7397d20938ad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-95pcf" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.637322 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/961fdd97-d775-4034-91d6-02f865935a59-cert\") pod \"ingress-canary-lrcf5\" (UID: \"961fdd97-d775-4034-91d6-02f865935a59\") " pod="openshift-ingress-canary/ingress-canary-lrcf5" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.637358 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlts2\" (UniqueName: \"kubernetes.io/projected/66616d78-82d1-4623-ab0f-ec0b7d44ed76-kube-api-access-mlts2\") pod \"openshift-controller-manager-operator-756b6f6bc6-45lkw\" (UID: \"66616d78-82d1-4623-ab0f-ec0b7d44ed76\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-45lkw" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.637410 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nqqc\" (UniqueName: \"kubernetes.io/projected/5c4b5ff7-12e8-44a2-8836-e8b37d34831b-kube-api-access-9nqqc\") pod \"catalog-operator-68c6474976-4mdx8\" (UID: \"5c4b5ff7-12e8-44a2-8836-e8b37d34831b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4mdx8" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 
07:38:30.637455 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b15e4501-1c69-4a71-9ca0-440051530c26-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-6tmkk\" (UID: \"b15e4501-1c69-4a71-9ca0-440051530c26\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6tmkk" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.637481 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60c9c1ea-7744-4f77-b407-ff25f9c12f0f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-h49wc\" (UID: \"60c9c1ea-7744-4f77-b407-ff25f9c12f0f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h49wc" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.637510 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d37f6dbf-56d4-46b2-8808-31999002461b-installation-pull-secrets\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.637695 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4l42\" (UniqueName: \"kubernetes.io/projected/125e0e2a-6c4a-487f-ab4e-fb439ba80bc0-kube-api-access-k4l42\") pod \"collect-profiles-29497410-bfxcj\" (UID: \"125e0e2a-6c4a-487f-ab4e-fb439ba80bc0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497410-bfxcj" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.637723 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54cc2edc-8881-4459-be91-a4d9536d6b7d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-khrtn\" (UID: \"54cc2edc-8881-4459-be91-a4d9536d6b7d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-khrtn" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.637779 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d37f6dbf-56d4-46b2-8808-31999002461b-bound-sa-token\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.637816 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b7521f47-f937-4c7a-a556-e03df8257499-apiservice-cert\") pod \"packageserver-d55dfcdfc-rrdbv\" (UID: \"b7521f47-f937-4c7a-a556-e03df8257499\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rrdbv" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.637840 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c702dc52-bad4-47e0-a9cf-5091595186a3-available-featuregates\") pod \"openshift-config-operator-7777fb866f-2w6c4\" (UID: \"c702dc52-bad4-47e0-a9cf-5091595186a3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2w6c4" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 
07:38:30.637865 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/41e5655c-8535-4d1c-be9d-90c18f3fcc8b-srv-cert\") pod \"olm-operator-6b444d44fb-w4tzn\" (UID: \"41e5655c-8535-4d1c-be9d-90c18f3fcc8b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-w4tzn" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.637897 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.637939 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/26cae060-918d-4158-9f68-8d3a47dbd237-auth-proxy-config\") pod \"machine-config-operator-74547568cd-zfk49\" (UID: \"26cae060-918d-4158-9f68-8d3a47dbd237\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zfk49" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.637980 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/eb5f2299-431e-4001-a9f1-52049e2dce8d-metrics-tls\") pod \"dns-default-p6c7j\" (UID: \"eb5f2299-431e-4001-a9f1-52049e2dce8d\") " pod="openshift-dns/dns-default-p6c7j" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.638007 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4d71f46-e21e-46bb-92ce-10335bb7983a-config\") pod \"console-operator-58897d9998-xz6vc\" (UID: \"c4d71f46-e21e-46bb-92ce-10335bb7983a\") " pod="openshift-console-operator/console-operator-58897d9998-xz6vc" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.638035 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qgfd\" (UniqueName: \"kubernetes.io/projected/b15e4501-1c69-4a71-9ca0-440051530c26-kube-api-access-2qgfd\") pod \"cluster-image-registry-operator-dc59b4c8b-6tmkk\" (UID: \"b15e4501-1c69-4a71-9ca0-440051530c26\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6tmkk" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.638060 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/abc32430-2c83-4554-a335-238833ce1a9f-mountpoint-dir\") pod \"csi-hostpathplugin-wl98f\" (UID: \"abc32430-2c83-4554-a335-238833ce1a9f\") " pod="hostpath-provisioner/csi-hostpathplugin-wl98f" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.638082 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6599a11f-be2d-418d-9410-4f27a32db1ab-proxy-tls\") pod \"machine-config-controller-84d6567774-qsvfr\" (UID: \"6599a11f-be2d-418d-9410-4f27a32db1ab\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qsvfr" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.638111 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/1482e43a-84a4-42ed-a605-37cc519dd5ef-trusted-ca-bundle\") pod \"console-f9d7485db-hkw8j\" (UID: \"1482e43a-84a4-42ed-a605-37cc519dd5ef\") " pod="openshift-console/console-f9d7485db-hkw8j" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.638525 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54cc2edc-8881-4459-be91-a4d9536d6b7d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-khrtn\" (UID: \"54cc2edc-8881-4459-be91-a4d9536d6b7d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-khrtn" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.638706 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/26cae060-918d-4158-9f68-8d3a47dbd237-auth-proxy-config\") pod \"machine-config-operator-74547568cd-zfk49\" (UID: \"26cae060-918d-4158-9f68-8d3a47dbd237\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zfk49" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.639006 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c702dc52-bad4-47e0-a9cf-5091595186a3-available-featuregates\") pod \"openshift-config-operator-7777fb866f-2w6c4\" (UID: \"c702dc52-bad4-47e0-a9cf-5091595186a3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2w6c4" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.639170 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60c9c1ea-7744-4f77-b407-ff25f9c12f0f-config\") pod \"kube-controller-manager-operator-78b949d7b-h49wc\" (UID: \"60c9c1ea-7744-4f77-b407-ff25f9c12f0f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h49wc" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.639204 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z44h\" (UniqueName: \"kubernetes.io/projected/7ace0d3e-9818-4322-b306-f74e7d3fd5a1-kube-api-access-2z44h\") pod \"migrator-59844c95c7-xdl4h\" (UID: \"7ace0d3e-9818-4322-b306-f74e7d3fd5a1\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xdl4h" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.639255 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2da6f17d-aeb0-4cc8-8f11-c99bea508129-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qwd92\" (UID: \"2da6f17d-aeb0-4cc8-8f11-c99bea508129\") " pod="openshift-marketplace/marketplace-operator-79b997595-qwd92" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.639294 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66616d78-82d1-4623-ab0f-ec0b7d44ed76-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-45lkw\" (UID: \"66616d78-82d1-4623-ab0f-ec0b7d44ed76\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-45lkw" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.639317 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj4lr\" (UniqueName: 
\"kubernetes.io/projected/4b9bf8ce-2863-4fec-a779-5f4a3841ae3c-kube-api-access-jj4lr\") pod \"service-ca-9c57cc56f-hmc8k\" (UID: \"4b9bf8ce-2863-4fec-a779-5f4a3841ae3c\") " pod="openshift-service-ca/service-ca-9c57cc56f-hmc8k" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.639349 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28lqd\" (UniqueName: \"kubernetes.io/projected/1482e43a-84a4-42ed-a605-37cc519dd5ef-kube-api-access-28lqd\") pod \"console-f9d7485db-hkw8j\" (UID: \"1482e43a-84a4-42ed-a605-37cc519dd5ef\") " pod="openshift-console/console-f9d7485db-hkw8j" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.639391 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c8e45f11-204e-4536-bd7b-76c9bdeee4c3-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-64lqw\" (UID: \"c8e45f11-204e-4536-bd7b-76c9bdeee4c3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-64lqw" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.639420 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d37f6dbf-56d4-46b2-8808-31999002461b-registry-certificates\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.639441 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2da6f17d-aeb0-4cc8-8f11-c99bea508129-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qwd92\" (UID: \"2da6f17d-aeb0-4cc8-8f11-c99bea508129\") " pod="openshift-marketplace/marketplace-operator-79b997595-qwd92" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.639467 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94kj2\" (UniqueName: \"kubernetes.io/projected/a2d22d2d-b77f-491b-956a-c3b36ae92dd1-kube-api-access-94kj2\") pod \"multus-admission-controller-857f4d67dd-7bpv5\" (UID: \"a2d22d2d-b77f-491b-956a-c3b36ae92dd1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7bpv5" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.639490 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60c9c1ea-7744-4f77-b407-ff25f9c12f0f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-h49wc\" (UID: \"60c9c1ea-7744-4f77-b407-ff25f9c12f0f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h49wc" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.639516 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d35202e5-599d-4dae-b3db-0ae1a99416c2-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-94k2w\" (UID: \"d35202e5-599d-4dae-b3db-0ae1a99416c2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-94k2w" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.639563 4826 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-xkncf\" (UniqueName: \"kubernetes.io/projected/26cae060-918d-4158-9f68-8d3a47dbd237-kube-api-access-xkncf\") pod \"machine-config-operator-74547568cd-zfk49\" (UID: \"26cae060-918d-4158-9f68-8d3a47dbd237\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zfk49" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.639590 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x8ql\" (UniqueName: \"kubernetes.io/projected/d35202e5-599d-4dae-b3db-0ae1a99416c2-kube-api-access-6x8ql\") pod \"control-plane-machine-set-operator-78cbb6b69f-94k2w\" (UID: \"d35202e5-599d-4dae-b3db-0ae1a99416c2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-94k2w" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.639614 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/baa36f8a-24fe-4315-b79a-7397d20938ad-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-95pcf\" (UID: \"baa36f8a-24fe-4315-b79a-7397d20938ad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-95pcf" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.639637 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz5lc\" (UniqueName: \"kubernetes.io/projected/c4d71f46-e21e-46bb-92ce-10335bb7983a-kube-api-access-mz5lc\") pod \"console-operator-58897d9998-xz6vc\" (UID: \"c4d71f46-e21e-46bb-92ce-10335bb7983a\") " pod="openshift-console-operator/console-operator-58897d9998-xz6vc" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.639666 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/26cae060-918d-4158-9f68-8d3a47dbd237-proxy-tls\") pod \"machine-config-operator-74547568cd-zfk49\" (UID: \"26cae060-918d-4158-9f68-8d3a47dbd237\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zfk49" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.639689 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzwtt\" (UniqueName: \"kubernetes.io/projected/2da6f17d-aeb0-4cc8-8f11-c99bea508129-kube-api-access-lzwtt\") pod \"marketplace-operator-79b997595-qwd92\" (UID: \"2da6f17d-aeb0-4cc8-8f11-c99bea508129\") " pod="openshift-marketplace/marketplace-operator-79b997595-qwd92" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.639718 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1482e43a-84a4-42ed-a605-37cc519dd5ef-console-config\") pod \"console-f9d7485db-hkw8j\" (UID: \"1482e43a-84a4-42ed-a605-37cc519dd5ef\") " pod="openshift-console/console-f9d7485db-hkw8j" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.639740 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c4d71f46-e21e-46bb-92ce-10335bb7983a-trusted-ca\") pod \"console-operator-58897d9998-xz6vc\" (UID: \"c4d71f46-e21e-46bb-92ce-10335bb7983a\") " pod="openshift-console-operator/console-operator-58897d9998-xz6vc" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.639762 4826 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a2d22d2d-b77f-491b-956a-c3b36ae92dd1-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-7bpv5\" (UID: \"a2d22d2d-b77f-491b-956a-c3b36ae92dd1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7bpv5" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.639791 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5x7r\" (UniqueName: \"kubernetes.io/projected/41e5655c-8535-4d1c-be9d-90c18f3fcc8b-kube-api-access-p5x7r\") pod \"olm-operator-6b444d44fb-w4tzn\" (UID: \"41e5655c-8535-4d1c-be9d-90c18f3fcc8b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-w4tzn" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.639828 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b15e4501-1c69-4a71-9ca0-440051530c26-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-6tmkk\" (UID: \"b15e4501-1c69-4a71-9ca0-440051530c26\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6tmkk" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.639853 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbffd\" (UniqueName: \"kubernetes.io/projected/baa36f8a-24fe-4315-b79a-7397d20938ad-kube-api-access-qbffd\") pod \"kube-storage-version-migrator-operator-b67b599dd-95pcf\" (UID: \"baa36f8a-24fe-4315-b79a-7397d20938ad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-95pcf" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.639911 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66616d78-82d1-4623-ab0f-ec0b7d44ed76-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-45lkw\" (UID: \"66616d78-82d1-4623-ab0f-ec0b7d44ed76\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-45lkw" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.639937 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5c4b5ff7-12e8-44a2-8836-e8b37d34831b-profile-collector-cert\") pod \"catalog-operator-68c6474976-4mdx8\" (UID: \"5c4b5ff7-12e8-44a2-8836-e8b37d34831b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4mdx8" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.640018 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/26cae060-918d-4158-9f68-8d3a47dbd237-images\") pod \"machine-config-operator-74547568cd-zfk49\" (UID: \"26cae060-918d-4158-9f68-8d3a47dbd237\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zfk49" Jan 31 07:38:30 crc kubenswrapper[4826]: E0131 07:38:30.640094 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:38:31.140079239 +0000 UTC m=+142.993965808 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmkrm" (UID: "d37f6dbf-56d4-46b2-8808-31999002461b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.640781 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60c9c1ea-7744-4f77-b407-ff25f9c12f0f-config\") pod \"kube-controller-manager-operator-78b949d7b-h49wc\" (UID: \"60c9c1ea-7744-4f77-b407-ff25f9c12f0f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h49wc" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.641357 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1482e43a-84a4-42ed-a605-37cc519dd5ef-service-ca\") pod \"console-f9d7485db-hkw8j\" (UID: \"1482e43a-84a4-42ed-a605-37cc519dd5ef\") " pod="openshift-console/console-f9d7485db-hkw8j" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.641400 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/90b08ee1-10c5-4aec-a2dc-74edba25f290-certs\") pod \"machine-config-server-8dst8\" (UID: \"90b08ee1-10c5-4aec-a2dc-74edba25f290\") " pod="openshift-machine-config-operator/machine-config-server-8dst8" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.641431 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d37f6dbf-56d4-46b2-8808-31999002461b-registry-certificates\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.642182 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1482e43a-84a4-42ed-a605-37cc519dd5ef-console-config\") pod \"console-f9d7485db-hkw8j\" (UID: \"1482e43a-84a4-42ed-a605-37cc519dd5ef\") " pod="openshift-console/console-f9d7485db-hkw8j" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.642390 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4kxd\" (UniqueName: \"kubernetes.io/projected/c702dc52-bad4-47e0-a9cf-5091595186a3-kube-api-access-g4kxd\") pod \"openshift-config-operator-7777fb866f-2w6c4\" (UID: \"c702dc52-bad4-47e0-a9cf-5091595186a3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2w6c4" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.642506 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbpns\" (UniqueName: \"kubernetes.io/projected/90b08ee1-10c5-4aec-a2dc-74edba25f290-kube-api-access-zbpns\") pod \"machine-config-server-8dst8\" (UID: \"90b08ee1-10c5-4aec-a2dc-74edba25f290\") " pod="openshift-machine-config-operator/machine-config-server-8dst8" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.642762 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/125e0e2a-6c4a-487f-ab4e-fb439ba80bc0-config-volume\") pod \"collect-profiles-29497410-bfxcj\" (UID: \"125e0e2a-6c4a-487f-ab4e-fb439ba80bc0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497410-bfxcj" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.642885 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kgsc\" (UniqueName: \"kubernetes.io/projected/961fdd97-d775-4034-91d6-02f865935a59-kube-api-access-5kgsc\") pod \"ingress-canary-lrcf5\" (UID: \"961fdd97-d775-4034-91d6-02f865935a59\") " pod="openshift-ingress-canary/ingress-canary-lrcf5" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.642928 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a450b005-ae46-404b-8ceb-4b7d915a960d-serving-cert\") pod \"service-ca-operator-777779d784-976l6\" (UID: \"a450b005-ae46-404b-8ceb-4b7d915a960d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-976l6" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.642952 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5c4b5ff7-12e8-44a2-8836-e8b37d34831b-srv-cert\") pod \"catalog-operator-68c6474976-4mdx8\" (UID: \"5c4b5ff7-12e8-44a2-8836-e8b37d34831b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4mdx8" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.643004 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/abc32430-2c83-4554-a335-238833ce1a9f-csi-data-dir\") pod \"csi-hostpathplugin-wl98f\" (UID: \"abc32430-2c83-4554-a335-238833ce1a9f\") " pod="hostpath-provisioner/csi-hostpathplugin-wl98f" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.643035 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwlsr\" (UniqueName: \"kubernetes.io/projected/d37f6dbf-56d4-46b2-8808-31999002461b-kube-api-access-fwlsr\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.643059 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/abc32430-2c83-4554-a335-238833ce1a9f-socket-dir\") pod \"csi-hostpathplugin-wl98f\" (UID: \"abc32430-2c83-4554-a335-238833ce1a9f\") " pod="hostpath-provisioner/csi-hostpathplugin-wl98f" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.643091 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8j6z\" (UniqueName: \"kubernetes.io/projected/bb86efdd-32bd-4a38-b343-6231b1d805f9-kube-api-access-x8j6z\") pod \"dns-operator-744455d44c-5ns9w\" (UID: \"bb86efdd-32bd-4a38-b343-6231b1d805f9\") " pod="openshift-dns-operator/dns-operator-744455d44c-5ns9w" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.643135 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bb86efdd-32bd-4a38-b343-6231b1d805f9-metrics-tls\") pod \"dns-operator-744455d44c-5ns9w\" (UID: \"bb86efdd-32bd-4a38-b343-6231b1d805f9\") " 
pod="openshift-dns-operator/dns-operator-744455d44c-5ns9w" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.643162 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/4b9bf8ce-2863-4fec-a779-5f4a3841ae3c-signing-key\") pod \"service-ca-9c57cc56f-hmc8k\" (UID: \"4b9bf8ce-2863-4fec-a779-5f4a3841ae3c\") " pod="openshift-service-ca/service-ca-9c57cc56f-hmc8k" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.643238 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1482e43a-84a4-42ed-a605-37cc519dd5ef-console-oauth-config\") pod \"console-f9d7485db-hkw8j\" (UID: \"1482e43a-84a4-42ed-a605-37cc519dd5ef\") " pod="openshift-console/console-f9d7485db-hkw8j" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.643298 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54cc2edc-8881-4459-be91-a4d9536d6b7d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-khrtn\" (UID: \"54cc2edc-8881-4459-be91-a4d9536d6b7d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-khrtn" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.643325 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1482e43a-84a4-42ed-a605-37cc519dd5ef-oauth-serving-cert\") pod \"console-f9d7485db-hkw8j\" (UID: \"1482e43a-84a4-42ed-a605-37cc519dd5ef\") " pod="openshift-console/console-f9d7485db-hkw8j" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.643469 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d37f6dbf-56d4-46b2-8808-31999002461b-ca-trust-extracted\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.643505 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c702dc52-bad4-47e0-a9cf-5091595186a3-serving-cert\") pod \"openshift-config-operator-7777fb866f-2w6c4\" (UID: \"c702dc52-bad4-47e0-a9cf-5091595186a3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2w6c4" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.643523 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d37f6dbf-56d4-46b2-8808-31999002461b-registry-tls\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.643544 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b7521f47-f937-4c7a-a556-e03df8257499-tmpfs\") pod \"packageserver-d55dfcdfc-rrdbv\" (UID: \"b7521f47-f937-4c7a-a556-e03df8257499\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rrdbv" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.643558 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1482e43a-84a4-42ed-a605-37cc519dd5ef-trusted-ca-bundle\") pod \"console-f9d7485db-hkw8j\" (UID: \"1482e43a-84a4-42ed-a605-37cc519dd5ef\") " pod="openshift-console/console-f9d7485db-hkw8j" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.643590 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnv5k\" (UniqueName: \"kubernetes.io/projected/b7521f47-f937-4c7a-a556-e03df8257499-kube-api-access-dnv5k\") pod \"packageserver-d55dfcdfc-rrdbv\" (UID: \"b7521f47-f937-4c7a-a556-e03df8257499\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rrdbv" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.643615 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b7521f47-f937-4c7a-a556-e03df8257499-webhook-cert\") pod \"packageserver-d55dfcdfc-rrdbv\" (UID: \"b7521f47-f937-4c7a-a556-e03df8257499\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rrdbv" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.643635 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/4b9bf8ce-2863-4fec-a779-5f4a3841ae3c-signing-cabundle\") pod \"service-ca-9c57cc56f-hmc8k\" (UID: \"4b9bf8ce-2863-4fec-a779-5f4a3841ae3c\") " pod="openshift-service-ca/service-ca-9c57cc56f-hmc8k" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.644153 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d37f6dbf-56d4-46b2-8808-31999002461b-ca-trust-extracted\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.644162 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1482e43a-84a4-42ed-a605-37cc519dd5ef-service-ca\") pod \"console-f9d7485db-hkw8j\" (UID: \"1482e43a-84a4-42ed-a605-37cc519dd5ef\") " pod="openshift-console/console-f9d7485db-hkw8j" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.645086 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/26cae060-918d-4158-9f68-8d3a47dbd237-images\") pod \"machine-config-operator-74547568cd-zfk49\" (UID: \"26cae060-918d-4158-9f68-8d3a47dbd237\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zfk49" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.645334 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1482e43a-84a4-42ed-a605-37cc519dd5ef-oauth-serving-cert\") pod \"console-f9d7485db-hkw8j\" (UID: \"1482e43a-84a4-42ed-a605-37cc519dd5ef\") " pod="openshift-console/console-f9d7485db-hkw8j" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.647465 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/abc32430-2c83-4554-a335-238833ce1a9f-plugins-dir\") pod \"csi-hostpathplugin-wl98f\" (UID: \"abc32430-2c83-4554-a335-238833ce1a9f\") " pod="hostpath-provisioner/csi-hostpathplugin-wl98f" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 
07:38:30.647518 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6599a11f-be2d-418d-9410-4f27a32db1ab-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-qsvfr\" (UID: \"6599a11f-be2d-418d-9410-4f27a32db1ab\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qsvfr" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.647574 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg688\" (UniqueName: \"kubernetes.io/projected/c8e45f11-204e-4536-bd7b-76c9bdeee4c3-kube-api-access-pg688\") pod \"package-server-manager-789f6589d5-64lqw\" (UID: \"c8e45f11-204e-4536-bd7b-76c9bdeee4c3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-64lqw" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.647632 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg22m\" (UniqueName: \"kubernetes.io/projected/6599a11f-be2d-418d-9410-4f27a32db1ab-kube-api-access-fg22m\") pod \"machine-config-controller-84d6567774-qsvfr\" (UID: \"6599a11f-be2d-418d-9410-4f27a32db1ab\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qsvfr" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.647774 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/90b08ee1-10c5-4aec-a2dc-74edba25f290-node-bootstrap-token\") pod \"machine-config-server-8dst8\" (UID: \"90b08ee1-10c5-4aec-a2dc-74edba25f290\") " pod="openshift-machine-config-operator/machine-config-server-8dst8" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.648142 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb5f2299-431e-4001-a9f1-52049e2dce8d-config-volume\") pod \"dns-default-p6c7j\" (UID: \"eb5f2299-431e-4001-a9f1-52049e2dce8d\") " pod="openshift-dns/dns-default-p6c7j" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.648181 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4d71f46-e21e-46bb-92ce-10335bb7983a-serving-cert\") pod \"console-operator-58897d9998-xz6vc\" (UID: \"c4d71f46-e21e-46bb-92ce-10335bb7983a\") " pod="openshift-console-operator/console-operator-58897d9998-xz6vc" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.670770 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d37f6dbf-56d4-46b2-8808-31999002461b-installation-pull-secrets\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.671332 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1482e43a-84a4-42ed-a605-37cc519dd5ef-console-serving-cert\") pod \"console-f9d7485db-hkw8j\" (UID: \"1482e43a-84a4-42ed-a605-37cc519dd5ef\") " pod="openshift-console/console-f9d7485db-hkw8j" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.671637 4826 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54cc2edc-8881-4459-be91-a4d9536d6b7d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-khrtn\" (UID: \"54cc2edc-8881-4459-be91-a4d9536d6b7d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-khrtn" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.672093 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d37f6dbf-56d4-46b2-8808-31999002461b-registry-tls\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.672521 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b15e4501-1c69-4a71-9ca0-440051530c26-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-6tmkk\" (UID: \"b15e4501-1c69-4a71-9ca0-440051530c26\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6tmkk" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.672900 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60c9c1ea-7744-4f77-b407-ff25f9c12f0f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-h49wc\" (UID: \"60c9c1ea-7744-4f77-b407-ff25f9c12f0f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h49wc" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.676065 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/26cae060-918d-4158-9f68-8d3a47dbd237-proxy-tls\") pod \"machine-config-operator-74547568cd-zfk49\" (UID: \"26cae060-918d-4158-9f68-8d3a47dbd237\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zfk49" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.676065 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c702dc52-bad4-47e0-a9cf-5091595186a3-serving-cert\") pod \"openshift-config-operator-7777fb866f-2w6c4\" (UID: \"c702dc52-bad4-47e0-a9cf-5091595186a3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2w6c4" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.676445 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bb86efdd-32bd-4a38-b343-6231b1d805f9-metrics-tls\") pod \"dns-operator-744455d44c-5ns9w\" (UID: \"bb86efdd-32bd-4a38-b343-6231b1d805f9\") " pod="openshift-dns-operator/dns-operator-744455d44c-5ns9w" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.676863 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1482e43a-84a4-42ed-a605-37cc519dd5ef-console-oauth-config\") pod \"console-f9d7485db-hkw8j\" (UID: \"1482e43a-84a4-42ed-a605-37cc519dd5ef\") " pod="openshift-console/console-f9d7485db-hkw8j" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.677482 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-bp2zc" 
event={"ID":"e71acdff-33d6-4052-907b-8e38ac391f58","Type":"ContainerStarted","Data":"bbdeeab8e11de0854c5260c6747412bcefcab050b56c2447fbf11f47a97eb992"} Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.677520 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-bp2zc" event={"ID":"e71acdff-33d6-4052-907b-8e38ac391f58","Type":"ContainerStarted","Data":"85eb30845c7db18c0f30d35bc270fe296bdbab1da275b6cd4810d28486ca7cf7"} Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.678104 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-bp2zc" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.678387 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a450b005-ae46-404b-8ceb-4b7d915a960d-config\") pod \"service-ca-operator-777779d784-976l6\" (UID: \"a450b005-ae46-404b-8ceb-4b7d915a960d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-976l6" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.678434 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gt7m\" (UniqueName: \"kubernetes.io/projected/a450b005-ae46-404b-8ceb-4b7d915a960d-kube-api-access-8gt7m\") pod \"service-ca-operator-777779d784-976l6\" (UID: \"a450b005-ae46-404b-8ceb-4b7d915a960d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-976l6" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.678913 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d37f6dbf-56d4-46b2-8808-31999002461b-trusted-ca\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.679184 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b15e4501-1c69-4a71-9ca0-440051530c26-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-6tmkk\" (UID: \"b15e4501-1c69-4a71-9ca0-440051530c26\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6tmkk" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.679221 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckbw7\" (UniqueName: \"kubernetes.io/projected/abc32430-2c83-4554-a335-238833ce1a9f-kube-api-access-ckbw7\") pod \"csi-hostpathplugin-wl98f\" (UID: \"abc32430-2c83-4554-a335-238833ce1a9f\") " pod="hostpath-provisioner/csi-hostpathplugin-wl98f" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.679274 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/125e0e2a-6c4a-487f-ab4e-fb439ba80bc0-secret-volume\") pod \"collect-profiles-29497410-bfxcj\" (UID: \"125e0e2a-6c4a-487f-ab4e-fb439ba80bc0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497410-bfxcj" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.680361 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b15e4501-1c69-4a71-9ca0-440051530c26-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-6tmkk\" (UID: 
\"b15e4501-1c69-4a71-9ca0-440051530c26\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6tmkk" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.681315 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d37f6dbf-56d4-46b2-8808-31999002461b-trusted-ca\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.696185 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvkpm" event={"ID":"138c4519-aea9-40d6-a633-afe9fe199a6d","Type":"ContainerStarted","Data":"94cb17d7b0aefe4b2c18328732cb35b0bec0085e553f5f9d565aa9568a2f6d95"} Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.696238 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvkpm" event={"ID":"138c4519-aea9-40d6-a633-afe9fe199a6d","Type":"ContainerStarted","Data":"eea64576e652fab654ec0b77d67f397aa06219897a2a0be7051fe57f05feb904"} Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.697259 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvkpm" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.705712 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-ngf5x" event={"ID":"13fd6a2b-6076-4bd6-8bc2-466b802bdde4","Type":"ContainerStarted","Data":"48ff83a52f786c4f462f37656ecfabae10a4a7f2822bec0a18dd98e3b1154782"} Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.706194 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-ngf5x" event={"ID":"13fd6a2b-6076-4bd6-8bc2-466b802bdde4","Type":"ContainerStarted","Data":"5ac6b613745e05f11a90f3e7cef56be6ad14258f9dc60720d8c61f2790246fe6"} Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.706209 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-ngf5x" event={"ID":"13fd6a2b-6076-4bd6-8bc2-466b802bdde4","Type":"ContainerStarted","Data":"41aee4666423792d1bae6234979726c83a2d9f315ebc5fe9b69de3f1b946fabc"} Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.707155 4826 patch_prober.go:28] interesting pod/downloads-7954f5f757-bp2zc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.707218 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bp2zc" podUID="e71acdff-33d6-4052-907b-8e38ac391f58" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.710689 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-7bmfc" event={"ID":"255c8988-162e-4ae1-982f-e45cde006077","Type":"ContainerStarted","Data":"a26ef1ceab9b3c05c88ec85872ddd3339321979755ed8a65f6b269b691a1e218"} Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.712623 4826 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/54cc2edc-8881-4459-be91-a4d9536d6b7d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-khrtn\" (UID: \"54cc2edc-8881-4459-be91-a4d9536d6b7d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-khrtn" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.714218 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60c9c1ea-7744-4f77-b407-ff25f9c12f0f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-h49wc\" (UID: \"60c9c1ea-7744-4f77-b407-ff25f9c12f0f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h49wc" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.714579 4826 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-xvkpm container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.714634 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvkpm" podUID="138c4519-aea9-40d6-a633-afe9fe199a6d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.719096 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9qt4f" event={"ID":"69a0b5c3-55a0-4fe5-866d-660e663e5112","Type":"ContainerStarted","Data":"3d6962a47ecc6245e100e63566bb60acd476c527c0efeecdb9319612bf8820f2"} Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.723810 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qgfd\" (UniqueName: \"kubernetes.io/projected/b15e4501-1c69-4a71-9ca0-440051530c26-kube-api-access-2qgfd\") pod \"cluster-image-registry-operator-dc59b4c8b-6tmkk\" (UID: \"b15e4501-1c69-4a71-9ca0-440051530c26\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6tmkk" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.724947 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xfxnd" event={"ID":"7d6dffb0-9e73-4f02-ab48-f04555936387","Type":"ContainerStarted","Data":"2abef8ad09ae7b92e1324eb47355927cdc2850ceae617734c057d21c7e10d814"} Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.725099 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xfxnd" event={"ID":"7d6dffb0-9e73-4f02-ab48-f04555936387","Type":"ContainerStarted","Data":"d86c3272ddf464a4024254f9fb929582c07b204f22fbcba6f5088921835f14db"} Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.736513 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8vz7r" event={"ID":"0ec09e55-a5b7-4aed-af1b-d36fc00592d3","Type":"ContainerStarted","Data":"5587193c0d02a5765a9cf0238f8fb32d90a12ae1de6100d1b4f67b6fcb9f1e1d"} Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.738230 4826 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g94hs" event={"ID":"f2b64f74-7aa6-459c-be6d-f6f8966b456f","Type":"ContainerStarted","Data":"7bce7234f7527332d350d082d70ee9b16eb8430a0fdf1b8d5e890bebc75e0670"} Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.743118 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d37f6dbf-56d4-46b2-8808-31999002461b-bound-sa-token\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.756322 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" event={"ID":"f8a24898-167c-483a-9d54-7412fb063199","Type":"ContainerStarted","Data":"9f4d67605384b48f50722e0a3d0a519057f33e157fc510fef4f626b620ca440e"} Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.756378 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" event={"ID":"f8a24898-167c-483a-9d54-7412fb063199","Type":"ContainerStarted","Data":"49bbbfb4b01b2d3002a038a885b7f3b025d9aa3f892994461cdb72eee022dce0"} Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.756641 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.758485 4826 generic.go:334] "Generic (PLEG): container finished" podID="73f5be70-5ac7-4585-b5a3-e75c5f766822" containerID="c1177f07971c33eddc0ea36be35f7be1eac2dd3877f84c1ac57aa7bb9f9cd56a" exitCode=0 Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.758527 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-fbprj" event={"ID":"73f5be70-5ac7-4585-b5a3-e75c5f766822","Type":"ContainerDied","Data":"c1177f07971c33eddc0ea36be35f7be1eac2dd3877f84c1ac57aa7bb9f9cd56a"} Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.758553 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-fbprj" event={"ID":"73f5be70-5ac7-4585-b5a3-e75c5f766822","Type":"ContainerStarted","Data":"e5fec1db5ed77b8734574c4662deea6c10ff5ac6c8d02163e01a4e7e42bf2893"} Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.759869 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28lqd\" (UniqueName: \"kubernetes.io/projected/1482e43a-84a4-42ed-a605-37cc519dd5ef-kube-api-access-28lqd\") pod \"console-f9d7485db-hkw8j\" (UID: \"1482e43a-84a4-42ed-a605-37cc519dd5ef\") " pod="openshift-console/console-f9d7485db-hkw8j" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.763708 4826 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-cgwgw container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" start-of-body= Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.763749 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" podUID="f8a24898-167c-483a-9d54-7412fb063199" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" Jan 31 
07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.773548 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b15e4501-1c69-4a71-9ca0-440051530c26-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-6tmkk\" (UID: \"b15e4501-1c69-4a71-9ca0-440051530c26\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6tmkk" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.781584 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.781745 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b7521f47-f937-4c7a-a556-e03df8257499-tmpfs\") pod \"packageserver-d55dfcdfc-rrdbv\" (UID: \"b7521f47-f937-4c7a-a556-e03df8257499\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rrdbv" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.781766 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnv5k\" (UniqueName: \"kubernetes.io/projected/b7521f47-f937-4c7a-a556-e03df8257499-kube-api-access-dnv5k\") pod \"packageserver-d55dfcdfc-rrdbv\" (UID: \"b7521f47-f937-4c7a-a556-e03df8257499\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rrdbv" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.781782 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/4b9bf8ce-2863-4fec-a779-5f4a3841ae3c-signing-cabundle\") pod \"service-ca-9c57cc56f-hmc8k\" (UID: \"4b9bf8ce-2863-4fec-a779-5f4a3841ae3c\") " pod="openshift-service-ca/service-ca-9c57cc56f-hmc8k" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.781827 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b7521f47-f937-4c7a-a556-e03df8257499-webhook-cert\") pod \"packageserver-d55dfcdfc-rrdbv\" (UID: \"b7521f47-f937-4c7a-a556-e03df8257499\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rrdbv" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.781847 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/abc32430-2c83-4554-a335-238833ce1a9f-plugins-dir\") pod \"csi-hostpathplugin-wl98f\" (UID: \"abc32430-2c83-4554-a335-238833ce1a9f\") " pod="hostpath-provisioner/csi-hostpathplugin-wl98f" Jan 31 07:38:30 crc kubenswrapper[4826]: E0131 07:38:30.781863 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:38:31.28184399 +0000 UTC m=+143.135730349 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.781892 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6599a11f-be2d-418d-9410-4f27a32db1ab-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-qsvfr\" (UID: \"6599a11f-be2d-418d-9410-4f27a32db1ab\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qsvfr" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.781928 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pg688\" (UniqueName: \"kubernetes.io/projected/c8e45f11-204e-4536-bd7b-76c9bdeee4c3-kube-api-access-pg688\") pod \"package-server-manager-789f6589d5-64lqw\" (UID: \"c8e45f11-204e-4536-bd7b-76c9bdeee4c3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-64lqw" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.781946 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg22m\" (UniqueName: \"kubernetes.io/projected/6599a11f-be2d-418d-9410-4f27a32db1ab-kube-api-access-fg22m\") pod \"machine-config-controller-84d6567774-qsvfr\" (UID: \"6599a11f-be2d-418d-9410-4f27a32db1ab\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qsvfr" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.781980 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb5f2299-431e-4001-a9f1-52049e2dce8d-config-volume\") pod \"dns-default-p6c7j\" (UID: \"eb5f2299-431e-4001-a9f1-52049e2dce8d\") " pod="openshift-dns/dns-default-p6c7j" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.781997 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4d71f46-e21e-46bb-92ce-10335bb7983a-serving-cert\") pod \"console-operator-58897d9998-xz6vc\" (UID: \"c4d71f46-e21e-46bb-92ce-10335bb7983a\") " pod="openshift-console-operator/console-operator-58897d9998-xz6vc" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.782012 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/90b08ee1-10c5-4aec-a2dc-74edba25f290-node-bootstrap-token\") pod \"machine-config-server-8dst8\" (UID: \"90b08ee1-10c5-4aec-a2dc-74edba25f290\") " pod="openshift-machine-config-operator/machine-config-server-8dst8" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.782061 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/abc32430-2c83-4554-a335-238833ce1a9f-plugins-dir\") pod \"csi-hostpathplugin-wl98f\" (UID: \"abc32430-2c83-4554-a335-238833ce1a9f\") " pod="hostpath-provisioner/csi-hostpathplugin-wl98f" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.782742 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" 
(UniqueName: \"kubernetes.io/configmap/4b9bf8ce-2863-4fec-a779-5f4a3841ae3c-signing-cabundle\") pod \"service-ca-9c57cc56f-hmc8k\" (UID: \"4b9bf8ce-2863-4fec-a779-5f4a3841ae3c\") " pod="openshift-service-ca/service-ca-9c57cc56f-hmc8k" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.782854 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b7521f47-f937-4c7a-a556-e03df8257499-tmpfs\") pod \"packageserver-d55dfcdfc-rrdbv\" (UID: \"b7521f47-f937-4c7a-a556-e03df8257499\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rrdbv" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.783373 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb5f2299-431e-4001-a9f1-52049e2dce8d-config-volume\") pod \"dns-default-p6c7j\" (UID: \"eb5f2299-431e-4001-a9f1-52049e2dce8d\") " pod="openshift-dns/dns-default-p6c7j" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.783389 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a450b005-ae46-404b-8ceb-4b7d915a960d-config\") pod \"service-ca-operator-777779d784-976l6\" (UID: \"a450b005-ae46-404b-8ceb-4b7d915a960d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-976l6" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.783440 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gt7m\" (UniqueName: \"kubernetes.io/projected/a450b005-ae46-404b-8ceb-4b7d915a960d-kube-api-access-8gt7m\") pod \"service-ca-operator-777779d784-976l6\" (UID: \"a450b005-ae46-404b-8ceb-4b7d915a960d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-976l6" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.783483 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckbw7\" (UniqueName: \"kubernetes.io/projected/abc32430-2c83-4554-a335-238833ce1a9f-kube-api-access-ckbw7\") pod \"csi-hostpathplugin-wl98f\" (UID: \"abc32430-2c83-4554-a335-238833ce1a9f\") " pod="hostpath-provisioner/csi-hostpathplugin-wl98f" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.783509 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/125e0e2a-6c4a-487f-ab4e-fb439ba80bc0-secret-volume\") pod \"collect-profiles-29497410-bfxcj\" (UID: \"125e0e2a-6c4a-487f-ab4e-fb439ba80bc0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497410-bfxcj" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.783534 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/41e5655c-8535-4d1c-be9d-90c18f3fcc8b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-w4tzn\" (UID: \"41e5655c-8535-4d1c-be9d-90c18f3fcc8b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-w4tzn" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.783558 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xn5w\" (UniqueName: \"kubernetes.io/projected/eb5f2299-431e-4001-a9f1-52049e2dce8d-kube-api-access-8xn5w\") pod \"dns-default-p6c7j\" (UID: \"eb5f2299-431e-4001-a9f1-52049e2dce8d\") " pod="openshift-dns/dns-default-p6c7j" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.783597 4826 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/abc32430-2c83-4554-a335-238833ce1a9f-registration-dir\") pod \"csi-hostpathplugin-wl98f\" (UID: \"abc32430-2c83-4554-a335-238833ce1a9f\") " pod="hostpath-provisioner/csi-hostpathplugin-wl98f" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.783624 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/baa36f8a-24fe-4315-b79a-7397d20938ad-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-95pcf\" (UID: \"baa36f8a-24fe-4315-b79a-7397d20938ad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-95pcf" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.783652 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/961fdd97-d775-4034-91d6-02f865935a59-cert\") pod \"ingress-canary-lrcf5\" (UID: \"961fdd97-d775-4034-91d6-02f865935a59\") " pod="openshift-ingress-canary/ingress-canary-lrcf5" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.783683 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlts2\" (UniqueName: \"kubernetes.io/projected/66616d78-82d1-4623-ab0f-ec0b7d44ed76-kube-api-access-mlts2\") pod \"openshift-controller-manager-operator-756b6f6bc6-45lkw\" (UID: \"66616d78-82d1-4623-ab0f-ec0b7d44ed76\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-45lkw" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.783716 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nqqc\" (UniqueName: \"kubernetes.io/projected/5c4b5ff7-12e8-44a2-8836-e8b37d34831b-kube-api-access-9nqqc\") pod \"catalog-operator-68c6474976-4mdx8\" (UID: \"5c4b5ff7-12e8-44a2-8836-e8b37d34831b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4mdx8" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.783774 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4l42\" (UniqueName: \"kubernetes.io/projected/125e0e2a-6c4a-487f-ab4e-fb439ba80bc0-kube-api-access-k4l42\") pod \"collect-profiles-29497410-bfxcj\" (UID: \"125e0e2a-6c4a-487f-ab4e-fb439ba80bc0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497410-bfxcj" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.783821 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b7521f47-f937-4c7a-a556-e03df8257499-apiservice-cert\") pod \"packageserver-d55dfcdfc-rrdbv\" (UID: \"b7521f47-f937-4c7a-a556-e03df8257499\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rrdbv" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.783847 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/abc32430-2c83-4554-a335-238833ce1a9f-registration-dir\") pod \"csi-hostpathplugin-wl98f\" (UID: \"abc32430-2c83-4554-a335-238833ce1a9f\") " pod="hostpath-provisioner/csi-hostpathplugin-wl98f" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.783864 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.783896 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/41e5655c-8535-4d1c-be9d-90c18f3fcc8b-srv-cert\") pod \"olm-operator-6b444d44fb-w4tzn\" (UID: \"41e5655c-8535-4d1c-be9d-90c18f3fcc8b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-w4tzn" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.783927 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/eb5f2299-431e-4001-a9f1-52049e2dce8d-metrics-tls\") pod \"dns-default-p6c7j\" (UID: \"eb5f2299-431e-4001-a9f1-52049e2dce8d\") " pod="openshift-dns/dns-default-p6c7j" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.783954 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4d71f46-e21e-46bb-92ce-10335bb7983a-config\") pod \"console-operator-58897d9998-xz6vc\" (UID: \"c4d71f46-e21e-46bb-92ce-10335bb7983a\") " pod="openshift-console-operator/console-operator-58897d9998-xz6vc" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.784018 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/abc32430-2c83-4554-a335-238833ce1a9f-mountpoint-dir\") pod \"csi-hostpathplugin-wl98f\" (UID: \"abc32430-2c83-4554-a335-238833ce1a9f\") " pod="hostpath-provisioner/csi-hostpathplugin-wl98f" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.784043 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6599a11f-be2d-418d-9410-4f27a32db1ab-proxy-tls\") pod \"machine-config-controller-84d6567774-qsvfr\" (UID: \"6599a11f-be2d-418d-9410-4f27a32db1ab\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qsvfr" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.784085 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2z44h\" (UniqueName: \"kubernetes.io/projected/7ace0d3e-9818-4322-b306-f74e7d3fd5a1-kube-api-access-2z44h\") pod \"migrator-59844c95c7-xdl4h\" (UID: \"7ace0d3e-9818-4322-b306-f74e7d3fd5a1\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xdl4h" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.784122 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2da6f17d-aeb0-4cc8-8f11-c99bea508129-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qwd92\" (UID: \"2da6f17d-aeb0-4cc8-8f11-c99bea508129\") " pod="openshift-marketplace/marketplace-operator-79b997595-qwd92" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.784147 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66616d78-82d1-4623-ab0f-ec0b7d44ed76-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-45lkw\" (UID: \"66616d78-82d1-4623-ab0f-ec0b7d44ed76\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-45lkw" Jan 31 
07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.784173 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj4lr\" (UniqueName: \"kubernetes.io/projected/4b9bf8ce-2863-4fec-a779-5f4a3841ae3c-kube-api-access-jj4lr\") pod \"service-ca-9c57cc56f-hmc8k\" (UID: \"4b9bf8ce-2863-4fec-a779-5f4a3841ae3c\") " pod="openshift-service-ca/service-ca-9c57cc56f-hmc8k" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.784204 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c8e45f11-204e-4536-bd7b-76c9bdeee4c3-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-64lqw\" (UID: \"c8e45f11-204e-4536-bd7b-76c9bdeee4c3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-64lqw" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.784232 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2da6f17d-aeb0-4cc8-8f11-c99bea508129-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qwd92\" (UID: \"2da6f17d-aeb0-4cc8-8f11-c99bea508129\") " pod="openshift-marketplace/marketplace-operator-79b997595-qwd92" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.784261 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94kj2\" (UniqueName: \"kubernetes.io/projected/a2d22d2d-b77f-491b-956a-c3b36ae92dd1-kube-api-access-94kj2\") pod \"multus-admission-controller-857f4d67dd-7bpv5\" (UID: \"a2d22d2d-b77f-491b-956a-c3b36ae92dd1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7bpv5" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.784263 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a450b005-ae46-404b-8ceb-4b7d915a960d-config\") pod \"service-ca-operator-777779d784-976l6\" (UID: \"a450b005-ae46-404b-8ceb-4b7d915a960d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-976l6" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.784286 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d35202e5-599d-4dae-b3db-0ae1a99416c2-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-94k2w\" (UID: \"d35202e5-599d-4dae-b3db-0ae1a99416c2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-94k2w" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.784334 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x8ql\" (UniqueName: \"kubernetes.io/projected/d35202e5-599d-4dae-b3db-0ae1a99416c2-kube-api-access-6x8ql\") pod \"control-plane-machine-set-operator-78cbb6b69f-94k2w\" (UID: \"d35202e5-599d-4dae-b3db-0ae1a99416c2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-94k2w" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.784362 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/baa36f8a-24fe-4315-b79a-7397d20938ad-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-95pcf\" (UID: \"baa36f8a-24fe-4315-b79a-7397d20938ad\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-95pcf" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.784387 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mz5lc\" (UniqueName: \"kubernetes.io/projected/c4d71f46-e21e-46bb-92ce-10335bb7983a-kube-api-access-mz5lc\") pod \"console-operator-58897d9998-xz6vc\" (UID: \"c4d71f46-e21e-46bb-92ce-10335bb7983a\") " pod="openshift-console-operator/console-operator-58897d9998-xz6vc" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.784414 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzwtt\" (UniqueName: \"kubernetes.io/projected/2da6f17d-aeb0-4cc8-8f11-c99bea508129-kube-api-access-lzwtt\") pod \"marketplace-operator-79b997595-qwd92\" (UID: \"2da6f17d-aeb0-4cc8-8f11-c99bea508129\") " pod="openshift-marketplace/marketplace-operator-79b997595-qwd92" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.784441 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c4d71f46-e21e-46bb-92ce-10335bb7983a-trusted-ca\") pod \"console-operator-58897d9998-xz6vc\" (UID: \"c4d71f46-e21e-46bb-92ce-10335bb7983a\") " pod="openshift-console-operator/console-operator-58897d9998-xz6vc" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.784465 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a2d22d2d-b77f-491b-956a-c3b36ae92dd1-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-7bpv5\" (UID: \"a2d22d2d-b77f-491b-956a-c3b36ae92dd1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7bpv5" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.784491 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5x7r\" (UniqueName: \"kubernetes.io/projected/41e5655c-8535-4d1c-be9d-90c18f3fcc8b-kube-api-access-p5x7r\") pod \"olm-operator-6b444d44fb-w4tzn\" (UID: \"41e5655c-8535-4d1c-be9d-90c18f3fcc8b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-w4tzn" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.784519 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbffd\" (UniqueName: \"kubernetes.io/projected/baa36f8a-24fe-4315-b79a-7397d20938ad-kube-api-access-qbffd\") pod \"kube-storage-version-migrator-operator-b67b599dd-95pcf\" (UID: \"baa36f8a-24fe-4315-b79a-7397d20938ad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-95pcf" Jan 31 07:38:30 crc kubenswrapper[4826]: E0131 07:38:30.784525 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:38:31.284515917 +0000 UTC m=+143.138402276 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmkrm" (UID: "d37f6dbf-56d4-46b2-8808-31999002461b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.784554 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66616d78-82d1-4623-ab0f-ec0b7d44ed76-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-45lkw\" (UID: \"66616d78-82d1-4623-ab0f-ec0b7d44ed76\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-45lkw" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.784579 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5c4b5ff7-12e8-44a2-8836-e8b37d34831b-profile-collector-cert\") pod \"catalog-operator-68c6474976-4mdx8\" (UID: \"5c4b5ff7-12e8-44a2-8836-e8b37d34831b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4mdx8" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.785206 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66616d78-82d1-4623-ab0f-ec0b7d44ed76-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-45lkw\" (UID: \"66616d78-82d1-4623-ab0f-ec0b7d44ed76\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-45lkw" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.785456 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2da6f17d-aeb0-4cc8-8f11-c99bea508129-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qwd92\" (UID: \"2da6f17d-aeb0-4cc8-8f11-c99bea508129\") " pod="openshift-marketplace/marketplace-operator-79b997595-qwd92" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.789392 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4d71f46-e21e-46bb-92ce-10335bb7983a-config\") pod \"console-operator-58897d9998-xz6vc\" (UID: \"c4d71f46-e21e-46bb-92ce-10335bb7983a\") " pod="openshift-console-operator/console-operator-58897d9998-xz6vc" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.789889 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/abc32430-2c83-4554-a335-238833ce1a9f-mountpoint-dir\") pod \"csi-hostpathplugin-wl98f\" (UID: \"abc32430-2c83-4554-a335-238833ce1a9f\") " pod="hostpath-provisioner/csi-hostpathplugin-wl98f" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.790055 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5c4b5ff7-12e8-44a2-8836-e8b37d34831b-profile-collector-cert\") pod \"catalog-operator-68c6474976-4mdx8\" (UID: \"5c4b5ff7-12e8-44a2-8836-e8b37d34831b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4mdx8" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.790117 4826 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/90b08ee1-10c5-4aec-a2dc-74edba25f290-certs\") pod \"machine-config-server-8dst8\" (UID: \"90b08ee1-10c5-4aec-a2dc-74edba25f290\") " pod="openshift-machine-config-operator/machine-config-server-8dst8" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.790447 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/90b08ee1-10c5-4aec-a2dc-74edba25f290-node-bootstrap-token\") pod \"machine-config-server-8dst8\" (UID: \"90b08ee1-10c5-4aec-a2dc-74edba25f290\") " pod="openshift-machine-config-operator/machine-config-server-8dst8" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.790936 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbpns\" (UniqueName: \"kubernetes.io/projected/90b08ee1-10c5-4aec-a2dc-74edba25f290-kube-api-access-zbpns\") pod \"machine-config-server-8dst8\" (UID: \"90b08ee1-10c5-4aec-a2dc-74edba25f290\") " pod="openshift-machine-config-operator/machine-config-server-8dst8" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.791171 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/125e0e2a-6c4a-487f-ab4e-fb439ba80bc0-secret-volume\") pod \"collect-profiles-29497410-bfxcj\" (UID: \"125e0e2a-6c4a-487f-ab4e-fb439ba80bc0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497410-bfxcj" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.791309 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/125e0e2a-6c4a-487f-ab4e-fb439ba80bc0-config-volume\") pod \"collect-profiles-29497410-bfxcj\" (UID: \"125e0e2a-6c4a-487f-ab4e-fb439ba80bc0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497410-bfxcj" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.791364 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kgsc\" (UniqueName: \"kubernetes.io/projected/961fdd97-d775-4034-91d6-02f865935a59-kube-api-access-5kgsc\") pod \"ingress-canary-lrcf5\" (UID: \"961fdd97-d775-4034-91d6-02f865935a59\") " pod="openshift-ingress-canary/ingress-canary-lrcf5" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.791394 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5c4b5ff7-12e8-44a2-8836-e8b37d34831b-srv-cert\") pod \"catalog-operator-68c6474976-4mdx8\" (UID: \"5c4b5ff7-12e8-44a2-8836-e8b37d34831b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4mdx8" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.791420 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a450b005-ae46-404b-8ceb-4b7d915a960d-serving-cert\") pod \"service-ca-operator-777779d784-976l6\" (UID: \"a450b005-ae46-404b-8ceb-4b7d915a960d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-976l6" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.791486 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/abc32430-2c83-4554-a335-238833ce1a9f-csi-data-dir\") pod \"csi-hostpathplugin-wl98f\" (UID: \"abc32430-2c83-4554-a335-238833ce1a9f\") " 
pod="hostpath-provisioner/csi-hostpathplugin-wl98f" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.791528 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/abc32430-2c83-4554-a335-238833ce1a9f-socket-dir\") pod \"csi-hostpathplugin-wl98f\" (UID: \"abc32430-2c83-4554-a335-238833ce1a9f\") " pod="hostpath-provisioner/csi-hostpathplugin-wl98f" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.791592 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/4b9bf8ce-2863-4fec-a779-5f4a3841ae3c-signing-key\") pod \"service-ca-9c57cc56f-hmc8k\" (UID: \"4b9bf8ce-2863-4fec-a779-5f4a3841ae3c\") " pod="openshift-service-ca/service-ca-9c57cc56f-hmc8k" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.791812 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c4d71f46-e21e-46bb-92ce-10335bb7983a-trusted-ca\") pod \"console-operator-58897d9998-xz6vc\" (UID: \"c4d71f46-e21e-46bb-92ce-10335bb7983a\") " pod="openshift-console-operator/console-operator-58897d9998-xz6vc" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.791993 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/eb5f2299-431e-4001-a9f1-52049e2dce8d-metrics-tls\") pod \"dns-default-p6c7j\" (UID: \"eb5f2299-431e-4001-a9f1-52049e2dce8d\") " pod="openshift-dns/dns-default-p6c7j" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.792542 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/baa36f8a-24fe-4315-b79a-7397d20938ad-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-95pcf\" (UID: \"baa36f8a-24fe-4315-b79a-7397d20938ad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-95pcf" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.792777 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2da6f17d-aeb0-4cc8-8f11-c99bea508129-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qwd92\" (UID: \"2da6f17d-aeb0-4cc8-8f11-c99bea508129\") " pod="openshift-marketplace/marketplace-operator-79b997595-qwd92" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.792819 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/baa36f8a-24fe-4315-b79a-7397d20938ad-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-95pcf\" (UID: \"baa36f8a-24fe-4315-b79a-7397d20938ad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-95pcf" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.793055 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/90b08ee1-10c5-4aec-a2dc-74edba25f290-certs\") pod \"machine-config-server-8dst8\" (UID: \"90b08ee1-10c5-4aec-a2dc-74edba25f290\") " pod="openshift-machine-config-operator/machine-config-server-8dst8" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.793165 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/b7521f47-f937-4c7a-a556-e03df8257499-apiservice-cert\") pod \"packageserver-d55dfcdfc-rrdbv\" (UID: \"b7521f47-f937-4c7a-a556-e03df8257499\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rrdbv" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.793364 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/abc32430-2c83-4554-a335-238833ce1a9f-csi-data-dir\") pod \"csi-hostpathplugin-wl98f\" (UID: \"abc32430-2c83-4554-a335-238833ce1a9f\") " pod="hostpath-provisioner/csi-hostpathplugin-wl98f" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.795093 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/125e0e2a-6c4a-487f-ab4e-fb439ba80bc0-config-volume\") pod \"collect-profiles-29497410-bfxcj\" (UID: \"125e0e2a-6c4a-487f-ab4e-fb439ba80bc0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497410-bfxcj" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.795150 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/abc32430-2c83-4554-a335-238833ce1a9f-socket-dir\") pod \"csi-hostpathplugin-wl98f\" (UID: \"abc32430-2c83-4554-a335-238833ce1a9f\") " pod="hostpath-provisioner/csi-hostpathplugin-wl98f" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.795178 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d35202e5-599d-4dae-b3db-0ae1a99416c2-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-94k2w\" (UID: \"d35202e5-599d-4dae-b3db-0ae1a99416c2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-94k2w" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.800496 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5c4b5ff7-12e8-44a2-8836-e8b37d34831b-srv-cert\") pod \"catalog-operator-68c6474976-4mdx8\" (UID: \"5c4b5ff7-12e8-44a2-8836-e8b37d34831b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4mdx8" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.800865 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/41e5655c-8535-4d1c-be9d-90c18f3fcc8b-srv-cert\") pod \"olm-operator-6b444d44fb-w4tzn\" (UID: \"41e5655c-8535-4d1c-be9d-90c18f3fcc8b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-w4tzn" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.801372 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4d71f46-e21e-46bb-92ce-10335bb7983a-serving-cert\") pod \"console-operator-58897d9998-xz6vc\" (UID: \"c4d71f46-e21e-46bb-92ce-10335bb7983a\") " pod="openshift-console-operator/console-operator-58897d9998-xz6vc" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.801541 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6599a11f-be2d-418d-9410-4f27a32db1ab-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-qsvfr\" (UID: \"6599a11f-be2d-418d-9410-4f27a32db1ab\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qsvfr" Jan 31 07:38:30 
crc kubenswrapper[4826]: I0131 07:38:30.801933 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a2d22d2d-b77f-491b-956a-c3b36ae92dd1-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-7bpv5\" (UID: \"a2d22d2d-b77f-491b-956a-c3b36ae92dd1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7bpv5" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.802240 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/41e5655c-8535-4d1c-be9d-90c18f3fcc8b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-w4tzn\" (UID: \"41e5655c-8535-4d1c-be9d-90c18f3fcc8b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-w4tzn" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.804301 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/4b9bf8ce-2863-4fec-a779-5f4a3841ae3c-signing-key\") pod \"service-ca-9c57cc56f-hmc8k\" (UID: \"4b9bf8ce-2863-4fec-a779-5f4a3841ae3c\") " pod="openshift-service-ca/service-ca-9c57cc56f-hmc8k" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.804347 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b7521f47-f937-4c7a-a556-e03df8257499-webhook-cert\") pod \"packageserver-d55dfcdfc-rrdbv\" (UID: \"b7521f47-f937-4c7a-a556-e03df8257499\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rrdbv" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.804729 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6599a11f-be2d-418d-9410-4f27a32db1ab-proxy-tls\") pod \"machine-config-controller-84d6567774-qsvfr\" (UID: \"6599a11f-be2d-418d-9410-4f27a32db1ab\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qsvfr" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.807402 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-nk42r"] Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.808429 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkncf\" (UniqueName: \"kubernetes.io/projected/26cae060-918d-4158-9f68-8d3a47dbd237-kube-api-access-xkncf\") pod \"machine-config-operator-74547568cd-zfk49\" (UID: \"26cae060-918d-4158-9f68-8d3a47dbd237\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zfk49" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.812132 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a450b005-ae46-404b-8ceb-4b7d915a960d-serving-cert\") pod \"service-ca-operator-777779d784-976l6\" (UID: \"a450b005-ae46-404b-8ceb-4b7d915a960d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-976l6" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.812725 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66616d78-82d1-4623-ab0f-ec0b7d44ed76-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-45lkw\" (UID: \"66616d78-82d1-4623-ab0f-ec0b7d44ed76\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-45lkw" Jan 31 07:38:30 crc 
kubenswrapper[4826]: I0131 07:38:30.813018 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/961fdd97-d775-4034-91d6-02f865935a59-cert\") pod \"ingress-canary-lrcf5\" (UID: \"961fdd97-d775-4034-91d6-02f865935a59\") " pod="openshift-ingress-canary/ingress-canary-lrcf5" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.817131 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h49wc" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.821139 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c8e45f11-204e-4536-bd7b-76c9bdeee4c3-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-64lqw\" (UID: \"c8e45f11-204e-4536-bd7b-76c9bdeee4c3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-64lqw" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.825253 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-khrtn" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.831811 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4kxd\" (UniqueName: \"kubernetes.io/projected/c702dc52-bad4-47e0-a9cf-5091595186a3-kube-api-access-g4kxd\") pod \"openshift-config-operator-7777fb866f-2w6c4\" (UID: \"c702dc52-bad4-47e0-a9cf-5091595186a3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2w6c4" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.839600 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwlsr\" (UniqueName: \"kubernetes.io/projected/d37f6dbf-56d4-46b2-8808-31999002461b-kube-api-access-fwlsr\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.853211 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8j6z\" (UniqueName: \"kubernetes.io/projected/bb86efdd-32bd-4a38-b343-6231b1d805f9-kube-api-access-x8j6z\") pod \"dns-operator-744455d44c-5ns9w\" (UID: \"bb86efdd-32bd-4a38-b343-6231b1d805f9\") " pod="openshift-dns-operator/dns-operator-744455d44c-5ns9w" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.855630 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-hkw8j" Jan 31 07:38:30 crc kubenswrapper[4826]: W0131 07:38:30.863566 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8f916b8_cc8c_4ef0_8dad_6b34e15f7d05.slice/crio-8215ec8b83d70ed10cd54de1cb9245cd335c8a7a3cc56f0f08ec12b8166acb18 WatchSource:0}: Error finding container 8215ec8b83d70ed10cd54de1cb9245cd335c8a7a3cc56f0f08ec12b8166acb18: Status 404 returned error can't find the container with id 8215ec8b83d70ed10cd54de1cb9245cd335c8a7a3cc56f0f08ec12b8166acb18 Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.870785 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6tmkk" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.877843 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zfk49" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.894740 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.895158 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnv5k\" (UniqueName: \"kubernetes.io/projected/b7521f47-f937-4c7a-a556-e03df8257499-kube-api-access-dnv5k\") pod \"packageserver-d55dfcdfc-rrdbv\" (UID: \"b7521f47-f937-4c7a-a556-e03df8257499\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rrdbv" Jan 31 07:38:30 crc kubenswrapper[4826]: E0131 07:38:30.895941 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:38:31.39592001 +0000 UTC m=+143.249806369 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.923140 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rrdbv" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.936409 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg22m\" (UniqueName: \"kubernetes.io/projected/6599a11f-be2d-418d-9410-4f27a32db1ab-kube-api-access-fg22m\") pod \"machine-config-controller-84d6567774-qsvfr\" (UID: \"6599a11f-be2d-418d-9410-4f27a32db1ab\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qsvfr" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.938431 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pg688\" (UniqueName: \"kubernetes.io/projected/c8e45f11-204e-4536-bd7b-76c9bdeee4c3-kube-api-access-pg688\") pod \"package-server-manager-789f6589d5-64lqw\" (UID: \"c8e45f11-204e-4536-bd7b-76c9bdeee4c3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-64lqw" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.957576 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckbw7\" (UniqueName: \"kubernetes.io/projected/abc32430-2c83-4554-a335-238833ce1a9f-kube-api-access-ckbw7\") pod \"csi-hostpathplugin-wl98f\" (UID: \"abc32430-2c83-4554-a335-238833ce1a9f\") " pod="hostpath-provisioner/csi-hostpathplugin-wl98f" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.961323 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qsvfr" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.977159 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2z44h\" (UniqueName: \"kubernetes.io/projected/7ace0d3e-9818-4322-b306-f74e7d3fd5a1-kube-api-access-2z44h\") pod \"migrator-59844c95c7-xdl4h\" (UID: \"7ace0d3e-9818-4322-b306-f74e7d3fd5a1\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xdl4h" Jan 31 07:38:30 crc kubenswrapper[4826]: I0131 07:38:30.978740 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xdl4h" Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:30.998015 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gt7m\" (UniqueName: \"kubernetes.io/projected/a450b005-ae46-404b-8ceb-4b7d915a960d-kube-api-access-8gt7m\") pod \"service-ca-operator-777779d784-976l6\" (UID: \"a450b005-ae46-404b-8ceb-4b7d915a960d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-976l6" Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.000336 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhsnj\" (UniqueName: \"kubernetes.io/projected/bab94136-ad0f-4379-a76a-5acb66335175-kube-api-access-lhsnj\") pod \"authentication-operator-69f744f599-v9kf9\" (UID: \"bab94136-ad0f-4379-a76a-5acb66335175\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v9kf9" Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.000607 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:31 crc kubenswrapper[4826]: E0131 07:38:31.001086 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:38:31.501071342 +0000 UTC m=+143.354957701 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmkrm" (UID: "d37f6dbf-56d4-46b2-8808-31999002461b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.011625 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhsnj\" (UniqueName: \"kubernetes.io/projected/bab94136-ad0f-4379-a76a-5acb66335175-kube-api-access-lhsnj\") pod \"authentication-operator-69f744f599-v9kf9\" (UID: \"bab94136-ad0f-4379-a76a-5acb66335175\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v9kf9" Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.012108 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-64lqw" Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.014172 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4l42\" (UniqueName: \"kubernetes.io/projected/125e0e2a-6c4a-487f-ab4e-fb439ba80bc0-kube-api-access-k4l42\") pod \"collect-profiles-29497410-bfxcj\" (UID: \"125e0e2a-6c4a-487f-ab4e-fb439ba80bc0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497410-bfxcj" Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.035617 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj4lr\" (UniqueName: \"kubernetes.io/projected/4b9bf8ce-2863-4fec-a779-5f4a3841ae3c-kube-api-access-jj4lr\") pod \"service-ca-9c57cc56f-hmc8k\" (UID: \"4b9bf8ce-2863-4fec-a779-5f4a3841ae3c\") " pod="openshift-service-ca/service-ca-9c57cc56f-hmc8k" Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.041276 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497410-bfxcj" Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.054814 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xn5w\" (UniqueName: \"kubernetes.io/projected/eb5f2299-431e-4001-a9f1-52049e2dce8d-kube-api-access-8xn5w\") pod \"dns-default-p6c7j\" (UID: \"eb5f2299-431e-4001-a9f1-52049e2dce8d\") " pod="openshift-dns/dns-default-p6c7j" Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.080555 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlts2\" (UniqueName: \"kubernetes.io/projected/66616d78-82d1-4623-ab0f-ec0b7d44ed76-kube-api-access-mlts2\") pod \"openshift-controller-manager-operator-756b6f6bc6-45lkw\" (UID: \"66616d78-82d1-4623-ab0f-ec0b7d44ed76\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-45lkw" Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.087167 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-wl98f" Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.100744 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nqqc\" (UniqueName: \"kubernetes.io/projected/5c4b5ff7-12e8-44a2-8836-e8b37d34831b-kube-api-access-9nqqc\") pod \"catalog-operator-68c6474976-4mdx8\" (UID: \"5c4b5ff7-12e8-44a2-8836-e8b37d34831b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4mdx8" Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.102045 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:38:31 crc kubenswrapper[4826]: E0131 07:38:31.102139 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:38:31.602107885 +0000 UTC m=+143.455994244 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.102353 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:31 crc kubenswrapper[4826]: E0131 07:38:31.102813 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:38:31.602804105 +0000 UTC m=+143.456690464 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmkrm" (UID: "d37f6dbf-56d4-46b2-8808-31999002461b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.108538 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2w6c4" Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.121852 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x8ql\" (UniqueName: \"kubernetes.io/projected/d35202e5-599d-4dae-b3db-0ae1a99416c2-kube-api-access-6x8ql\") pod \"control-plane-machine-set-operator-78cbb6b69f-94k2w\" (UID: \"d35202e5-599d-4dae-b3db-0ae1a99416c2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-94k2w" Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.128043 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h49wc"] Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.123202 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-v9kf9" Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.135435 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94kj2\" (UniqueName: \"kubernetes.io/projected/a2d22d2d-b77f-491b-956a-c3b36ae92dd1-kube-api-access-94kj2\") pod \"multus-admission-controller-857f4d67dd-7bpv5\" (UID: \"a2d22d2d-b77f-491b-956a-c3b36ae92dd1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7bpv5" Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.154839 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-5ns9w" Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.163836 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbpns\" (UniqueName: \"kubernetes.io/projected/90b08ee1-10c5-4aec-a2dc-74edba25f290-kube-api-access-zbpns\") pod \"machine-config-server-8dst8\" (UID: \"90b08ee1-10c5-4aec-a2dc-74edba25f290\") " pod="openshift-machine-config-operator/machine-config-server-8dst8" Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.178578 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mz5lc\" (UniqueName: \"kubernetes.io/projected/c4d71f46-e21e-46bb-92ce-10335bb7983a-kube-api-access-mz5lc\") pod \"console-operator-58897d9998-xz6vc\" (UID: \"c4d71f46-e21e-46bb-92ce-10335bb7983a\") " pod="openshift-console-operator/console-operator-58897d9998-xz6vc" Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.193366 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4mdx8" Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.194542 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzwtt\" (UniqueName: \"kubernetes.io/projected/2da6f17d-aeb0-4cc8-8f11-c99bea508129-kube-api-access-lzwtt\") pod \"marketplace-operator-79b997595-qwd92\" (UID: \"2da6f17d-aeb0-4cc8-8f11-c99bea508129\") " pod="openshift-marketplace/marketplace-operator-79b997595-qwd92" Jan 31 07:38:31 crc kubenswrapper[4826]: W0131 07:38:31.198127 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60c9c1ea_7744_4f77_b407_ff25f9c12f0f.slice/crio-24accaa5aeaedeff2962e9b06563fcdfc3455c6426a21ad2e4ae8401775ae203 WatchSource:0}: Error finding container 24accaa5aeaedeff2962e9b06563fcdfc3455c6426a21ad2e4ae8401775ae203: Status 404 returned error can't find the container with id 24accaa5aeaedeff2962e9b06563fcdfc3455c6426a21ad2e4ae8401775ae203 Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.200296 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-45lkw" Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.203769 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:38:31 crc kubenswrapper[4826]: E0131 07:38:31.204851 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:38:31.704833337 +0000 UTC m=+143.558719696 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.213032 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-976l6" Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.215543 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kgsc\" (UniqueName: \"kubernetes.io/projected/961fdd97-d775-4034-91d6-02f865935a59-kube-api-access-5kgsc\") pod \"ingress-canary-lrcf5\" (UID: \"961fdd97-d775-4034-91d6-02f865935a59\") " pod="openshift-ingress-canary/ingress-canary-lrcf5" Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.236550 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qwd92" Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.250121 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-hmc8k" Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.250905 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5x7r\" (UniqueName: \"kubernetes.io/projected/41e5655c-8535-4d1c-be9d-90c18f3fcc8b-kube-api-access-p5x7r\") pod \"olm-operator-6b444d44fb-w4tzn\" (UID: \"41e5655c-8535-4d1c-be9d-90c18f3fcc8b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-w4tzn" Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.260404 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbffd\" (UniqueName: \"kubernetes.io/projected/baa36f8a-24fe-4315-b79a-7397d20938ad-kube-api-access-qbffd\") pod \"kube-storage-version-migrator-operator-b67b599dd-95pcf\" (UID: \"baa36f8a-24fe-4315-b79a-7397d20938ad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-95pcf" Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.274779 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-xz6vc" Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.288312 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-7bpv5" Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.298571 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-95pcf" Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.305409 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:31 crc kubenswrapper[4826]: E0131 07:38:31.305778 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:38:31.805761777 +0000 UTC m=+143.659648136 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmkrm" (UID: "d37f6dbf-56d4-46b2-8808-31999002461b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.324217 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-w4tzn" Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.348423 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-p6c7j" Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.371848 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-8dst8" Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.373815 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-lrcf5" Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.352602 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-94k2w" Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.412345 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:38:31 crc kubenswrapper[4826]: E0131 07:38:31.412934 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:38:31.912912467 +0000 UTC m=+143.766798826 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.471387 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6tmkk"] Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.514743 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:31 crc kubenswrapper[4826]: E0131 07:38:31.515161 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:38:32.015147064 +0000 UTC m=+143.869033423 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmkrm" (UID: "d37f6dbf-56d4-46b2-8808-31999002461b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.536061 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-khrtn"] Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.576023 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-zfk49"] Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.615878 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:38:31 crc kubenswrapper[4826]: E0131 07:38:31.616689 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:38:32.116655381 +0000 UTC m=+143.970541740 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.718931 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:31 crc kubenswrapper[4826]: E0131 07:38:31.720728 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:38:32.219535978 +0000 UTC m=+144.073422337 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmkrm" (UID: "d37f6dbf-56d4-46b2-8808-31999002461b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.731692 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-xdl4h"] Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.746138 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-hkw8j"] Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.770865 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h49wc" event={"ID":"60c9c1ea-7744-4f77-b407-ff25f9c12f0f","Type":"ContainerStarted","Data":"24accaa5aeaedeff2962e9b06563fcdfc3455c6426a21ad2e4ae8401775ae203"} Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.772734 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-fbprj" event={"ID":"73f5be70-5ac7-4585-b5a3-e75c5f766822","Type":"ContainerStarted","Data":"9661bfe69671dafc2038496825c44553b7a0fdc89d7bb737d82cb856d962b0cd"} Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.774257 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nk42r" event={"ID":"e8f916b8-cc8c-4ef0-8dad-6b34e15f7d05","Type":"ContainerStarted","Data":"6f97905b9eb43c49b871aac70514f6a1b6fa2bcf2821c01b156b0714babe6c1c"} Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.774284 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nk42r" event={"ID":"e8f916b8-cc8c-4ef0-8dad-6b34e15f7d05","Type":"ContainerStarted","Data":"8215ec8b83d70ed10cd54de1cb9245cd335c8a7a3cc56f0f08ec12b8166acb18"} Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.775803 4826 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w9vbd" event={"ID":"84e4d149-e92b-4d41-8fdd-0c831d554a94","Type":"ContainerStarted","Data":"23d0006a09b74dab349403967c6fa6b9b39a9ab7fc5c6d5e6dc62094385d6674"} Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.775854 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w9vbd" event={"ID":"84e4d149-e92b-4d41-8fdd-0c831d554a94","Type":"ContainerStarted","Data":"5348ab7361e47cb9ca95d2a4266b3c9c594401ead61a198ba343c0b830c4bf6c"} Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.779804 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-8dst8" event={"ID":"90b08ee1-10c5-4aec-a2dc-74edba25f290","Type":"ContainerStarted","Data":"3535caafef1f171e3bb90910547845df726e766ea231d98bf029522db9150b25"} Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.781172 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-bl2g9" event={"ID":"4e48db4e-7a7e-4956-8a51-08cac7cfbf7c","Type":"ContainerStarted","Data":"3aeee3882fb2c8437f14afcb011f4ec9e2fbf2d13a9801dc1224d9def29914b3"} Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.784693 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g94hs" event={"ID":"f2b64f74-7aa6-459c-be6d-f6f8966b456f","Type":"ContainerStarted","Data":"5c35bc360ad8d1ef91a0f45c2d982fa615d709caecd8800ae6b008ca9d33cbd4"} Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.788047 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6tmkk" event={"ID":"b15e4501-1c69-4a71-9ca0-440051530c26","Type":"ContainerStarted","Data":"ec10bb4861b3a5911c04dd1d1fd2677a18da64782155a23239af32f374362b29"} Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.789052 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-7bmfc" event={"ID":"255c8988-162e-4ae1-982f-e45cde006077","Type":"ContainerStarted","Data":"184b1108afb1f43393fb705c4b286f5f1235cb3b652785cbdc39266eb9b58172"} Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.794512 4826 generic.go:334] "Generic (PLEG): container finished" podID="69a0b5c3-55a0-4fe5-866d-660e663e5112" containerID="53ab724ac867c72091915573b493057d0cb752075adefd1fcc84fb0b47632bdb" exitCode=0 Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.794574 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9qt4f" event={"ID":"69a0b5c3-55a0-4fe5-866d-660e663e5112","Type":"ContainerDied","Data":"53ab724ac867c72091915573b493057d0cb752075adefd1fcc84fb0b47632bdb"} Jan 31 07:38:31 crc kubenswrapper[4826]: W0131 07:38:31.815279 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26cae060_918d_4158_9f68_8d3a47dbd237.slice/crio-a39f328274e5ac05bfcf278f28943a73e752e9262c0d4960486d9c7795abdd20 WatchSource:0}: Error finding container a39f328274e5ac05bfcf278f28943a73e752e9262c0d4960486d9c7795abdd20: Status 404 returned error can't find the container with id a39f328274e5ac05bfcf278f28943a73e752e9262c0d4960486d9c7795abdd20 Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.822068 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8vz7r" event={"ID":"0ec09e55-a5b7-4aed-af1b-d36fc00592d3","Type":"ContainerStarted","Data":"c4c80888641d6c8df8a619892f853e8bea964290d2b9166f7357db1271643152"} Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.822079 4826 patch_prober.go:28] interesting pod/downloads-7954f5f757-bp2zc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.822154 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bp2zc" podUID="e71acdff-33d6-4052-907b-8e38ac391f58" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.823212 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.823336 4826 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-9x5mr container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.823390 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-9x5mr" podUID="2ba35eb2-6b7b-45bc-827a-b7c3b5266073" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 31 07:38:31 crc kubenswrapper[4826]: E0131 07:38:31.824126 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:38:32.324102043 +0000 UTC m=+144.177988402 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.943052 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497410-bfxcj"] Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.943843 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:31 crc kubenswrapper[4826]: E0131 07:38:31.952658 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:38:32.452642051 +0000 UTC m=+144.306528410 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmkrm" (UID: "d37f6dbf-56d4-46b2-8808-31999002461b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:31 crc kubenswrapper[4826]: I0131 07:38:31.990988 4826 csr.go:261] certificate signing request csr-b4vzp is approved, waiting to be issued Jan 31 07:38:32 crc kubenswrapper[4826]: I0131 07:38:32.006163 4826 csr.go:257] certificate signing request csr-b4vzp is issued Jan 31 07:38:32 crc kubenswrapper[4826]: I0131 07:38:32.009757 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" Jan 31 07:38:32 crc kubenswrapper[4826]: I0131 07:38:32.045146 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:38:32 crc kubenswrapper[4826]: E0131 07:38:32.045499 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:38:32.545477727 +0000 UTC m=+144.399364086 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:32 crc kubenswrapper[4826]: I0131 07:38:32.057888 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-v9kf9"] Jan 31 07:38:32 crc kubenswrapper[4826]: I0131 07:38:32.073717 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-9x5mr" podStartSLOduration=123.073693643 podStartE2EDuration="2m3.073693643s" podCreationTimestamp="2026-01-31 07:36:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:38:32.067731471 +0000 UTC m=+143.921617830" watchObservedRunningTime="2026-01-31 07:38:32.073693643 +0000 UTC m=+143.927579992" Jan 31 07:38:32 crc kubenswrapper[4826]: I0131 07:38:32.089864 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-qsvfr"] Jan 31 07:38:32 crc kubenswrapper[4826]: I0131 07:38:32.112274 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rrdbv"] Jan 31 07:38:32 crc kubenswrapper[4826]: I0131 07:38:32.150634 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:32 crc kubenswrapper[4826]: E0131 07:38:32.151314 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:38:32.651300198 +0000 UTC m=+144.505186557 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmkrm" (UID: "d37f6dbf-56d4-46b2-8808-31999002461b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:32 crc kubenswrapper[4826]: I0131 07:38:32.213377 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g94hs" podStartSLOduration=123.213350313 podStartE2EDuration="2m3.213350313s" podCreationTimestamp="2026-01-31 07:36:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:38:32.2121973 +0000 UTC m=+144.066083659" watchObservedRunningTime="2026-01-31 07:38:32.213350313 +0000 UTC m=+144.067236672" Jan 31 07:38:32 crc kubenswrapper[4826]: I0131 07:38:32.252145 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:38:32 crc kubenswrapper[4826]: E0131 07:38:32.252547 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:38:32.752526507 +0000 UTC m=+144.606412866 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:32 crc kubenswrapper[4826]: I0131 07:38:32.355664 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:32 crc kubenswrapper[4826]: E0131 07:38:32.356136 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:38:32.856122384 +0000 UTC m=+144.710008743 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmkrm" (UID: "d37f6dbf-56d4-46b2-8808-31999002461b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:32 crc kubenswrapper[4826]: I0131 07:38:32.368226 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvkpm" podStartSLOduration=122.368205843 podStartE2EDuration="2m2.368205843s" podCreationTimestamp="2026-01-31 07:36:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:38:32.367296477 +0000 UTC m=+144.221182836" watchObservedRunningTime="2026-01-31 07:38:32.368205843 +0000 UTC m=+144.222092202" Jan 31 07:38:32 crc kubenswrapper[4826]: W0131 07:38:32.447130 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbab94136_ad0f_4379_a76a_5acb66335175.slice/crio-a637ac0eb1447ebd752c94497db60cecfecc9604782117e7405c0584d8b1013a WatchSource:0}: Error finding container a637ac0eb1447ebd752c94497db60cecfecc9604782117e7405c0584d8b1013a: Status 404 returned error can't find the container with id a637ac0eb1447ebd752c94497db60cecfecc9604782117e7405c0584d8b1013a Jan 31 07:38:32 crc kubenswrapper[4826]: I0131 07:38:32.456433 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:38:32 crc kubenswrapper[4826]: E0131 07:38:32.456820 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:38:32.956792536 +0000 UTC m=+144.810678895 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:32 crc kubenswrapper[4826]: I0131 07:38:32.538981 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-7bmfc" Jan 31 07:38:32 crc kubenswrapper[4826]: I0131 07:38:32.546497 4826 patch_prober.go:28] interesting pod/router-default-5444994796-7bmfc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 07:38:32 crc kubenswrapper[4826]: [-]has-synced failed: reason withheld Jan 31 07:38:32 crc kubenswrapper[4826]: [+]process-running ok Jan 31 07:38:32 crc kubenswrapper[4826]: healthz check failed Jan 31 07:38:32 crc kubenswrapper[4826]: I0131 07:38:32.548048 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7bmfc" podUID="255c8988-162e-4ae1-982f-e45cde006077" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 07:38:32 crc kubenswrapper[4826]: I0131 07:38:32.557201 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:32 crc kubenswrapper[4826]: E0131 07:38:32.557548 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:38:33.05753479 +0000 UTC m=+144.911421149 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmkrm" (UID: "d37f6dbf-56d4-46b2-8808-31999002461b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:32 crc kubenswrapper[4826]: I0131 07:38:32.569488 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvkpm" Jan 31 07:38:32 crc kubenswrapper[4826]: I0131 07:38:32.665476 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:38:32 crc kubenswrapper[4826]: E0131 07:38:32.666363 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:38:33.166330447 +0000 UTC m=+145.020216806 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:32 crc kubenswrapper[4826]: I0131 07:38:32.684047 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" podStartSLOduration=123.684029649 podStartE2EDuration="2m3.684029649s" podCreationTimestamp="2026-01-31 07:36:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:38:32.683304458 +0000 UTC m=+144.537190817" watchObservedRunningTime="2026-01-31 07:38:32.684029649 +0000 UTC m=+144.537916008" Jan 31 07:38:32 crc kubenswrapper[4826]: I0131 07:38:32.769190 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:32 crc kubenswrapper[4826]: E0131 07:38:32.769575 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:38:33.269562263 +0000 UTC m=+145.123448622 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmkrm" (UID: "d37f6dbf-56d4-46b2-8808-31999002461b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:32 crc kubenswrapper[4826]: I0131 07:38:32.872847 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:38:32 crc kubenswrapper[4826]: E0131 07:38:32.873462 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:38:33.373443839 +0000 UTC m=+145.227330188 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:32 crc kubenswrapper[4826]: I0131 07:38:32.895578 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-v9kf9" event={"ID":"bab94136-ad0f-4379-a76a-5acb66335175","Type":"ContainerStarted","Data":"a637ac0eb1447ebd752c94497db60cecfecc9604782117e7405c0584d8b1013a"} Jan 31 07:38:32 crc kubenswrapper[4826]: I0131 07:38:32.919128 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rrdbv" event={"ID":"b7521f47-f937-4c7a-a556-e03df8257499","Type":"ContainerStarted","Data":"badb6980b9df765f80a2ca93bfb9c6ce398591be4a29a15f77c5fdb68554aaa6"} Jan 31 07:38:32 crc kubenswrapper[4826]: I0131 07:38:32.980745 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:32 crc kubenswrapper[4826]: I0131 07:38:32.985732 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-khrtn" event={"ID":"54cc2edc-8881-4459-be91-a4d9536d6b7d","Type":"ContainerStarted","Data":"599b97bbbfb4d2a00760286314e8858bf5071e8e3b4f118c8092bdfcfc37fe2d"} Jan 31 07:38:32 crc kubenswrapper[4826]: I0131 07:38:32.985790 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-khrtn" 
event={"ID":"54cc2edc-8881-4459-be91-a4d9536d6b7d","Type":"ContainerStarted","Data":"90a349bd7ad199467e7b10a7840960b13e925642f6a88e237fbacc0122f68733"} Jan 31 07:38:33 crc kubenswrapper[4826]: E0131 07:38:33.007290 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:38:33.507226039 +0000 UTC m=+145.361112398 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmkrm" (UID: "d37f6dbf-56d4-46b2-8808-31999002461b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.010925 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4mdx8"] Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.016051 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-31 07:33:31 +0000 UTC, rotation deadline is 2026-12-11 03:17:40.828567708 +0000 UTC Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.016079 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7531h39m7.812490453s for next certificate rotation Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.022852 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xfxnd" podStartSLOduration=124.02282608 podStartE2EDuration="2m4.02282608s" podCreationTimestamp="2026-01-31 07:36:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:38:32.979811076 +0000 UTC m=+144.833697435" watchObservedRunningTime="2026-01-31 07:38:33.02282608 +0000 UTC m=+144.876712439" Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.042038 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-5ns9w"] Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.051047 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-fbprj" event={"ID":"73f5be70-5ac7-4585-b5a3-e75c5f766822","Type":"ContainerStarted","Data":"4161f910de965db9a603f6976405f840a02b96e11ada8d943dede0d3ff183a00"} Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.075569 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-ngf5x" podStartSLOduration=123.075541745 podStartE2EDuration="2m3.075541745s" podCreationTimestamp="2026-01-31 07:36:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:38:33.059571173 +0000 UTC m=+144.913457522" watchObservedRunningTime="2026-01-31 07:38:33.075541745 +0000 UTC m=+144.929428104" Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.076936 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-wl98f"] Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.085473 4826 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.086960 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xdl4h" event={"ID":"7ace0d3e-9818-4322-b306-f74e7d3fd5a1","Type":"ContainerStarted","Data":"ca0b74e1caf73336fc56fbe8937985167bf28ae70a389ae6292f09c7b25054c2"} Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.087018 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xdl4h" event={"ID":"7ace0d3e-9818-4322-b306-f74e7d3fd5a1","Type":"ContainerStarted","Data":"8a7f3cffc4268787c0de438ff1c8293be4b9ba5f4abd12932613acb4b6151832"} Jan 31 07:38:33 crc kubenswrapper[4826]: E0131 07:38:33.087091 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:38:33.587048648 +0000 UTC m=+145.440935007 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.100088 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-2w6c4"] Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.100418 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-8dst8" event={"ID":"90b08ee1-10c5-4aec-a2dc-74edba25f290","Type":"ContainerStarted","Data":"dc081ca0232545bac62bb80ee5975d91064544f5b38f49775418d0713e4e4930"} Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.097523 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-bp2zc" podStartSLOduration=124.097505431 podStartE2EDuration="2m4.097505431s" podCreationTimestamp="2026-01-31 07:36:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:38:33.086230184 +0000 UTC m=+144.940116543" watchObservedRunningTime="2026-01-31 07:38:33.097505431 +0000 UTC m=+144.951391790" Jan 31 07:38:33 crc kubenswrapper[4826]: W0131 07:38:33.102736 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podabc32430_2c83_4554_a335_238833ce1a9f.slice/crio-7ea9ee09fc07ed4350208fde7ae3e3486ccdf922f999248b666050e48c0a38b2 WatchSource:0}: Error finding container 7ea9ee09fc07ed4350208fde7ae3e3486ccdf922f999248b666050e48c0a38b2: Status 404 returned error can't find the container with id 7ea9ee09fc07ed4350208fde7ae3e3486ccdf922f999248b666050e48c0a38b2 Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.107144 4826 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6tmkk" event={"ID":"b15e4501-1c69-4a71-9ca0-440051530c26","Type":"ContainerStarted","Data":"024bb11946f62afe2762e514d2cf689eca93a3b151a408c476760787d5cfa5a0"} Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.125449 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-64lqw"] Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.187533 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zfk49" event={"ID":"26cae060-918d-4158-9f68-8d3a47dbd237","Type":"ContainerStarted","Data":"94bff53360f966c14be5d20413f28196a6f75b2b6bffaa45183b2465f5f76139"} Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.187636 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zfk49" event={"ID":"26cae060-918d-4158-9f68-8d3a47dbd237","Type":"ContainerStarted","Data":"a39f328274e5ac05bfcf278f28943a73e752e9262c0d4960486d9c7795abdd20"} Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.188790 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:33 crc kubenswrapper[4826]: E0131 07:38:33.189175 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:38:33.689159332 +0000 UTC m=+145.543045691 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmkrm" (UID: "d37f6dbf-56d4-46b2-8808-31999002461b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.210416 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-bl2g9" podStartSLOduration=124.210393986 podStartE2EDuration="2m4.210393986s" podCreationTimestamp="2026-01-31 07:36:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:38:33.208232924 +0000 UTC m=+145.062119283" watchObservedRunningTime="2026-01-31 07:38:33.210393986 +0000 UTC m=+145.064280345" Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.226390 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9qt4f" event={"ID":"69a0b5c3-55a0-4fe5-866d-660e663e5112","Type":"ContainerStarted","Data":"1e79e80f6b149433565e802316082f3e2bbfdbb1f9280da6498a15f5914bd545"} Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.267271 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497410-bfxcj" event={"ID":"125e0e2a-6c4a-487f-ab4e-fb439ba80bc0","Type":"ContainerStarted","Data":"ecdaf39887d89c5f9f561e51afb030db89047e044699c33d32aac107a3b4cd64"} Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.270640 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8vz7r" podStartSLOduration=123.270616069 podStartE2EDuration="2m3.270616069s" podCreationTimestamp="2026-01-31 07:36:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:38:33.244662838 +0000 UTC m=+145.098549197" watchObservedRunningTime="2026-01-31 07:38:33.270616069 +0000 UTC m=+145.124502428" Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.286715 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-7bmfc" podStartSLOduration=123.286692654 podStartE2EDuration="2m3.286692654s" podCreationTimestamp="2026-01-31 07:36:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:38:33.268531918 +0000 UTC m=+145.122418277" watchObservedRunningTime="2026-01-31 07:38:33.286692654 +0000 UTC m=+145.140579013" Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.295853 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:38:33 crc kubenswrapper[4826]: E0131 07:38:33.296698 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:38:33.796682123 +0000 UTC m=+145.650568482 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.313603 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-hkw8j" event={"ID":"1482e43a-84a4-42ed-a605-37cc519dd5ef","Type":"ContainerStarted","Data":"19d271245fb7791eb591bd8dde736cb425a8da78707a875c94ae05c1f78766aa"} Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.329300 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qsvfr" event={"ID":"6599a11f-be2d-418d-9410-4f27a32db1ab","Type":"ContainerStarted","Data":"0ac0f2ccc76dd60bf49b3287a0571f513762890fc0ff653111578e6506f13e67"} Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.334797 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-fbprj" podStartSLOduration=124.334776535 podStartE2EDuration="2m4.334776535s" podCreationTimestamp="2026-01-31 07:36:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:38:33.330427389 +0000 UTC m=+145.184313748" watchObservedRunningTime="2026-01-31 07:38:33.334776535 +0000 UTC m=+145.188662894" Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.335182 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-lrcf5"] Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.348298 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h49wc" event={"ID":"60c9c1ea-7744-4f77-b407-ff25f9c12f0f","Type":"ContainerStarted","Data":"5480e8a5af0ca79530197a0990f17f383f056dc1ca1408642f0d4ca4dd2a7247"} Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.364323 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-8dst8" podStartSLOduration=5.364305599 podStartE2EDuration="5.364305599s" podCreationTimestamp="2026-01-31 07:38:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:38:33.361821657 +0000 UTC m=+145.215708016" watchObservedRunningTime="2026-01-31 07:38:33.364305599 +0000 UTC m=+145.218191958" Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.379292 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-95pcf"] Jan 31 07:38:33 crc kubenswrapper[4826]: W0131 07:38:33.388353 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod961fdd97_d775_4034_91d6_02f865935a59.slice/crio-b1a64b26e89c76e9c9f39fefeeb35e61f7ae9d828ae1a973745db256e29f8b0d 
WatchSource:0}: Error finding container b1a64b26e89c76e9c9f39fefeeb35e61f7ae9d828ae1a973745db256e29f8b0d: Status 404 returned error can't find the container with id b1a64b26e89c76e9c9f39fefeeb35e61f7ae9d828ae1a973745db256e29f8b0d Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.390716 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-45lkw"] Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.401818 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:33 crc kubenswrapper[4826]: E0131 07:38:33.402209 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:38:33.902195935 +0000 UTC m=+145.756082294 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmkrm" (UID: "d37f6dbf-56d4-46b2-8808-31999002461b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.425729 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6tmkk" podStartSLOduration=123.425707055 podStartE2EDuration="2m3.425707055s" podCreationTimestamp="2026-01-31 07:36:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:38:33.41929659 +0000 UTC m=+145.273182949" watchObservedRunningTime="2026-01-31 07:38:33.425707055 +0000 UTC m=+145.279593414" Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.430162 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nk42r" event={"ID":"e8f916b8-cc8c-4ef0-8dad-6b34e15f7d05","Type":"ContainerStarted","Data":"6a5360e61cbb01e1468fd5f47aeac08d2e7ab3ae9ab4a9acda89f062f2925586"} Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.459065 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qwd92"] Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.461799 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-w4tzn"] Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.486823 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-p6c7j"] Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.488849 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w9vbd" podStartSLOduration=124.488837182 podStartE2EDuration="2m4.488837182s" podCreationTimestamp="2026-01-31 07:36:29 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:38:33.485789033 +0000 UTC m=+145.339675382" watchObservedRunningTime="2026-01-31 07:38:33.488837182 +0000 UTC m=+145.342723541" Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.505800 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:38:33 crc kubenswrapper[4826]: E0131 07:38:33.507145 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:38:34.007124111 +0000 UTC m=+145.861010470 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.507404 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.529321 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-xz6vc"] Jan 31 07:38:33 crc kubenswrapper[4826]: E0131 07:38:33.538882 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:38:34.038861509 +0000 UTC m=+145.892747868 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmkrm" (UID: "d37f6dbf-56d4-46b2-8808-31999002461b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.567468 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-hmc8k"] Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.588239 4826 patch_prober.go:28] interesting pod/router-default-5444994796-7bmfc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 07:38:33 crc kubenswrapper[4826]: [-]has-synced failed: reason withheld Jan 31 07:38:33 crc kubenswrapper[4826]: [+]process-running ok Jan 31 07:38:33 crc kubenswrapper[4826]: healthz check failed Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.588318 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7bmfc" podUID="255c8988-162e-4ae1-982f-e45cde006077" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.621666 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-94k2w"] Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.621709 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-976l6"] Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.622152 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:38:33 crc kubenswrapper[4826]: E0131 07:38:33.623063 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:38:34.123028004 +0000 UTC m=+145.976914363 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.723831 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:33 crc kubenswrapper[4826]: E0131 07:38:33.724364 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:38:34.224350565 +0000 UTC m=+146.078236924 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmkrm" (UID: "d37f6dbf-56d4-46b2-8808-31999002461b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.741030 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-7bpv5"] Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.741696 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-khrtn" podStartSLOduration=123.741676446 podStartE2EDuration="2m3.741676446s" podCreationTimestamp="2026-01-31 07:36:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:38:33.729422732 +0000 UTC m=+145.583309101" watchObservedRunningTime="2026-01-31 07:38:33.741676446 +0000 UTC m=+145.595562805" Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.815575 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9qt4f" podStartSLOduration=123.815553613 podStartE2EDuration="2m3.815553613s" podCreationTimestamp="2026-01-31 07:36:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:38:33.780553091 +0000 UTC m=+145.634439450" watchObservedRunningTime="2026-01-31 07:38:33.815553613 +0000 UTC m=+145.669439972" Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.816863 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nk42r" podStartSLOduration=123.816855211 podStartE2EDuration="2m3.816855211s" podCreationTimestamp="2026-01-31 07:36:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-31 07:38:33.816410398 +0000 UTC m=+145.670296757" watchObservedRunningTime="2026-01-31 07:38:33.816855211 +0000 UTC m=+145.670741570" Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.827675 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:38:33 crc kubenswrapper[4826]: E0131 07:38:33.828065 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:38:34.328049735 +0000 UTC m=+146.181936094 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.861185 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-hkw8j" podStartSLOduration=124.86116281299999 podStartE2EDuration="2m4.861162813s" podCreationTimestamp="2026-01-31 07:36:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:38:33.857508207 +0000 UTC m=+145.711394566" watchObservedRunningTime="2026-01-31 07:38:33.861162813 +0000 UTC m=+145.715049172" Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.888170 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h49wc" podStartSLOduration=123.888145473 podStartE2EDuration="2m3.888145473s" podCreationTimestamp="2026-01-31 07:36:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:38:33.886236428 +0000 UTC m=+145.740122777" watchObservedRunningTime="2026-01-31 07:38:33.888145473 +0000 UTC m=+145.742031832" Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.912918 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29497410-bfxcj" podStartSLOduration=123.912897519 podStartE2EDuration="2m3.912897519s" podCreationTimestamp="2026-01-31 07:36:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:38:33.909838581 +0000 UTC m=+145.763724940" watchObservedRunningTime="2026-01-31 07:38:33.912897519 +0000 UTC m=+145.766783878" Jan 31 07:38:33 crc kubenswrapper[4826]: I0131 07:38:33.931638 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmkrm\" 
(UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:33 crc kubenswrapper[4826]: E0131 07:38:33.932078 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:38:34.432065364 +0000 UTC m=+146.285951723 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmkrm" (UID: "d37f6dbf-56d4-46b2-8808-31999002461b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.033587 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:38:34 crc kubenswrapper[4826]: E0131 07:38:34.036505 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:38:34.536470374 +0000 UTC m=+146.390356733 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.138054 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:34 crc kubenswrapper[4826]: E0131 07:38:34.138437 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:38:34.638419614 +0000 UTC m=+146.492305973 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmkrm" (UID: "d37f6dbf-56d4-46b2-8808-31999002461b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.239339 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:38:34 crc kubenswrapper[4826]: E0131 07:38:34.239729 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:38:34.739687023 +0000 UTC m=+146.593573392 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.239931 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:34 crc kubenswrapper[4826]: E0131 07:38:34.240369 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:38:34.740353813 +0000 UTC m=+146.594240172 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmkrm" (UID: "d37f6dbf-56d4-46b2-8808-31999002461b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.345559 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:38:34 crc kubenswrapper[4826]: E0131 07:38:34.346091 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:38:34.846067341 +0000 UTC m=+146.699953700 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.403490 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-fbprj" Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.405408 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-fbprj" Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.446757 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:34 crc kubenswrapper[4826]: E0131 07:38:34.447080 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:38:34.947068373 +0000 UTC m=+146.800954722 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmkrm" (UID: "d37f6dbf-56d4-46b2-8808-31999002461b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.452427 4826 generic.go:334] "Generic (PLEG): container finished" podID="c702dc52-bad4-47e0-a9cf-5091595186a3" containerID="c1321c2a04125ba177f878e60b7eb3b884cec05a63eb7a95f99bd93aef5f8e3f" exitCode=0 Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.452511 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2w6c4" event={"ID":"c702dc52-bad4-47e0-a9cf-5091595186a3","Type":"ContainerDied","Data":"c1321c2a04125ba177f878e60b7eb3b884cec05a63eb7a95f99bd93aef5f8e3f"} Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.452541 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2w6c4" event={"ID":"c702dc52-bad4-47e0-a9cf-5091595186a3","Type":"ContainerStarted","Data":"c8151feddfc3af3256b2034b8c193dc48d3e1051cd60b7ad7befca0c458afdd4"} Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.464813 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-lrcf5" event={"ID":"961fdd97-d775-4034-91d6-02f865935a59","Type":"ContainerStarted","Data":"590ae25514e009361021c4d8826a62d6e287d7726728accd24de625aece9a06c"} Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.464863 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-lrcf5" event={"ID":"961fdd97-d775-4034-91d6-02f865935a59","Type":"ContainerStarted","Data":"b1a64b26e89c76e9c9f39fefeeb35e61f7ae9d828ae1a973745db256e29f8b0d"} Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.489539 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-64lqw" event={"ID":"c8e45f11-204e-4536-bd7b-76c9bdeee4c3","Type":"ContainerStarted","Data":"a365c6b15e52508972a89ec0e7b2c6d49e566761558a52a885eb2b80d0d93ce6"} Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.489632 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-64lqw" event={"ID":"c8e45f11-204e-4536-bd7b-76c9bdeee4c3","Type":"ContainerStarted","Data":"64fc6a4b1409a1461d2a51eae45fd20dde055f87d91dec2992b9970c26eeb558"} Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.503113 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-hkw8j" event={"ID":"1482e43a-84a4-42ed-a605-37cc519dd5ef","Type":"ContainerStarted","Data":"82588ffd5423005c74768fcd38c4e8356f7e845ad18d4cdcb22f693bddaad45a"} Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.507140 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-5ns9w" event={"ID":"bb86efdd-32bd-4a38-b343-6231b1d805f9","Type":"ContainerStarted","Data":"1f662839a2a16e7cec8d52e0d7c0193d417897f64fc474c89cadb8a3d8f97628"} Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.507187 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns-operator/dns-operator-744455d44c-5ns9w" event={"ID":"bb86efdd-32bd-4a38-b343-6231b1d805f9","Type":"ContainerStarted","Data":"241a4b909fd10957dbde77b27320359c49823ebde81e0258533c1d3049576279"} Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.521956 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-45lkw" event={"ID":"66616d78-82d1-4623-ab0f-ec0b7d44ed76","Type":"ContainerStarted","Data":"12e4bbeb3f5bf44e7ffea2189c2af3a2002a589f39ec3be23309868076a0e4ee"} Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.522034 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-45lkw" event={"ID":"66616d78-82d1-4623-ab0f-ec0b7d44ed76","Type":"ContainerStarted","Data":"f952eba01f0ea4fbec77d25066cdfccd9f1e89dd6701b6ddedb8871ac6c2505d"} Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.527569 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-lrcf5" podStartSLOduration=6.527548241 podStartE2EDuration="6.527548241s" podCreationTimestamp="2026-01-31 07:38:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:38:34.525366678 +0000 UTC m=+146.379253037" watchObservedRunningTime="2026-01-31 07:38:34.527548241 +0000 UTC m=+146.381434600" Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.561790 4826 patch_prober.go:28] interesting pod/router-default-5444994796-7bmfc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 07:38:34 crc kubenswrapper[4826]: [-]has-synced failed: reason withheld Jan 31 07:38:34 crc kubenswrapper[4826]: [+]process-running ok Jan 31 07:38:34 crc kubenswrapper[4826]: healthz check failed Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.561883 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7bmfc" podUID="255c8988-162e-4ae1-982f-e45cde006077" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.565050 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:38:34 crc kubenswrapper[4826]: E0131 07:38:34.566695 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:38:35.066671523 +0000 UTC m=+146.920557882 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.581713 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zfk49" event={"ID":"26cae060-918d-4158-9f68-8d3a47dbd237","Type":"ContainerStarted","Data":"b0f09ecbc5fe7dc801606a5dabe1c87fdf551de358ede92b7f0bd5d074bb336f"} Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.596490 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-45lkw" podStartSLOduration=125.596461325 podStartE2EDuration="2m5.596461325s" podCreationTimestamp="2026-01-31 07:36:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:38:34.590341298 +0000 UTC m=+146.444227657" watchObservedRunningTime="2026-01-31 07:38:34.596461325 +0000 UTC m=+146.450347694" Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.612509 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-95pcf" event={"ID":"baa36f8a-24fe-4315-b79a-7397d20938ad","Type":"ContainerStarted","Data":"3a50bf036d2629fbacc386e39520c162a807e33e266e132709fd320634e9309e"} Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.612731 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-95pcf" event={"ID":"baa36f8a-24fe-4315-b79a-7397d20938ad","Type":"ContainerStarted","Data":"5b2e56fecb1cbe0f80ca6542d996df090207fa30b147b87681834d5733aa0b96"} Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.624346 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zfk49" podStartSLOduration=124.624328891 podStartE2EDuration="2m4.624328891s" podCreationTimestamp="2026-01-31 07:36:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:38:34.622704104 +0000 UTC m=+146.476590463" watchObservedRunningTime="2026-01-31 07:38:34.624328891 +0000 UTC m=+146.478215250" Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.644454 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497410-bfxcj" event={"ID":"125e0e2a-6c4a-487f-ab4e-fb439ba80bc0","Type":"ContainerStarted","Data":"e32d367b2e18eb7158f4602f99ef56a2b3ae65848df185a77fc1a15524cb09d8"} Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.661707 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-95pcf" podStartSLOduration=124.661687162 podStartE2EDuration="2m4.661687162s" podCreationTimestamp="2026-01-31 07:36:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:38:34.660215329 +0000 UTC m=+146.514101688" watchObservedRunningTime="2026-01-31 07:38:34.661687162 +0000 UTC m=+146.515573521" Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.674669 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qwd92" event={"ID":"2da6f17d-aeb0-4cc8-8f11-c99bea508129","Type":"ContainerStarted","Data":"d68cc1c42639c30c4279d193abbadb55121e217532812268bd970883a9451e61"} Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.674724 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qwd92" event={"ID":"2da6f17d-aeb0-4cc8-8f11-c99bea508129","Type":"ContainerStarted","Data":"3544b17084b415020b550be7c0586501979e549f5bee8dafb6e7422c8187c769"} Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.675798 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-qwd92" Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.676986 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:34 crc kubenswrapper[4826]: E0131 07:38:34.679226 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:38:35.179209479 +0000 UTC m=+147.033095838 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmkrm" (UID: "d37f6dbf-56d4-46b2-8808-31999002461b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.682120 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9qt4f" Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.682390 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9qt4f" Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.687171 4826 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-qwd92 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.687246 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-qwd92" podUID="2da6f17d-aeb0-4cc8-8f11-c99bea508129" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.720859 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-v9kf9" event={"ID":"bab94136-ad0f-4379-a76a-5acb66335175","Type":"ContainerStarted","Data":"13175354c8748014f6f0d0f05ff3c6811d85dee9955676a385b83fd6e31e3e75"} Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.748560 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-qwd92" podStartSLOduration=124.748537164 podStartE2EDuration="2m4.748537164s" podCreationTimestamp="2026-01-31 07:36:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:38:34.72039644 +0000 UTC m=+146.574282809" watchObservedRunningTime="2026-01-31 07:38:34.748537164 +0000 UTC m=+146.602423523" Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.752656 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-w4tzn" event={"ID":"41e5655c-8535-4d1c-be9d-90c18f3fcc8b","Type":"ContainerStarted","Data":"a5d2aad324bdaaa616913f26d8c08369f76b347164a4d707cfec19a5eb43bb80"} Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.752713 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-w4tzn" event={"ID":"41e5655c-8535-4d1c-be9d-90c18f3fcc8b","Type":"ContainerStarted","Data":"8d972b8820a2aaf77e020ce31a69fa4c11d3c2cc8d289c2a4af1f244143cc2dd"} Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.753935 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-w4tzn" Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.777381 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rrdbv" event={"ID":"b7521f47-f937-4c7a-a556-e03df8257499","Type":"ContainerStarted","Data":"bf447cb5956bcbf67aa371a48344346748ff84325ce60c5e529792260edac0b1"} Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.778088 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rrdbv" Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.778324 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:38:34 crc kubenswrapper[4826]: E0131 07:38:34.779241 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:38:35.279226212 +0000 UTC m=+147.133112571 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.781454 4826 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-w4tzn container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.781508 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-w4tzn" podUID="41e5655c-8535-4d1c-be9d-90c18f3fcc8b" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.783093 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-hmc8k" event={"ID":"4b9bf8ce-2863-4fec-a779-5f4a3841ae3c","Type":"ContainerStarted","Data":"e7c4b9d045093c1a4d58ee91fcdd8018720437aa6a3d024be81d5ffe2a4ef4d4"} Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.783129 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-hmc8k" event={"ID":"4b9bf8ce-2863-4fec-a779-5f4a3841ae3c","Type":"ContainerStarted","Data":"07df295c51fa73be100dab65080fa611b61b2a8393547777e7d832081df5ef6c"} Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.800084 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-w4tzn" podStartSLOduration=124.800060355 podStartE2EDuration="2m4.800060355s" podCreationTimestamp="2026-01-31 07:36:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 
07:38:34.798838169 +0000 UTC m=+146.652724528" watchObservedRunningTime="2026-01-31 07:38:34.800060355 +0000 UTC m=+146.653946714" Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.801359 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-v9kf9" podStartSLOduration=125.801352352 podStartE2EDuration="2m5.801352352s" podCreationTimestamp="2026-01-31 07:36:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:38:34.74872753 +0000 UTC m=+146.602613889" watchObservedRunningTime="2026-01-31 07:38:34.801352352 +0000 UTC m=+146.655238711" Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.881868 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:34 crc kubenswrapper[4826]: E0131 07:38:34.883278 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:38:35.383259142 +0000 UTC m=+147.237145731 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmkrm" (UID: "d37f6dbf-56d4-46b2-8808-31999002461b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.887867 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rrdbv" podStartSLOduration=124.887848945 podStartE2EDuration="2m4.887848945s" podCreationTimestamp="2026-01-31 07:36:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:38:34.859223446 +0000 UTC m=+146.713109805" watchObservedRunningTime="2026-01-31 07:38:34.887848945 +0000 UTC m=+146.741735304" Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.888222 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-hmc8k" podStartSLOduration=124.888218285 podStartE2EDuration="2m4.888218285s" podCreationTimestamp="2026-01-31 07:36:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:38:34.887201566 +0000 UTC m=+146.741087925" watchObservedRunningTime="2026-01-31 07:38:34.888218285 +0000 UTC m=+146.742104644" Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.900819 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4mdx8" Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.900856 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4mdx8" event={"ID":"5c4b5ff7-12e8-44a2-8836-e8b37d34831b","Type":"ContainerStarted","Data":"0800066261b22d3f2eab55ca6ffad40575876398594eb0aabf857710b6ec535b"} Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.900877 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4mdx8" event={"ID":"5c4b5ff7-12e8-44a2-8836-e8b37d34831b","Type":"ContainerStarted","Data":"cc33cc1dcc8ab893f72d83c748b0198840627e3bc01146bd822f647493f63dfe"} Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.914418 4826 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-4mdx8 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.914491 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4mdx8" podUID="5c4b5ff7-12e8-44a2-8836-e8b37d34831b" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.922936 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-p6c7j" event={"ID":"eb5f2299-431e-4001-a9f1-52049e2dce8d","Type":"ContainerStarted","Data":"340fc74e5fe9777dbcc2f74defcbba19e0bdd46d68e0c3f17ab4bd44a52ffcdf"} Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.924457 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-976l6" event={"ID":"a450b005-ae46-404b-8ceb-4b7d915a960d","Type":"ContainerStarted","Data":"b00d163e5821f83f5e199273d4265ea2888d83c29e7741d687bf2a7a1b5352b5"} Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.924490 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-976l6" event={"ID":"a450b005-ae46-404b-8ceb-4b7d915a960d","Type":"ContainerStarted","Data":"405edfac8c5823a508b53427409cc615f523ca2b6893745610aef5fb954ac5c4"} Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.937040 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qsvfr" event={"ID":"6599a11f-be2d-418d-9410-4f27a32db1ab","Type":"ContainerStarted","Data":"080126436e60271d05e51b4f47bb2eb776eb181417e5dfc8932558f1905e19bf"} Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.937089 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qsvfr" event={"ID":"6599a11f-be2d-418d-9410-4f27a32db1ab","Type":"ContainerStarted","Data":"1d8848c760a982a4cbdceb64c1644aaf09f2bf524916f3198192b4568700e084"} Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.975006 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-94k2w" event={"ID":"d35202e5-599d-4dae-b3db-0ae1a99416c2","Type":"ContainerStarted","Data":"21725a6eca8bffd5c7cdde48273389c8829552c7db2a043ad012b401f73b4ba7"} Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.982551 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4mdx8" podStartSLOduration=124.982531864 podStartE2EDuration="2m4.982531864s" podCreationTimestamp="2026-01-31 07:36:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:38:34.936552383 +0000 UTC m=+146.790438732" watchObservedRunningTime="2026-01-31 07:38:34.982531864 +0000 UTC m=+146.836418223" Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.982651 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-976l6" podStartSLOduration=124.982647517 podStartE2EDuration="2m4.982647517s" podCreationTimestamp="2026-01-31 07:36:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:38:34.981335339 +0000 UTC m=+146.835221688" watchObservedRunningTime="2026-01-31 07:38:34.982647517 +0000 UTC m=+146.836533876" Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.983430 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:38:34 crc kubenswrapper[4826]: E0131 07:38:34.985202 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:38:35.48517751 +0000 UTC m=+147.339063869 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:34 crc kubenswrapper[4826]: I0131 07:38:34.993323 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xdl4h" event={"ID":"7ace0d3e-9818-4322-b306-f74e7d3fd5a1","Type":"ContainerStarted","Data":"a2521f2a7b4d05d74f1abee569a2675dbf1582a9cc43132e821c5ed38fc1f50d"} Jan 31 07:38:35 crc kubenswrapper[4826]: I0131 07:38:35.016456 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wl98f" event={"ID":"abc32430-2c83-4554-a335-238833ce1a9f","Type":"ContainerStarted","Data":"7ea9ee09fc07ed4350208fde7ae3e3486ccdf922f999248b666050e48c0a38b2"} Jan 31 07:38:35 crc kubenswrapper[4826]: I0131 07:38:35.061144 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-94k2w" podStartSLOduration=125.061124227 podStartE2EDuration="2m5.061124227s" podCreationTimestamp="2026-01-31 07:36:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:38:35.058349917 +0000 UTC m=+146.912236276" watchObservedRunningTime="2026-01-31 07:38:35.061124227 +0000 UTC m=+146.915010586" Jan 31 07:38:35 crc kubenswrapper[4826]: I0131 07:38:35.061514 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-7bpv5" event={"ID":"a2d22d2d-b77f-491b-956a-c3b36ae92dd1","Type":"ContainerStarted","Data":"7abbec99928bcee1152f775c2a49e037264156b0f78fbab2e6e7431090e5bb4b"} Jan 31 07:38:35 crc kubenswrapper[4826]: I0131 07:38:35.072853 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-xz6vc" event={"ID":"c4d71f46-e21e-46bb-92ce-10335bb7983a","Type":"ContainerStarted","Data":"3461913d03609fc1536fdbb0b13f8438eaad6d833d03e8b1497e7472400bba73"} Jan 31 07:38:35 crc kubenswrapper[4826]: I0131 07:38:35.072912 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-xz6vc" Jan 31 07:38:35 crc kubenswrapper[4826]: I0131 07:38:35.072925 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-xz6vc" event={"ID":"c4d71f46-e21e-46bb-92ce-10335bb7983a","Type":"ContainerStarted","Data":"3c0b10651a071bcb1dbcd7c74a0e00de7beeb40d5ea02599d4375439cdac1003"} Jan 31 07:38:35 crc kubenswrapper[4826]: I0131 07:38:35.095430 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:35 crc kubenswrapper[4826]: E0131 07:38:35.109318 4826 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:38:35.609297221 +0000 UTC m=+147.463183800 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmkrm" (UID: "d37f6dbf-56d4-46b2-8808-31999002461b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:35 crc kubenswrapper[4826]: I0131 07:38:35.113179 4826 patch_prober.go:28] interesting pod/console-operator-58897d9998-xz6vc container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/readyz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Jan 31 07:38:35 crc kubenswrapper[4826]: I0131 07:38:35.113242 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-xz6vc" podUID="c4d71f46-e21e-46bb-92ce-10335bb7983a" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/readyz\": dial tcp 10.217.0.37:8443: connect: connection refused" Jan 31 07:38:35 crc kubenswrapper[4826]: I0131 07:38:35.124723 4826 patch_prober.go:28] interesting pod/apiserver-76f77b778f-fbprj container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 31 07:38:35 crc kubenswrapper[4826]: [+]log ok Jan 31 07:38:35 crc kubenswrapper[4826]: [+]etcd ok Jan 31 07:38:35 crc kubenswrapper[4826]: [-]poststarthook/start-apiserver-admission-initializer failed: reason withheld Jan 31 07:38:35 crc kubenswrapper[4826]: [+]poststarthook/generic-apiserver-start-informers ok Jan 31 07:38:35 crc kubenswrapper[4826]: [+]poststarthook/max-in-flight-filter ok Jan 31 07:38:35 crc kubenswrapper[4826]: [-]poststarthook/storage-object-count-tracker-hook failed: reason withheld Jan 31 07:38:35 crc kubenswrapper[4826]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 31 07:38:35 crc kubenswrapper[4826]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 31 07:38:35 crc kubenswrapper[4826]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 31 07:38:35 crc kubenswrapper[4826]: [+]poststarthook/project.openshift.io-projectcache ok Jan 31 07:38:35 crc kubenswrapper[4826]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 31 07:38:35 crc kubenswrapper[4826]: [+]poststarthook/openshift.io-startinformers ok Jan 31 07:38:35 crc kubenswrapper[4826]: [-]poststarthook/openshift.io-restmapperupdater failed: reason withheld Jan 31 07:38:35 crc kubenswrapper[4826]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 31 07:38:35 crc kubenswrapper[4826]: livez check failed Jan 31 07:38:35 crc kubenswrapper[4826]: I0131 07:38:35.125132 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-fbprj" podUID="73f5be70-5ac7-4585-b5a3-e75c5f766822" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 07:38:35 crc kubenswrapper[4826]: I0131 
07:38:35.128566 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qsvfr" podStartSLOduration=125.128549088 podStartE2EDuration="2m5.128549088s" podCreationTimestamp="2026-01-31 07:36:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:38:35.123063129 +0000 UTC m=+146.976949488" watchObservedRunningTime="2026-01-31 07:38:35.128549088 +0000 UTC m=+146.982435447" Jan 31 07:38:35 crc kubenswrapper[4826]: I0131 07:38:35.209275 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:38:35 crc kubenswrapper[4826]: E0131 07:38:35.210939 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:38:35.71090719 +0000 UTC m=+147.564793549 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:35 crc kubenswrapper[4826]: I0131 07:38:35.295148 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-xz6vc" podStartSLOduration=126.295114137 podStartE2EDuration="2m6.295114137s" podCreationTimestamp="2026-01-31 07:36:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:38:35.294647143 +0000 UTC m=+147.148533502" watchObservedRunningTime="2026-01-31 07:38:35.295114137 +0000 UTC m=+147.149000496" Jan 31 07:38:35 crc kubenswrapper[4826]: I0131 07:38:35.296666 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xdl4h" podStartSLOduration=125.296661191 podStartE2EDuration="2m5.296661191s" podCreationTimestamp="2026-01-31 07:36:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:38:35.229471568 +0000 UTC m=+147.083357927" watchObservedRunningTime="2026-01-31 07:38:35.296661191 +0000 UTC m=+147.150547550" Jan 31 07:38:35 crc kubenswrapper[4826]: I0131 07:38:35.314781 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:35 crc kubenswrapper[4826]: E0131 07:38:35.315112 4826 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:38:35.815096575 +0000 UTC m=+147.668982934 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmkrm" (UID: "d37f6dbf-56d4-46b2-8808-31999002461b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:35 crc kubenswrapper[4826]: I0131 07:38:35.416373 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:38:35 crc kubenswrapper[4826]: E0131 07:38:35.416567 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:38:35.916532839 +0000 UTC m=+147.770419208 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:35 crc kubenswrapper[4826]: I0131 07:38:35.416860 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:35 crc kubenswrapper[4826]: E0131 07:38:35.417287 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:38:35.917272491 +0000 UTC m=+147.771158850 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmkrm" (UID: "d37f6dbf-56d4-46b2-8808-31999002461b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:35 crc kubenswrapper[4826]: I0131 07:38:35.517565 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:38:35 crc kubenswrapper[4826]: E0131 07:38:35.517927 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:38:36.017904822 +0000 UTC m=+147.871791181 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:35 crc kubenswrapper[4826]: I0131 07:38:35.542458 4826 patch_prober.go:28] interesting pod/router-default-5444994796-7bmfc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 07:38:35 crc kubenswrapper[4826]: [-]has-synced failed: reason withheld Jan 31 07:38:35 crc kubenswrapper[4826]: [+]process-running ok Jan 31 07:38:35 crc kubenswrapper[4826]: healthz check failed Jan 31 07:38:35 crc kubenswrapper[4826]: I0131 07:38:35.542515 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7bmfc" podUID="255c8988-162e-4ae1-982f-e45cde006077" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 07:38:35 crc kubenswrapper[4826]: I0131 07:38:35.546348 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9qt4f" Jan 31 07:38:35 crc kubenswrapper[4826]: I0131 07:38:35.619551 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:35 crc kubenswrapper[4826]: E0131 07:38:35.620032 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:38:36.120005255 +0000 UTC m=+147.973891604 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmkrm" (UID: "d37f6dbf-56d4-46b2-8808-31999002461b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:35 crc kubenswrapper[4826]: I0131 07:38:35.720413 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:38:35 crc kubenswrapper[4826]: E0131 07:38:35.720604 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:38:36.220576495 +0000 UTC m=+148.074462854 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:35 crc kubenswrapper[4826]: I0131 07:38:35.720691 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:35 crc kubenswrapper[4826]: E0131 07:38:35.721089 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:38:36.221073909 +0000 UTC m=+148.074960458 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmkrm" (UID: "d37f6dbf-56d4-46b2-8808-31999002461b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:35 crc kubenswrapper[4826]: I0131 07:38:35.778558 4826 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-rrdbv container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 07:38:35 crc kubenswrapper[4826]: I0131 07:38:35.778629 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rrdbv" podUID="b7521f47-f937-4c7a-a556-e03df8257499" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.32:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 07:38:35 crc kubenswrapper[4826]: I0131 07:38:35.821640 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:38:35 crc kubenswrapper[4826]: E0131 07:38:35.821863 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:38:36.321830374 +0000 UTC m=+148.175716833 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:35 crc kubenswrapper[4826]: I0131 07:38:35.821986 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:35 crc kubenswrapper[4826]: E0131 07:38:35.822330 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:38:36.322313768 +0000 UTC m=+148.176200127 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmkrm" (UID: "d37f6dbf-56d4-46b2-8808-31999002461b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:35 crc kubenswrapper[4826]: I0131 07:38:35.922512 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:38:35 crc kubenswrapper[4826]: E0131 07:38:35.922646 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:38:36.42261657 +0000 UTC m=+148.276502929 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:35 crc kubenswrapper[4826]: I0131 07:38:35.923054 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:35 crc kubenswrapper[4826]: E0131 07:38:35.923417 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:38:36.423406883 +0000 UTC m=+148.277293242 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmkrm" (UID: "d37f6dbf-56d4-46b2-8808-31999002461b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:36 crc kubenswrapper[4826]: I0131 07:38:36.023641 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:38:36 crc kubenswrapper[4826]: E0131 07:38:36.023874 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:38:36.523842588 +0000 UTC m=+148.377728947 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:36 crc kubenswrapper[4826]: I0131 07:38:36.023955 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:36 crc kubenswrapper[4826]: E0131 07:38:36.024355 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:38:36.524336423 +0000 UTC m=+148.378222782 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmkrm" (UID: "d37f6dbf-56d4-46b2-8808-31999002461b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:36 crc kubenswrapper[4826]: I0131 07:38:36.071385 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-7bpv5" event={"ID":"a2d22d2d-b77f-491b-956a-c3b36ae92dd1","Type":"ContainerStarted","Data":"506a2480566aba54f5447df48967bee0fd5d30fce3c54069f162208449dd8823"} Jan 31 07:38:36 crc kubenswrapper[4826]: I0131 07:38:36.071433 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-7bpv5" event={"ID":"a2d22d2d-b77f-491b-956a-c3b36ae92dd1","Type":"ContainerStarted","Data":"aa814f0210633cedc2e0024ee5caae5b597d9e54005d0b57775341b0d61ed8de"} Jan 31 07:38:36 crc kubenswrapper[4826]: I0131 07:38:36.073268 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2w6c4" event={"ID":"c702dc52-bad4-47e0-a9cf-5091595186a3","Type":"ContainerStarted","Data":"420af8e2d0dfe139a071d79b60544f89b9b6d47e0cf6f8faa36d3bebf35533e9"} Jan 31 07:38:36 crc kubenswrapper[4826]: I0131 07:38:36.073361 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2w6c4" Jan 31 07:38:36 crc kubenswrapper[4826]: I0131 07:38:36.074889 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-p6c7j" event={"ID":"eb5f2299-431e-4001-a9f1-52049e2dce8d","Type":"ContainerStarted","Data":"1cefe7638a09f8e1787aabd47eff5f503f2e0496b77ec6a19d9c50c763d6d44b"} Jan 31 07:38:36 crc kubenswrapper[4826]: I0131 07:38:36.074931 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-p6c7j" event={"ID":"eb5f2299-431e-4001-a9f1-52049e2dce8d","Type":"ContainerStarted","Data":"e900bed91b6b13241276c2077152f5aba7fa37fd22749f27485d64308775b44f"} Jan 31 07:38:36 crc kubenswrapper[4826]: I0131 07:38:36.075030 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-p6c7j" Jan 31 07:38:36 crc kubenswrapper[4826]: I0131 07:38:36.076117 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-94k2w" event={"ID":"d35202e5-599d-4dae-b3db-0ae1a99416c2","Type":"ContainerStarted","Data":"f9ca217bd24b795c457674e9277e974c631636f6e2fee8e10d84beacf1d6f462"} Jan 31 07:38:36 crc kubenswrapper[4826]: I0131 07:38:36.078010 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-64lqw" event={"ID":"c8e45f11-204e-4536-bd7b-76c9bdeee4c3","Type":"ContainerStarted","Data":"266d4ed4e24125111c086ea33b36ca31a5e78ee14acf76c6f0ef49d6a6144838"} Jan 31 07:38:36 crc kubenswrapper[4826]: I0131 07:38:36.078483 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-64lqw" Jan 31 07:38:36 crc kubenswrapper[4826]: I0131 07:38:36.080177 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns-operator/dns-operator-744455d44c-5ns9w" event={"ID":"bb86efdd-32bd-4a38-b343-6231b1d805f9","Type":"ContainerStarted","Data":"5b473a48af49c5d09c885ba5decde089b61aa5fa28a3aac15cbde1e6a8f9b9c1"} Jan 31 07:38:36 crc kubenswrapper[4826]: I0131 07:38:36.081717 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wl98f" event={"ID":"abc32430-2c83-4554-a335-238833ce1a9f","Type":"ContainerStarted","Data":"b031e24cadf143f8a78cc6959c7edb8946b8b98af9b8ede7e98808675061b148"} Jan 31 07:38:36 crc kubenswrapper[4826]: I0131 07:38:36.082892 4826 generic.go:334] "Generic (PLEG): container finished" podID="125e0e2a-6c4a-487f-ab4e-fb439ba80bc0" containerID="e32d367b2e18eb7158f4602f99ef56a2b3ae65848df185a77fc1a15524cb09d8" exitCode=0 Jan 31 07:38:36 crc kubenswrapper[4826]: I0131 07:38:36.083005 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497410-bfxcj" event={"ID":"125e0e2a-6c4a-487f-ab4e-fb439ba80bc0","Type":"ContainerDied","Data":"e32d367b2e18eb7158f4602f99ef56a2b3ae65848df185a77fc1a15524cb09d8"} Jan 31 07:38:36 crc kubenswrapper[4826]: I0131 07:38:36.085045 4826 patch_prober.go:28] interesting pod/console-operator-58897d9998-xz6vc container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/readyz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Jan 31 07:38:36 crc kubenswrapper[4826]: I0131 07:38:36.085084 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-xz6vc" podUID="c4d71f46-e21e-46bb-92ce-10335bb7983a" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/readyz\": dial tcp 10.217.0.37:8443: connect: connection refused" Jan 31 07:38:36 crc kubenswrapper[4826]: I0131 07:38:36.085254 4826 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-qwd92 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Jan 31 07:38:36 crc kubenswrapper[4826]: I0131 07:38:36.085307 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-qwd92" podUID="2da6f17d-aeb0-4cc8-8f11-c99bea508129" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" Jan 31 07:38:36 crc kubenswrapper[4826]: I0131 07:38:36.088822 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4mdx8" Jan 31 07:38:36 crc kubenswrapper[4826]: I0131 07:38:36.098450 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9qt4f" Jan 31 07:38:36 crc kubenswrapper[4826]: I0131 07:38:36.118936 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-7bpv5" podStartSLOduration=126.118914459 podStartE2EDuration="2m6.118914459s" podCreationTimestamp="2026-01-31 07:36:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:38:36.095884473 +0000 UTC m=+147.949770832" 
watchObservedRunningTime="2026-01-31 07:38:36.118914459 +0000 UTC m=+147.972800808" Jan 31 07:38:36 crc kubenswrapper[4826]: I0131 07:38:36.124605 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:38:36 crc kubenswrapper[4826]: E0131 07:38:36.124777 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:38:36.624747657 +0000 UTC m=+148.478634016 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:36 crc kubenswrapper[4826]: I0131 07:38:36.125378 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:36 crc kubenswrapper[4826]: E0131 07:38:36.125812 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:38:36.625792708 +0000 UTC m=+148.479679067 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmkrm" (UID: "d37f6dbf-56d4-46b2-8808-31999002461b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:36 crc kubenswrapper[4826]: I0131 07:38:36.133851 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-w4tzn" Jan 31 07:38:36 crc kubenswrapper[4826]: I0131 07:38:36.142924 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-5ns9w" podStartSLOduration=127.142901593 podStartE2EDuration="2m7.142901593s" podCreationTimestamp="2026-01-31 07:36:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:38:36.136707273 +0000 UTC m=+147.990593652" watchObservedRunningTime="2026-01-31 07:38:36.142901593 +0000 UTC m=+147.996787952" Jan 31 07:38:36 crc kubenswrapper[4826]: I0131 07:38:36.143478 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2w6c4" podStartSLOduration=127.143469969 podStartE2EDuration="2m7.143469969s" podCreationTimestamp="2026-01-31 07:36:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:38:36.117478877 +0000 UTC m=+147.971365256" watchObservedRunningTime="2026-01-31 07:38:36.143469969 +0000 UTC m=+147.997356328" Jan 31 07:38:36 crc kubenswrapper[4826]: I0131 07:38:36.197035 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-64lqw" podStartSLOduration=126.197012697 podStartE2EDuration="2m6.197012697s" podCreationTimestamp="2026-01-31 07:36:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:38:36.195887745 +0000 UTC m=+148.049774114" watchObservedRunningTime="2026-01-31 07:38:36.197012697 +0000 UTC m=+148.050899056" Jan 31 07:38:36 crc kubenswrapper[4826]: I0131 07:38:36.229024 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:38:36 crc kubenswrapper[4826]: E0131 07:38:36.229737 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:38:36.729719633 +0000 UTC m=+148.583605992 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:36 crc kubenswrapper[4826]: I0131 07:38:36.256463 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-p6c7j" podStartSLOduration=8.256445386 podStartE2EDuration="8.256445386s" podCreationTimestamp="2026-01-31 07:38:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:38:36.223185374 +0000 UTC m=+148.077071753" watchObservedRunningTime="2026-01-31 07:38:36.256445386 +0000 UTC m=+148.110331745" Jan 31 07:38:36 crc kubenswrapper[4826]: I0131 07:38:36.330860 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:36 crc kubenswrapper[4826]: E0131 07:38:36.331303 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:38:36.831283922 +0000 UTC m=+148.685170291 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmkrm" (UID: "d37f6dbf-56d4-46b2-8808-31999002461b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:36 crc kubenswrapper[4826]: I0131 07:38:36.366833 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rrdbv" Jan 31 07:38:36 crc kubenswrapper[4826]: I0131 07:38:36.431655 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:38:36 crc kubenswrapper[4826]: E0131 07:38:36.432197 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:38:36.93217338 +0000 UTC m=+148.786059739 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:36 crc kubenswrapper[4826]: I0131 07:38:36.533474 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:36 crc kubenswrapper[4826]: E0131 07:38:36.533906 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:38:37.033885823 +0000 UTC m=+148.887772182 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmkrm" (UID: "d37f6dbf-56d4-46b2-8808-31999002461b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:36 crc kubenswrapper[4826]: I0131 07:38:36.536353 4826 patch_prober.go:28] interesting pod/router-default-5444994796-7bmfc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 07:38:36 crc kubenswrapper[4826]: [-]has-synced failed: reason withheld Jan 31 07:38:36 crc kubenswrapper[4826]: [+]process-running ok Jan 31 07:38:36 crc kubenswrapper[4826]: healthz check failed Jan 31 07:38:36 crc kubenswrapper[4826]: I0131 07:38:36.536430 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7bmfc" podUID="255c8988-162e-4ae1-982f-e45cde006077" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 07:38:36 crc kubenswrapper[4826]: I0131 07:38:36.634509 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:38:36 crc kubenswrapper[4826]: E0131 07:38:36.634661 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:38:37.134625617 +0000 UTC m=+148.988511976 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:36 crc kubenswrapper[4826]: I0131 07:38:36.634770 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:36 crc kubenswrapper[4826]: E0131 07:38:36.635231 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:38:37.135198424 +0000 UTC m=+148.989084963 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmkrm" (UID: "d37f6dbf-56d4-46b2-8808-31999002461b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:36 crc kubenswrapper[4826]: I0131 07:38:36.735359 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:38:36 crc kubenswrapper[4826]: E0131 07:38:36.735601 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:38:37.235545657 +0000 UTC m=+149.089432016 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:36 crc kubenswrapper[4826]: I0131 07:38:36.836533 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:36 crc kubenswrapper[4826]: E0131 07:38:36.836996 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:38:37.336976471 +0000 UTC m=+149.190862830 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmkrm" (UID: "d37f6dbf-56d4-46b2-8808-31999002461b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:36 crc kubenswrapper[4826]: I0131 07:38:36.937650 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:38:36 crc kubenswrapper[4826]: E0131 07:38:36.937842 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:38:37.437813238 +0000 UTC m=+149.291699587 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:36 crc kubenswrapper[4826]: I0131 07:38:36.937958 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:38:36 crc kubenswrapper[4826]: I0131 07:38:36.938025 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:38:36 crc kubenswrapper[4826]: I0131 07:38:36.938060 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:38:36 crc kubenswrapper[4826]: I0131 07:38:36.938108 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:38:36 crc kubenswrapper[4826]: I0131 07:38:36.938144 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:36 crc kubenswrapper[4826]: E0131 07:38:36.938475 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:38:37.438462067 +0000 UTC m=+149.292348426 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmkrm" (UID: "d37f6dbf-56d4-46b2-8808-31999002461b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:36 crc kubenswrapper[4826]: I0131 07:38:36.943192 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:38:36 crc kubenswrapper[4826]: I0131 07:38:36.943951 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:38:36 crc kubenswrapper[4826]: I0131 07:38:36.944328 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:38:36 crc kubenswrapper[4826]: I0131 07:38:36.945459 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.039393 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:38:37 crc kubenswrapper[4826]: E0131 07:38:37.039589 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:38:37.539558492 +0000 UTC m=+149.393444851 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.039845 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:37 crc kubenswrapper[4826]: E0131 07:38:37.040339 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:38:37.540320964 +0000 UTC m=+149.394207523 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmkrm" (UID: "d37f6dbf-56d4-46b2-8808-31999002461b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.094512 4826 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-qwd92 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.094568 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-qwd92" podUID="2da6f17d-aeb0-4cc8-8f11-c99bea508129" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.122256 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.126631 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-w2pp4"] Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.127852 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w2pp4" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.128657 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.139357 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.139691 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w2pp4"] Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.142091 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.142265 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfjcp\" (UniqueName: \"kubernetes.io/projected/04028e85-fcfa-4463-8279-00c5018bde40-kube-api-access-zfjcp\") pod \"community-operators-w2pp4\" (UID: \"04028e85-fcfa-4463-8279-00c5018bde40\") " pod="openshift-marketplace/community-operators-w2pp4" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.142544 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04028e85-fcfa-4463-8279-00c5018bde40-utilities\") pod \"community-operators-w2pp4\" (UID: \"04028e85-fcfa-4463-8279-00c5018bde40\") " pod="openshift-marketplace/community-operators-w2pp4" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.143392 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04028e85-fcfa-4463-8279-00c5018bde40-catalog-content\") pod \"community-operators-w2pp4\" (UID: \"04028e85-fcfa-4463-8279-00c5018bde40\") " pod="openshift-marketplace/community-operators-w2pp4" Jan 31 07:38:37 crc kubenswrapper[4826]: E0131 07:38:37.144331 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:38:37.644314952 +0000 UTC m=+149.498201311 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.232272 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.244151 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.244203 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04028e85-fcfa-4463-8279-00c5018bde40-utilities\") pod \"community-operators-w2pp4\" (UID: \"04028e85-fcfa-4463-8279-00c5018bde40\") " pod="openshift-marketplace/community-operators-w2pp4" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.244251 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04028e85-fcfa-4463-8279-00c5018bde40-catalog-content\") pod \"community-operators-w2pp4\" (UID: \"04028e85-fcfa-4463-8279-00c5018bde40\") " pod="openshift-marketplace/community-operators-w2pp4" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.244288 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfjcp\" (UniqueName: \"kubernetes.io/projected/04028e85-fcfa-4463-8279-00c5018bde40-kube-api-access-zfjcp\") pod \"community-operators-w2pp4\" (UID: \"04028e85-fcfa-4463-8279-00c5018bde40\") " pod="openshift-marketplace/community-operators-w2pp4" Jan 31 07:38:37 crc kubenswrapper[4826]: E0131 07:38:37.245163 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:38:37.745145999 +0000 UTC m=+149.599032368 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmkrm" (UID: "d37f6dbf-56d4-46b2-8808-31999002461b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.245622 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04028e85-fcfa-4463-8279-00c5018bde40-utilities\") pod \"community-operators-w2pp4\" (UID: \"04028e85-fcfa-4463-8279-00c5018bde40\") " pod="openshift-marketplace/community-operators-w2pp4" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.245689 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04028e85-fcfa-4463-8279-00c5018bde40-catalog-content\") pod \"community-operators-w2pp4\" (UID: \"04028e85-fcfa-4463-8279-00c5018bde40\") " pod="openshift-marketplace/community-operators-w2pp4" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.284260 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfjcp\" (UniqueName: \"kubernetes.io/projected/04028e85-fcfa-4463-8279-00c5018bde40-kube-api-access-zfjcp\") pod \"community-operators-w2pp4\" (UID: \"04028e85-fcfa-4463-8279-00c5018bde40\") " pod="openshift-marketplace/community-operators-w2pp4" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.304978 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-snchp"] Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.305839 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-snchp" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.311924 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.313709 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-snchp"] Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.346486 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:38:37 crc kubenswrapper[4826]: E0131 07:38:37.346803 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:38:37.84678533 +0000 UTC m=+149.700671689 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.451087 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b28978e-d7a9-41b2-998a-e4a3cd62e236-utilities\") pod \"certified-operators-snchp\" (UID: \"3b28978e-d7a9-41b2-998a-e4a3cd62e236\") " pod="openshift-marketplace/certified-operators-snchp" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.451147 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.451202 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b28978e-d7a9-41b2-998a-e4a3cd62e236-catalog-content\") pod \"certified-operators-snchp\" (UID: \"3b28978e-d7a9-41b2-998a-e4a3cd62e236\") " pod="openshift-marketplace/certified-operators-snchp" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.451227 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5btnw\" (UniqueName: \"kubernetes.io/projected/3b28978e-d7a9-41b2-998a-e4a3cd62e236-kube-api-access-5btnw\") pod \"certified-operators-snchp\" (UID: \"3b28978e-d7a9-41b2-998a-e4a3cd62e236\") " pod="openshift-marketplace/certified-operators-snchp" Jan 31 07:38:37 crc kubenswrapper[4826]: E0131 07:38:37.451529 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:38:37.951515319 +0000 UTC m=+149.805401678 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmkrm" (UID: "d37f6dbf-56d4-46b2-8808-31999002461b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.458348 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w2pp4" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.492110 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-czvwd"] Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.497068 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-czvwd" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.518733 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-czvwd"] Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.543890 4826 patch_prober.go:28] interesting pod/router-default-5444994796-7bmfc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 07:38:37 crc kubenswrapper[4826]: [-]has-synced failed: reason withheld Jan 31 07:38:37 crc kubenswrapper[4826]: [+]process-running ok Jan 31 07:38:37 crc kubenswrapper[4826]: healthz check failed Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.544068 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7bmfc" podUID="255c8988-162e-4ae1-982f-e45cde006077" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.553976 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.554076 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0108e13b-6622-4b3c-a0b3-7e91572001aa-catalog-content\") pod \"community-operators-czvwd\" (UID: \"0108e13b-6622-4b3c-a0b3-7e91572001aa\") " pod="openshift-marketplace/community-operators-czvwd" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.554127 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b28978e-d7a9-41b2-998a-e4a3cd62e236-catalog-content\") pod \"certified-operators-snchp\" (UID: \"3b28978e-d7a9-41b2-998a-e4a3cd62e236\") " pod="openshift-marketplace/certified-operators-snchp" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.554145 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0108e13b-6622-4b3c-a0b3-7e91572001aa-utilities\") pod \"community-operators-czvwd\" (UID: \"0108e13b-6622-4b3c-a0b3-7e91572001aa\") " pod="openshift-marketplace/community-operators-czvwd" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.554166 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5btnw\" (UniqueName: \"kubernetes.io/projected/3b28978e-d7a9-41b2-998a-e4a3cd62e236-kube-api-access-5btnw\") pod \"certified-operators-snchp\" (UID: \"3b28978e-d7a9-41b2-998a-e4a3cd62e236\") " pod="openshift-marketplace/certified-operators-snchp" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.554192 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b28978e-d7a9-41b2-998a-e4a3cd62e236-utilities\") pod \"certified-operators-snchp\" (UID: \"3b28978e-d7a9-41b2-998a-e4a3cd62e236\") " pod="openshift-marketplace/certified-operators-snchp" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.554216 
4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b52hz\" (UniqueName: \"kubernetes.io/projected/0108e13b-6622-4b3c-a0b3-7e91572001aa-kube-api-access-b52hz\") pod \"community-operators-czvwd\" (UID: \"0108e13b-6622-4b3c-a0b3-7e91572001aa\") " pod="openshift-marketplace/community-operators-czvwd" Jan 31 07:38:37 crc kubenswrapper[4826]: E0131 07:38:37.554311 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:38:38.054296733 +0000 UTC m=+149.908183092 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.555062 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b28978e-d7a9-41b2-998a-e4a3cd62e236-catalog-content\") pod \"certified-operators-snchp\" (UID: \"3b28978e-d7a9-41b2-998a-e4a3cd62e236\") " pod="openshift-marketplace/certified-operators-snchp" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.555291 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b28978e-d7a9-41b2-998a-e4a3cd62e236-utilities\") pod \"certified-operators-snchp\" (UID: \"3b28978e-d7a9-41b2-998a-e4a3cd62e236\") " pod="openshift-marketplace/certified-operators-snchp" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.556478 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497410-bfxcj" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.614855 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5btnw\" (UniqueName: \"kubernetes.io/projected/3b28978e-d7a9-41b2-998a-e4a3cd62e236-kube-api-access-5btnw\") pod \"certified-operators-snchp\" (UID: \"3b28978e-d7a9-41b2-998a-e4a3cd62e236\") " pod="openshift-marketplace/certified-operators-snchp" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.624093 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-snchp" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.663553 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/125e0e2a-6c4a-487f-ab4e-fb439ba80bc0-secret-volume\") pod \"125e0e2a-6c4a-487f-ab4e-fb439ba80bc0\" (UID: \"125e0e2a-6c4a-487f-ab4e-fb439ba80bc0\") " Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.663619 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/125e0e2a-6c4a-487f-ab4e-fb439ba80bc0-config-volume\") pod \"125e0e2a-6c4a-487f-ab4e-fb439ba80bc0\" (UID: \"125e0e2a-6c4a-487f-ab4e-fb439ba80bc0\") " Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.663771 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4l42\" (UniqueName: \"kubernetes.io/projected/125e0e2a-6c4a-487f-ab4e-fb439ba80bc0-kube-api-access-k4l42\") pod \"125e0e2a-6c4a-487f-ab4e-fb439ba80bc0\" (UID: \"125e0e2a-6c4a-487f-ab4e-fb439ba80bc0\") " Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.665373 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 31 07:38:37 crc kubenswrapper[4826]: E0131 07:38:37.665680 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="125e0e2a-6c4a-487f-ab4e-fb439ba80bc0" containerName="collect-profiles" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.665702 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="125e0e2a-6c4a-487f-ab4e-fb439ba80bc0" containerName="collect-profiles" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.665799 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="125e0e2a-6c4a-487f-ab4e-fb439ba80bc0" containerName="collect-profiles" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.665872 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b52hz\" (UniqueName: \"kubernetes.io/projected/0108e13b-6622-4b3c-a0b3-7e91572001aa-kube-api-access-b52hz\") pod \"community-operators-czvwd\" (UID: \"0108e13b-6622-4b3c-a0b3-7e91572001aa\") " pod="openshift-marketplace/community-operators-czvwd" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.666088 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.666225 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.664394 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/125e0e2a-6c4a-487f-ab4e-fb439ba80bc0-config-volume" (OuterVolumeSpecName: "config-volume") pod "125e0e2a-6c4a-487f-ab4e-fb439ba80bc0" (UID: "125e0e2a-6c4a-487f-ab4e-fb439ba80bc0"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.667267 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/125e0e2a-6c4a-487f-ab4e-fb439ba80bc0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "125e0e2a-6c4a-487f-ab4e-fb439ba80bc0" (UID: "125e0e2a-6c4a-487f-ab4e-fb439ba80bc0"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.669865 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.670230 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.673602 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0108e13b-6622-4b3c-a0b3-7e91572001aa-catalog-content\") pod \"community-operators-czvwd\" (UID: \"0108e13b-6622-4b3c-a0b3-7e91572001aa\") " pod="openshift-marketplace/community-operators-czvwd" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.673750 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0108e13b-6622-4b3c-a0b3-7e91572001aa-utilities\") pod \"community-operators-czvwd\" (UID: \"0108e13b-6622-4b3c-a0b3-7e91572001aa\") " pod="openshift-marketplace/community-operators-czvwd" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.673868 4826 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/125e0e2a-6c4a-487f-ab4e-fb439ba80bc0-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.673884 4826 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/125e0e2a-6c4a-487f-ab4e-fb439ba80bc0-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.674073 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/125e0e2a-6c4a-487f-ab4e-fb439ba80bc0-kube-api-access-k4l42" (OuterVolumeSpecName: "kube-api-access-k4l42") pod "125e0e2a-6c4a-487f-ab4e-fb439ba80bc0" (UID: "125e0e2a-6c4a-487f-ab4e-fb439ba80bc0"). InnerVolumeSpecName "kube-api-access-k4l42". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:38:37 crc kubenswrapper[4826]: E0131 07:38:37.674226 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:38:38.174207172 +0000 UTC m=+150.028093531 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmkrm" (UID: "d37f6dbf-56d4-46b2-8808-31999002461b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.674369 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0108e13b-6622-4b3c-a0b3-7e91572001aa-utilities\") pod \"community-operators-czvwd\" (UID: \"0108e13b-6622-4b3c-a0b3-7e91572001aa\") " pod="openshift-marketplace/community-operators-czvwd" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.674449 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0108e13b-6622-4b3c-a0b3-7e91572001aa-catalog-content\") pod \"community-operators-czvwd\" (UID: \"0108e13b-6622-4b3c-a0b3-7e91572001aa\") " pod="openshift-marketplace/community-operators-czvwd" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.675487 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.696141 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2zbk2"] Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.697569 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2zbk2" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.710274 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b52hz\" (UniqueName: \"kubernetes.io/projected/0108e13b-6622-4b3c-a0b3-7e91572001aa-kube-api-access-b52hz\") pod \"community-operators-czvwd\" (UID: \"0108e13b-6622-4b3c-a0b3-7e91572001aa\") " pod="openshift-marketplace/community-operators-czvwd" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.711393 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2zbk2"] Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.725940 4826 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.774716 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.774899 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d9ec5ea-b382-4dfa-aa85-204a2b376589-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2d9ec5ea-b382-4dfa-aa85-204a2b376589\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.775046 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad75cbef-01f1-46ef-bfe2-d1e864e2efe1-catalog-content\") pod \"certified-operators-2zbk2\" (UID: \"ad75cbef-01f1-46ef-bfe2-d1e864e2efe1\") " pod="openshift-marketplace/certified-operators-2zbk2" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.775070 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d9ec5ea-b382-4dfa-aa85-204a2b376589-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2d9ec5ea-b382-4dfa-aa85-204a2b376589\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.775090 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad75cbef-01f1-46ef-bfe2-d1e864e2efe1-utilities\") pod \"certified-operators-2zbk2\" (UID: \"ad75cbef-01f1-46ef-bfe2-d1e864e2efe1\") " pod="openshift-marketplace/certified-operators-2zbk2" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.775127 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f2mz\" (UniqueName: \"kubernetes.io/projected/ad75cbef-01f1-46ef-bfe2-d1e864e2efe1-kube-api-access-8f2mz\") pod \"certified-operators-2zbk2\" (UID: \"ad75cbef-01f1-46ef-bfe2-d1e864e2efe1\") " pod="openshift-marketplace/certified-operators-2zbk2" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.775184 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4l42\" (UniqueName: \"kubernetes.io/projected/125e0e2a-6c4a-487f-ab4e-fb439ba80bc0-kube-api-access-k4l42\") on node \"crc\" DevicePath \"\"" Jan 31 07:38:37 crc kubenswrapper[4826]: E0131 07:38:37.775311 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:38:38.275292726 +0000 UTC m=+150.129179075 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.819818 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-czvwd" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.870148 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w2pp4"] Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.878366 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.878438 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad75cbef-01f1-46ef-bfe2-d1e864e2efe1-catalog-content\") pod \"certified-operators-2zbk2\" (UID: \"ad75cbef-01f1-46ef-bfe2-d1e864e2efe1\") " pod="openshift-marketplace/certified-operators-2zbk2" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.878456 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d9ec5ea-b382-4dfa-aa85-204a2b376589-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2d9ec5ea-b382-4dfa-aa85-204a2b376589\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.878486 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad75cbef-01f1-46ef-bfe2-d1e864e2efe1-utilities\") pod \"certified-operators-2zbk2\" (UID: \"ad75cbef-01f1-46ef-bfe2-d1e864e2efe1\") " pod="openshift-marketplace/certified-operators-2zbk2" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.878503 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8f2mz\" (UniqueName: \"kubernetes.io/projected/ad75cbef-01f1-46ef-bfe2-d1e864e2efe1-kube-api-access-8f2mz\") pod \"certified-operators-2zbk2\" (UID: \"ad75cbef-01f1-46ef-bfe2-d1e864e2efe1\") " pod="openshift-marketplace/certified-operators-2zbk2" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.878557 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d9ec5ea-b382-4dfa-aa85-204a2b376589-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2d9ec5ea-b382-4dfa-aa85-204a2b376589\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.878629 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d9ec5ea-b382-4dfa-aa85-204a2b376589-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2d9ec5ea-b382-4dfa-aa85-204a2b376589\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 07:38:37 crc kubenswrapper[4826]: E0131 07:38:37.878900 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:38:38.378887823 +0000 UTC m=+150.232774182 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmkrm" (UID: "d37f6dbf-56d4-46b2-8808-31999002461b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.879261 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad75cbef-01f1-46ef-bfe2-d1e864e2efe1-catalog-content\") pod \"certified-operators-2zbk2\" (UID: \"ad75cbef-01f1-46ef-bfe2-d1e864e2efe1\") " pod="openshift-marketplace/certified-operators-2zbk2" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.879873 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad75cbef-01f1-46ef-bfe2-d1e864e2efe1-utilities\") pod \"certified-operators-2zbk2\" (UID: \"ad75cbef-01f1-46ef-bfe2-d1e864e2efe1\") " pod="openshift-marketplace/certified-operators-2zbk2" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.903266 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f2mz\" (UniqueName: \"kubernetes.io/projected/ad75cbef-01f1-46ef-bfe2-d1e864e2efe1-kube-api-access-8f2mz\") pod \"certified-operators-2zbk2\" (UID: \"ad75cbef-01f1-46ef-bfe2-d1e864e2efe1\") " pod="openshift-marketplace/certified-operators-2zbk2" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.907395 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d9ec5ea-b382-4dfa-aa85-204a2b376589-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2d9ec5ea-b382-4dfa-aa85-204a2b376589\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.974273 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2w6c4" Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.982645 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:38:37 crc kubenswrapper[4826]: E0131 07:38:37.982833 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 07:38:38.482806049 +0000 UTC m=+150.336692408 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.982918 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:37 crc kubenswrapper[4826]: E0131 07:38:37.983469 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 07:38:38.483441618 +0000 UTC m=+150.337327977 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmkrm" (UID: "d37f6dbf-56d4-46b2-8808-31999002461b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 07:38:37 crc kubenswrapper[4826]: I0131 07:38:37.985208 4826 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-31T07:38:37.725958399Z","Handler":null,"Name":""} Jan 31 07:38:38 crc kubenswrapper[4826]: I0131 07:38:37.994888 4826 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 31 07:38:38 crc kubenswrapper[4826]: I0131 07:38:37.994919 4826 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 31 07:38:38 crc kubenswrapper[4826]: I0131 07:38:37.996540 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 07:38:38 crc kubenswrapper[4826]: I0131 07:38:38.020248 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2zbk2" Jan 31 07:38:38 crc kubenswrapper[4826]: I0131 07:38:38.083788 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 07:38:38 crc kubenswrapper[4826]: I0131 07:38:38.093264 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 31 07:38:38 crc kubenswrapper[4826]: I0131 07:38:38.111470 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wl98f" event={"ID":"abc32430-2c83-4554-a335-238833ce1a9f","Type":"ContainerStarted","Data":"eba381734a3e6e2d9dfc497d90ec51047f310c20202958bdd695504b8b7748cf"} Jan 31 07:38:38 crc kubenswrapper[4826]: I0131 07:38:38.111515 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wl98f" event={"ID":"abc32430-2c83-4554-a335-238833ce1a9f","Type":"ContainerStarted","Data":"81b55d611056b328fba7a2eac4bd1a54b479191eee1825524862817ea5372095"} Jan 31 07:38:38 crc kubenswrapper[4826]: I0131 07:38:38.112902 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497410-bfxcj" event={"ID":"125e0e2a-6c4a-487f-ab4e-fb439ba80bc0","Type":"ContainerDied","Data":"ecdaf39887d89c5f9f561e51afb030db89047e044699c33d32aac107a3b4cd64"} Jan 31 07:38:38 crc kubenswrapper[4826]: I0131 07:38:38.112922 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecdaf39887d89c5f9f561e51afb030db89047e044699c33d32aac107a3b4cd64" Jan 31 07:38:38 crc kubenswrapper[4826]: I0131 07:38:38.112996 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497410-bfxcj" Jan 31 07:38:38 crc kubenswrapper[4826]: I0131 07:38:38.135851 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"997bb6a14d114a25f1f2499a5b242b9a870b534ecc5ce3850754d694aad117ac"} Jan 31 07:38:38 crc kubenswrapper[4826]: I0131 07:38:38.135889 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"d4a5c62122227c93534bc49f6856bf0e19f2f54e03379b4a2e70df956908009e"} Jan 31 07:38:38 crc kubenswrapper[4826]: I0131 07:38:38.139280 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"81ba2ee50db6893b819a93cad032a18c6a5376182a1194b04d5fe97f8cde09f2"} Jan 31 07:38:38 crc kubenswrapper[4826]: I0131 07:38:38.139553 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"83660cc256a7ad504cc2d9c66fbb9b6b402914d5588b76491ffa2b4469c61d38"} Jan 31 07:38:38 crc kubenswrapper[4826]: I0131 07:38:38.283449 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:38:38 crc kubenswrapper[4826]: I0131 07:38:38.284929 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:38 crc kubenswrapper[4826]: I0131 07:38:38.311597 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"f4102858edc6360c1144f63f692b5b440db99a1f40ca24fc7d69f1824628c4e1"} Jan 31 07:38:38 crc kubenswrapper[4826]: I0131 07:38:38.311652 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"1aed965ac59e92f1232f6a7d67ec81e4f6f7df2219d4a9d91fb66d36f70ee3eb"} Jan 31 07:38:38 crc kubenswrapper[4826]: I0131 07:38:38.327063 4826 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 31 07:38:38 crc kubenswrapper[4826]: I0131 07:38:38.327100 4826 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:38 crc kubenswrapper[4826]: I0131 07:38:38.349618 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2pp4" event={"ID":"04028e85-fcfa-4463-8279-00c5018bde40","Type":"ContainerStarted","Data":"48e7f71d69c24e7a1820e909bec70e8b059d499f3197cdb5c55639147e0c49c8"} Jan 31 07:38:38 crc kubenswrapper[4826]: I0131 07:38:38.405757 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-snchp"] Jan 31 07:38:38 crc kubenswrapper[4826]: I0131 07:38:38.478826 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmkrm\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:38 crc kubenswrapper[4826]: I0131 07:38:38.496454 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2zbk2"] Jan 31 07:38:38 crc kubenswrapper[4826]: I0131 07:38:38.538573 4826 patch_prober.go:28] interesting pod/router-default-5444994796-7bmfc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 07:38:38 crc kubenswrapper[4826]: [-]has-synced failed: reason withheld Jan 31 07:38:38 crc kubenswrapper[4826]: [+]process-running ok Jan 31 07:38:38 crc kubenswrapper[4826]: healthz check failed Jan 31 07:38:38 crc kubenswrapper[4826]: I0131 07:38:38.538925 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7bmfc" podUID="255c8988-162e-4ae1-982f-e45cde006077" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 07:38:38 crc kubenswrapper[4826]: I0131 07:38:38.588946 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 31 07:38:38 crc kubenswrapper[4826]: W0131 07:38:38.597628 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod2d9ec5ea_b382_4dfa_aa85_204a2b376589.slice/crio-9ebefce83a65e3cbbb7f0f787270e2837e346e9d0e6b51c7958bc76e27ae3a6e WatchSource:0}: Error finding container 9ebefce83a65e3cbbb7f0f787270e2837e346e9d0e6b51c7958bc76e27ae3a6e: Status 404 returned error can't find the container with id 9ebefce83a65e3cbbb7f0f787270e2837e346e9d0e6b51c7958bc76e27ae3a6e Jan 31 07:38:38 crc kubenswrapper[4826]: I0131 07:38:38.604517 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-czvwd"] Jan 31 07:38:38 crc kubenswrapper[4826]: I0131 07:38:38.639063 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:38 crc kubenswrapper[4826]: W0131 07:38:38.642156 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0108e13b_6622_4b3c_a0b3_7e91572001aa.slice/crio-47ede3009e6008727f974352505c1a39bfac75aa4e8166b08c88a0bc65a03426 WatchSource:0}: Error finding container 47ede3009e6008727f974352505c1a39bfac75aa4e8166b08c88a0bc65a03426: Status 404 returned error can't find the container with id 47ede3009e6008727f974352505c1a39bfac75aa4e8166b08c88a0bc65a03426 Jan 31 07:38:38 crc kubenswrapper[4826]: I0131 07:38:38.827290 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 31 07:38:38 crc kubenswrapper[4826]: I0131 07:38:38.865251 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kmkrm"] Jan 31 07:38:38 crc kubenswrapper[4826]: W0131 07:38:38.923794 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd37f6dbf_56d4_46b2_8808_31999002461b.slice/crio-6aca2b5f0d3504ba25389c3bdca2688e0a258c88de2a7ed95f6cf830af0c1c08 WatchSource:0}: Error finding container 6aca2b5f0d3504ba25389c3bdca2688e0a258c88de2a7ed95f6cf830af0c1c08: Status 404 returned error can't find the container with id 6aca2b5f0d3504ba25389c3bdca2688e0a258c88de2a7ed95f6cf830af0c1c08 Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.287897 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-s5njx"] Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.289513 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s5njx" Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.292801 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.304846 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s5njx"] Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.351135 4826 generic.go:334] "Generic (PLEG): container finished" podID="3b28978e-d7a9-41b2-998a-e4a3cd62e236" containerID="180b6159babb93b7451188e48859792fb00a5ca092d5d6007c74fd8f0cb1bb20" exitCode=0 Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.351369 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-snchp" event={"ID":"3b28978e-d7a9-41b2-998a-e4a3cd62e236","Type":"ContainerDied","Data":"180b6159babb93b7451188e48859792fb00a5ca092d5d6007c74fd8f0cb1bb20"} Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.351800 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-snchp" event={"ID":"3b28978e-d7a9-41b2-998a-e4a3cd62e236","Type":"ContainerStarted","Data":"c3bfe2f62d36dbbbdbbd1a0852fd101d9d28b339f64cf0ea025ac0c62a9efd2a"} Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.354109 4826 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.354383 4826 generic.go:334] "Generic (PLEG): container finished" podID="04028e85-fcfa-4463-8279-00c5018bde40" containerID="f5ca9ebf4e9228c96f4325acead2d096411bceb01ac43edda5facbb3bf364b6c" exitCode=0 Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.354400 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2pp4" event={"ID":"04028e85-fcfa-4463-8279-00c5018bde40","Type":"ContainerDied","Data":"f5ca9ebf4e9228c96f4325acead2d096411bceb01ac43edda5facbb3bf364b6c"} Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.366388 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wl98f" event={"ID":"abc32430-2c83-4554-a335-238833ce1a9f","Type":"ContainerStarted","Data":"d890f789b5c0e0e6ec133fc2673490bb9df58a38f54b7768f4ee39e262b084c6"} Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.370829 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" event={"ID":"d37f6dbf-56d4-46b2-8808-31999002461b","Type":"ContainerStarted","Data":"2f5a630b690fa1ab1ade9ed27723f6cbba0c7f9d63041f8a78aecfae7f63e8c4"} Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.370901 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" event={"ID":"d37f6dbf-56d4-46b2-8808-31999002461b","Type":"ContainerStarted","Data":"6aca2b5f0d3504ba25389c3bdca2688e0a258c88de2a7ed95f6cf830af0c1c08"} Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.371152 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.374227 4826 generic.go:334] "Generic (PLEG): container finished" podID="ad75cbef-01f1-46ef-bfe2-d1e864e2efe1" containerID="ffa59a7dc46d2f742ef8798c0a8d379ec8d33e0bbf91efd7874b33f5caa82385" exitCode=0 Jan 31 07:38:39 crc 
kubenswrapper[4826]: I0131 07:38:39.374543 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2zbk2" event={"ID":"ad75cbef-01f1-46ef-bfe2-d1e864e2efe1","Type":"ContainerDied","Data":"ffa59a7dc46d2f742ef8798c0a8d379ec8d33e0bbf91efd7874b33f5caa82385"} Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.374627 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2zbk2" event={"ID":"ad75cbef-01f1-46ef-bfe2-d1e864e2efe1","Type":"ContainerStarted","Data":"e0dcd743031ded61bfaf86fb7efb1b3a9cd5fcf17196d0f5b475e4e04a809745"} Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.379807 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"2d9ec5ea-b382-4dfa-aa85-204a2b376589","Type":"ContainerStarted","Data":"b033419430fa0c588df2298c36a12a2e580986e968e0212c6a5c83c83c37be73"} Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.380386 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"2d9ec5ea-b382-4dfa-aa85-204a2b376589","Type":"ContainerStarted","Data":"9ebefce83a65e3cbbb7f0f787270e2837e346e9d0e6b51c7958bc76e27ae3a6e"} Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.383242 4826 generic.go:334] "Generic (PLEG): container finished" podID="0108e13b-6622-4b3c-a0b3-7e91572001aa" containerID="ed200a2e71b510349208257edfe34596013154d84a6f91c51149dfd2432e2913" exitCode=0 Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.385376 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-czvwd" event={"ID":"0108e13b-6622-4b3c-a0b3-7e91572001aa","Type":"ContainerDied","Data":"ed200a2e71b510349208257edfe34596013154d84a6f91c51149dfd2432e2913"} Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.385415 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-czvwd" event={"ID":"0108e13b-6622-4b3c-a0b3-7e91572001aa","Type":"ContainerStarted","Data":"47ede3009e6008727f974352505c1a39bfac75aa4e8166b08c88a0bc65a03426"} Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.412809 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fa7f5cc-ef29-4e91-9419-2d49284c4c98-utilities\") pod \"redhat-marketplace-s5njx\" (UID: \"1fa7f5cc-ef29-4e91-9419-2d49284c4c98\") " pod="openshift-marketplace/redhat-marketplace-s5njx" Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.413099 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fa7f5cc-ef29-4e91-9419-2d49284c4c98-catalog-content\") pod \"redhat-marketplace-s5njx\" (UID: \"1fa7f5cc-ef29-4e91-9419-2d49284c4c98\") " pod="openshift-marketplace/redhat-marketplace-s5njx" Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.413188 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rzm5\" (UniqueName: \"kubernetes.io/projected/1fa7f5cc-ef29-4e91-9419-2d49284c4c98-kube-api-access-5rzm5\") pod \"redhat-marketplace-s5njx\" (UID: \"1fa7f5cc-ef29-4e91-9419-2d49284c4c98\") " pod="openshift-marketplace/redhat-marketplace-s5njx" Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.426442 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-apiserver/apiserver-76f77b778f-fbprj" Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.442629 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-fbprj" Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.511862 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-wl98f" podStartSLOduration=11.511841114 podStartE2EDuration="11.511841114s" podCreationTimestamp="2026-01-31 07:38:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:38:39.471213138 +0000 UTC m=+151.325099497" watchObservedRunningTime="2026-01-31 07:38:39.511841114 +0000 UTC m=+151.365727473" Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.513930 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" podStartSLOduration=129.513923964 podStartE2EDuration="2m9.513923964s" podCreationTimestamp="2026-01-31 07:36:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:38:39.505977364 +0000 UTC m=+151.359863723" watchObservedRunningTime="2026-01-31 07:38:39.513923964 +0000 UTC m=+151.367810323" Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.514589 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rzm5\" (UniqueName: \"kubernetes.io/projected/1fa7f5cc-ef29-4e91-9419-2d49284c4c98-kube-api-access-5rzm5\") pod \"redhat-marketplace-s5njx\" (UID: \"1fa7f5cc-ef29-4e91-9419-2d49284c4c98\") " pod="openshift-marketplace/redhat-marketplace-s5njx" Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.514850 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fa7f5cc-ef29-4e91-9419-2d49284c4c98-utilities\") pod \"redhat-marketplace-s5njx\" (UID: \"1fa7f5cc-ef29-4e91-9419-2d49284c4c98\") " pod="openshift-marketplace/redhat-marketplace-s5njx" Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.515081 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fa7f5cc-ef29-4e91-9419-2d49284c4c98-catalog-content\") pod \"redhat-marketplace-s5njx\" (UID: \"1fa7f5cc-ef29-4e91-9419-2d49284c4c98\") " pod="openshift-marketplace/redhat-marketplace-s5njx" Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.515994 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fa7f5cc-ef29-4e91-9419-2d49284c4c98-utilities\") pod \"redhat-marketplace-s5njx\" (UID: \"1fa7f5cc-ef29-4e91-9419-2d49284c4c98\") " pod="openshift-marketplace/redhat-marketplace-s5njx" Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.516930 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fa7f5cc-ef29-4e91-9419-2d49284c4c98-catalog-content\") pod \"redhat-marketplace-s5njx\" (UID: \"1fa7f5cc-ef29-4e91-9419-2d49284c4c98\") " pod="openshift-marketplace/redhat-marketplace-s5njx" Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.541285 4826 patch_prober.go:28] interesting pod/router-default-5444994796-7bmfc container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 07:38:39 crc kubenswrapper[4826]: [-]has-synced failed: reason withheld Jan 31 07:38:39 crc kubenswrapper[4826]: [+]process-running ok Jan 31 07:38:39 crc kubenswrapper[4826]: healthz check failed Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.541350 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7bmfc" podUID="255c8988-162e-4ae1-982f-e45cde006077" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.551195 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rzm5\" (UniqueName: \"kubernetes.io/projected/1fa7f5cc-ef29-4e91-9419-2d49284c4c98-kube-api-access-5rzm5\") pod \"redhat-marketplace-s5njx\" (UID: \"1fa7f5cc-ef29-4e91-9419-2d49284c4c98\") " pod="openshift-marketplace/redhat-marketplace-s5njx" Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.578749 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.5787286590000003 podStartE2EDuration="2.578728659s" podCreationTimestamp="2026-01-31 07:38:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:38:39.575175826 +0000 UTC m=+151.429062185" watchObservedRunningTime="2026-01-31 07:38:39.578728659 +0000 UTC m=+151.432615018" Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.579412 4826 patch_prober.go:28] interesting pod/downloads-7954f5f757-bp2zc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.579452 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bp2zc" podUID="e71acdff-33d6-4052-907b-8e38ac391f58" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.579831 4826 patch_prober.go:28] interesting pod/downloads-7954f5f757-bp2zc container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.579855 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-bp2zc" podUID="e71acdff-33d6-4052-907b-8e38ac391f58" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.580199 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-9x5mr" Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.591820 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.592586 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.594138 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.594305 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.604079 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s5njx" Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.609119 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.700398 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m8854"] Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.701644 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m8854" Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.715933 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m8854"] Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.717909 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f56458d0-b1ed-4710-9c60-937189ea61d7-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"f56458d0-b1ed-4710-9c60-937189ea61d7\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.717942 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f56458d0-b1ed-4710-9c60-937189ea61d7-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"f56458d0-b1ed-4710-9c60-937189ea61d7\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.818690 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7624cfd-9296-45c5-86a9-2344eb6f976e-catalog-content\") pod \"redhat-marketplace-m8854\" (UID: \"f7624cfd-9296-45c5-86a9-2344eb6f976e\") " pod="openshift-marketplace/redhat-marketplace-m8854" Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.819162 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f56458d0-b1ed-4710-9c60-937189ea61d7-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"f56458d0-b1ed-4710-9c60-937189ea61d7\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.819200 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f56458d0-b1ed-4710-9c60-937189ea61d7-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"f56458d0-b1ed-4710-9c60-937189ea61d7\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.819243 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcn9h\" (UniqueName: 
\"kubernetes.io/projected/f7624cfd-9296-45c5-86a9-2344eb6f976e-kube-api-access-tcn9h\") pod \"redhat-marketplace-m8854\" (UID: \"f7624cfd-9296-45c5-86a9-2344eb6f976e\") " pod="openshift-marketplace/redhat-marketplace-m8854" Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.819285 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7624cfd-9296-45c5-86a9-2344eb6f976e-utilities\") pod \"redhat-marketplace-m8854\" (UID: \"f7624cfd-9296-45c5-86a9-2344eb6f976e\") " pod="openshift-marketplace/redhat-marketplace-m8854" Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.820193 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f56458d0-b1ed-4710-9c60-937189ea61d7-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"f56458d0-b1ed-4710-9c60-937189ea61d7\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.846876 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f56458d0-b1ed-4710-9c60-937189ea61d7-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"f56458d0-b1ed-4710-9c60-937189ea61d7\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.911461 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.920609 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7624cfd-9296-45c5-86a9-2344eb6f976e-catalog-content\") pod \"redhat-marketplace-m8854\" (UID: \"f7624cfd-9296-45c5-86a9-2344eb6f976e\") " pod="openshift-marketplace/redhat-marketplace-m8854" Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.920719 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcn9h\" (UniqueName: \"kubernetes.io/projected/f7624cfd-9296-45c5-86a9-2344eb6f976e-kube-api-access-tcn9h\") pod \"redhat-marketplace-m8854\" (UID: \"f7624cfd-9296-45c5-86a9-2344eb6f976e\") " pod="openshift-marketplace/redhat-marketplace-m8854" Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.920778 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7624cfd-9296-45c5-86a9-2344eb6f976e-utilities\") pod \"redhat-marketplace-m8854\" (UID: \"f7624cfd-9296-45c5-86a9-2344eb6f976e\") " pod="openshift-marketplace/redhat-marketplace-m8854" Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.921292 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7624cfd-9296-45c5-86a9-2344eb6f976e-utilities\") pod \"redhat-marketplace-m8854\" (UID: \"f7624cfd-9296-45c5-86a9-2344eb6f976e\") " pod="openshift-marketplace/redhat-marketplace-m8854" Jan 31 07:38:39 crc kubenswrapper[4826]: I0131 07:38:39.921527 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7624cfd-9296-45c5-86a9-2344eb6f976e-catalog-content\") pod \"redhat-marketplace-m8854\" (UID: \"f7624cfd-9296-45c5-86a9-2344eb6f976e\") " pod="openshift-marketplace/redhat-marketplace-m8854" Jan 31 07:38:39 crc 
kubenswrapper[4826]: I0131 07:38:39.945132 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcn9h\" (UniqueName: \"kubernetes.io/projected/f7624cfd-9296-45c5-86a9-2344eb6f976e-kube-api-access-tcn9h\") pod \"redhat-marketplace-m8854\" (UID: \"f7624cfd-9296-45c5-86a9-2344eb6f976e\") " pod="openshift-marketplace/redhat-marketplace-m8854" Jan 31 07:38:40 crc kubenswrapper[4826]: I0131 07:38:40.016684 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m8854" Jan 31 07:38:40 crc kubenswrapper[4826]: I0131 07:38:40.115404 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s5njx"] Jan 31 07:38:40 crc kubenswrapper[4826]: I0131 07:38:40.190487 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 31 07:38:40 crc kubenswrapper[4826]: I0131 07:38:40.291376 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-scztd"] Jan 31 07:38:40 crc kubenswrapper[4826]: I0131 07:38:40.297342 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-scztd" Jan 31 07:38:40 crc kubenswrapper[4826]: I0131 07:38:40.300113 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 31 07:38:40 crc kubenswrapper[4826]: I0131 07:38:40.303085 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-scztd"] Jan 31 07:38:40 crc kubenswrapper[4826]: I0131 07:38:40.325340 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m8854"] Jan 31 07:38:40 crc kubenswrapper[4826]: I0131 07:38:40.412837 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"f56458d0-b1ed-4710-9c60-937189ea61d7","Type":"ContainerStarted","Data":"adf8390f4616424fbff2aed3902fdff0a1bc429ab8f6eede2ffefe28514f52fb"} Jan 31 07:38:40 crc kubenswrapper[4826]: I0131 07:38:40.420873 4826 generic.go:334] "Generic (PLEG): container finished" podID="2d9ec5ea-b382-4dfa-aa85-204a2b376589" containerID="b033419430fa0c588df2298c36a12a2e580986e968e0212c6a5c83c83c37be73" exitCode=0 Jan 31 07:38:40 crc kubenswrapper[4826]: I0131 07:38:40.421020 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"2d9ec5ea-b382-4dfa-aa85-204a2b376589","Type":"ContainerDied","Data":"b033419430fa0c588df2298c36a12a2e580986e968e0212c6a5c83c83c37be73"} Jan 31 07:38:40 crc kubenswrapper[4826]: I0131 07:38:40.424365 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m8854" event={"ID":"f7624cfd-9296-45c5-86a9-2344eb6f976e","Type":"ContainerStarted","Data":"a839334c9da6441c321854f37d624bdcfc88d9779052529309cb593e86877787"} Jan 31 07:38:40 crc kubenswrapper[4826]: I0131 07:38:40.427651 4826 generic.go:334] "Generic (PLEG): container finished" podID="1fa7f5cc-ef29-4e91-9419-2d49284c4c98" containerID="4d51e1dafeb2852e7594de977f785507e83a0db62faec389722932621c37f67b" exitCode=0 Jan 31 07:38:40 crc kubenswrapper[4826]: I0131 07:38:40.428814 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5njx" 
event={"ID":"1fa7f5cc-ef29-4e91-9419-2d49284c4c98","Type":"ContainerDied","Data":"4d51e1dafeb2852e7594de977f785507e83a0db62faec389722932621c37f67b"} Jan 31 07:38:40 crc kubenswrapper[4826]: I0131 07:38:40.428846 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5njx" event={"ID":"1fa7f5cc-ef29-4e91-9419-2d49284c4c98","Type":"ContainerStarted","Data":"9a7afd42458735c9bb6de2fd09d57e2b0b1beb8b2f475cf8e092b0ea4ef54929"} Jan 31 07:38:40 crc kubenswrapper[4826]: I0131 07:38:40.429661 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmtfb\" (UniqueName: \"kubernetes.io/projected/f7510fe8-16e2-4641-8d16-18b8a1387106-kube-api-access-kmtfb\") pod \"redhat-operators-scztd\" (UID: \"f7510fe8-16e2-4641-8d16-18b8a1387106\") " pod="openshift-marketplace/redhat-operators-scztd" Jan 31 07:38:40 crc kubenswrapper[4826]: I0131 07:38:40.429686 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7510fe8-16e2-4641-8d16-18b8a1387106-utilities\") pod \"redhat-operators-scztd\" (UID: \"f7510fe8-16e2-4641-8d16-18b8a1387106\") " pod="openshift-marketplace/redhat-operators-scztd" Jan 31 07:38:40 crc kubenswrapper[4826]: I0131 07:38:40.429706 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7510fe8-16e2-4641-8d16-18b8a1387106-catalog-content\") pod \"redhat-operators-scztd\" (UID: \"f7510fe8-16e2-4641-8d16-18b8a1387106\") " pod="openshift-marketplace/redhat-operators-scztd" Jan 31 07:38:40 crc kubenswrapper[4826]: I0131 07:38:40.531524 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmtfb\" (UniqueName: \"kubernetes.io/projected/f7510fe8-16e2-4641-8d16-18b8a1387106-kube-api-access-kmtfb\") pod \"redhat-operators-scztd\" (UID: \"f7510fe8-16e2-4641-8d16-18b8a1387106\") " pod="openshift-marketplace/redhat-operators-scztd" Jan 31 07:38:40 crc kubenswrapper[4826]: I0131 07:38:40.531575 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7510fe8-16e2-4641-8d16-18b8a1387106-utilities\") pod \"redhat-operators-scztd\" (UID: \"f7510fe8-16e2-4641-8d16-18b8a1387106\") " pod="openshift-marketplace/redhat-operators-scztd" Jan 31 07:38:40 crc kubenswrapper[4826]: I0131 07:38:40.531618 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7510fe8-16e2-4641-8d16-18b8a1387106-catalog-content\") pod \"redhat-operators-scztd\" (UID: \"f7510fe8-16e2-4641-8d16-18b8a1387106\") " pod="openshift-marketplace/redhat-operators-scztd" Jan 31 07:38:40 crc kubenswrapper[4826]: I0131 07:38:40.533089 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7510fe8-16e2-4641-8d16-18b8a1387106-utilities\") pod \"redhat-operators-scztd\" (UID: \"f7510fe8-16e2-4641-8d16-18b8a1387106\") " pod="openshift-marketplace/redhat-operators-scztd" Jan 31 07:38:40 crc kubenswrapper[4826]: I0131 07:38:40.533602 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-7bmfc" Jan 31 07:38:40 crc kubenswrapper[4826]: I0131 07:38:40.534074 4826 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7510fe8-16e2-4641-8d16-18b8a1387106-catalog-content\") pod \"redhat-operators-scztd\" (UID: \"f7510fe8-16e2-4641-8d16-18b8a1387106\") " pod="openshift-marketplace/redhat-operators-scztd" Jan 31 07:38:40 crc kubenswrapper[4826]: I0131 07:38:40.539528 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-7bmfc" Jan 31 07:38:40 crc kubenswrapper[4826]: I0131 07:38:40.562159 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmtfb\" (UniqueName: \"kubernetes.io/projected/f7510fe8-16e2-4641-8d16-18b8a1387106-kube-api-access-kmtfb\") pod \"redhat-operators-scztd\" (UID: \"f7510fe8-16e2-4641-8d16-18b8a1387106\") " pod="openshift-marketplace/redhat-operators-scztd" Jan 31 07:38:40 crc kubenswrapper[4826]: I0131 07:38:40.619585 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-scztd" Jan 31 07:38:40 crc kubenswrapper[4826]: I0131 07:38:40.689574 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fhkwh"] Jan 31 07:38:40 crc kubenswrapper[4826]: I0131 07:38:40.690623 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fhkwh" Jan 31 07:38:40 crc kubenswrapper[4826]: I0131 07:38:40.702138 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fhkwh"] Jan 31 07:38:40 crc kubenswrapper[4826]: I0131 07:38:40.735281 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jlqb\" (UniqueName: \"kubernetes.io/projected/2f3a7050-24a4-4f99-ac44-c822d68d5ba5-kube-api-access-2jlqb\") pod \"redhat-operators-fhkwh\" (UID: \"2f3a7050-24a4-4f99-ac44-c822d68d5ba5\") " pod="openshift-marketplace/redhat-operators-fhkwh" Jan 31 07:38:40 crc kubenswrapper[4826]: I0131 07:38:40.735325 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f3a7050-24a4-4f99-ac44-c822d68d5ba5-utilities\") pod \"redhat-operators-fhkwh\" (UID: \"2f3a7050-24a4-4f99-ac44-c822d68d5ba5\") " pod="openshift-marketplace/redhat-operators-fhkwh" Jan 31 07:38:40 crc kubenswrapper[4826]: I0131 07:38:40.735381 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f3a7050-24a4-4f99-ac44-c822d68d5ba5-catalog-content\") pod \"redhat-operators-fhkwh\" (UID: \"2f3a7050-24a4-4f99-ac44-c822d68d5ba5\") " pod="openshift-marketplace/redhat-operators-fhkwh" Jan 31 07:38:40 crc kubenswrapper[4826]: I0131 07:38:40.836627 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jlqb\" (UniqueName: \"kubernetes.io/projected/2f3a7050-24a4-4f99-ac44-c822d68d5ba5-kube-api-access-2jlqb\") pod \"redhat-operators-fhkwh\" (UID: \"2f3a7050-24a4-4f99-ac44-c822d68d5ba5\") " pod="openshift-marketplace/redhat-operators-fhkwh" Jan 31 07:38:40 crc kubenswrapper[4826]: I0131 07:38:40.837046 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f3a7050-24a4-4f99-ac44-c822d68d5ba5-utilities\") pod \"redhat-operators-fhkwh\" (UID: \"2f3a7050-24a4-4f99-ac44-c822d68d5ba5\") " 
pod="openshift-marketplace/redhat-operators-fhkwh" Jan 31 07:38:40 crc kubenswrapper[4826]: I0131 07:38:40.837107 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f3a7050-24a4-4f99-ac44-c822d68d5ba5-catalog-content\") pod \"redhat-operators-fhkwh\" (UID: \"2f3a7050-24a4-4f99-ac44-c822d68d5ba5\") " pod="openshift-marketplace/redhat-operators-fhkwh" Jan 31 07:38:40 crc kubenswrapper[4826]: I0131 07:38:40.837716 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f3a7050-24a4-4f99-ac44-c822d68d5ba5-catalog-content\") pod \"redhat-operators-fhkwh\" (UID: \"2f3a7050-24a4-4f99-ac44-c822d68d5ba5\") " pod="openshift-marketplace/redhat-operators-fhkwh" Jan 31 07:38:40 crc kubenswrapper[4826]: I0131 07:38:40.838326 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f3a7050-24a4-4f99-ac44-c822d68d5ba5-utilities\") pod \"redhat-operators-fhkwh\" (UID: \"2f3a7050-24a4-4f99-ac44-c822d68d5ba5\") " pod="openshift-marketplace/redhat-operators-fhkwh" Jan 31 07:38:40 crc kubenswrapper[4826]: I0131 07:38:40.856919 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-hkw8j" Jan 31 07:38:40 crc kubenswrapper[4826]: I0131 07:38:40.856961 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-hkw8j" Jan 31 07:38:40 crc kubenswrapper[4826]: I0131 07:38:40.859855 4826 patch_prober.go:28] interesting pod/console-f9d7485db-hkw8j container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Jan 31 07:38:40 crc kubenswrapper[4826]: I0131 07:38:40.859892 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-hkw8j" podUID="1482e43a-84a4-42ed-a605-37cc519dd5ef" containerName="console" probeResult="failure" output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" Jan 31 07:38:40 crc kubenswrapper[4826]: I0131 07:38:40.865035 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jlqb\" (UniqueName: \"kubernetes.io/projected/2f3a7050-24a4-4f99-ac44-c822d68d5ba5-kube-api-access-2jlqb\") pod \"redhat-operators-fhkwh\" (UID: \"2f3a7050-24a4-4f99-ac44-c822d68d5ba5\") " pod="openshift-marketplace/redhat-operators-fhkwh" Jan 31 07:38:41 crc kubenswrapper[4826]: I0131 07:38:41.058660 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fhkwh" Jan 31 07:38:41 crc kubenswrapper[4826]: I0131 07:38:41.099761 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-scztd"] Jan 31 07:38:41 crc kubenswrapper[4826]: W0131 07:38:41.228424 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7510fe8_16e2_4641_8d16_18b8a1387106.slice/crio-8cf0f7852f41ed42987cd0551d148a045aed15d56228a3ea043a283da48fad2c WatchSource:0}: Error finding container 8cf0f7852f41ed42987cd0551d148a045aed15d56228a3ea043a283da48fad2c: Status 404 returned error can't find the container with id 8cf0f7852f41ed42987cd0551d148a045aed15d56228a3ea043a283da48fad2c Jan 31 07:38:41 crc kubenswrapper[4826]: I0131 07:38:41.258373 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-qwd92" Jan 31 07:38:41 crc kubenswrapper[4826]: I0131 07:38:41.306851 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-xz6vc" Jan 31 07:38:41 crc kubenswrapper[4826]: I0131 07:38:41.470363 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-scztd" event={"ID":"f7510fe8-16e2-4641-8d16-18b8a1387106","Type":"ContainerStarted","Data":"8cf0f7852f41ed42987cd0551d148a045aed15d56228a3ea043a283da48fad2c"} Jan 31 07:38:41 crc kubenswrapper[4826]: I0131 07:38:41.480784 4826 generic.go:334] "Generic (PLEG): container finished" podID="f7624cfd-9296-45c5-86a9-2344eb6f976e" containerID="8a731821293f0187cb4e974d8d114454e590d0904f508b60ff06f97797f2a686" exitCode=0 Jan 31 07:38:41 crc kubenswrapper[4826]: I0131 07:38:41.480861 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m8854" event={"ID":"f7624cfd-9296-45c5-86a9-2344eb6f976e","Type":"ContainerDied","Data":"8a731821293f0187cb4e974d8d114454e590d0904f508b60ff06f97797f2a686"} Jan 31 07:38:41 crc kubenswrapper[4826]: I0131 07:38:41.487348 4826 generic.go:334] "Generic (PLEG): container finished" podID="f56458d0-b1ed-4710-9c60-937189ea61d7" containerID="568a5e729051751b411732f405ad072de4e4d6d6808cb885dabe9a618805eec6" exitCode=0 Jan 31 07:38:41 crc kubenswrapper[4826]: I0131 07:38:41.488161 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"f56458d0-b1ed-4710-9c60-937189ea61d7","Type":"ContainerDied","Data":"568a5e729051751b411732f405ad072de4e4d6d6808cb885dabe9a618805eec6"} Jan 31 07:38:41 crc kubenswrapper[4826]: I0131 07:38:41.491665 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-7bmfc" Jan 31 07:38:41 crc kubenswrapper[4826]: I0131 07:38:41.492957 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fhkwh"] Jan 31 07:38:41 crc kubenswrapper[4826]: W0131 07:38:41.568152 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f3a7050_24a4_4f99_ac44_c822d68d5ba5.slice/crio-3ff6611c2538c2e47fffecb92d713e87bf58f5bb61b0352b995e5474e57b5328 WatchSource:0}: Error finding container 3ff6611c2538c2e47fffecb92d713e87bf58f5bb61b0352b995e5474e57b5328: Status 404 returned error can't find the container with id 
3ff6611c2538c2e47fffecb92d713e87bf58f5bb61b0352b995e5474e57b5328 Jan 31 07:38:41 crc kubenswrapper[4826]: I0131 07:38:41.833908 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 07:38:41 crc kubenswrapper[4826]: I0131 07:38:41.961932 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d9ec5ea-b382-4dfa-aa85-204a2b376589-kubelet-dir\") pod \"2d9ec5ea-b382-4dfa-aa85-204a2b376589\" (UID: \"2d9ec5ea-b382-4dfa-aa85-204a2b376589\") " Jan 31 07:38:41 crc kubenswrapper[4826]: I0131 07:38:41.962163 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d9ec5ea-b382-4dfa-aa85-204a2b376589-kube-api-access\") pod \"2d9ec5ea-b382-4dfa-aa85-204a2b376589\" (UID: \"2d9ec5ea-b382-4dfa-aa85-204a2b376589\") " Jan 31 07:38:41 crc kubenswrapper[4826]: I0131 07:38:41.962163 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d9ec5ea-b382-4dfa-aa85-204a2b376589-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2d9ec5ea-b382-4dfa-aa85-204a2b376589" (UID: "2d9ec5ea-b382-4dfa-aa85-204a2b376589"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:38:41 crc kubenswrapper[4826]: I0131 07:38:41.962402 4826 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d9ec5ea-b382-4dfa-aa85-204a2b376589-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 31 07:38:41 crc kubenswrapper[4826]: I0131 07:38:41.972508 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d9ec5ea-b382-4dfa-aa85-204a2b376589-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2d9ec5ea-b382-4dfa-aa85-204a2b376589" (UID: "2d9ec5ea-b382-4dfa-aa85-204a2b376589"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:38:42 crc kubenswrapper[4826]: I0131 07:38:42.064390 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d9ec5ea-b382-4dfa-aa85-204a2b376589-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 07:38:42 crc kubenswrapper[4826]: I0131 07:38:42.526896 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"2d9ec5ea-b382-4dfa-aa85-204a2b376589","Type":"ContainerDied","Data":"9ebefce83a65e3cbbb7f0f787270e2837e346e9d0e6b51c7958bc76e27ae3a6e"} Jan 31 07:38:42 crc kubenswrapper[4826]: I0131 07:38:42.526940 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ebefce83a65e3cbbb7f0f787270e2837e346e9d0e6b51c7958bc76e27ae3a6e" Jan 31 07:38:42 crc kubenswrapper[4826]: I0131 07:38:42.526958 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 07:38:42 crc kubenswrapper[4826]: I0131 07:38:42.593629 4826 generic.go:334] "Generic (PLEG): container finished" podID="f7510fe8-16e2-4641-8d16-18b8a1387106" containerID="3f796a40f8404a4e3cfd1a42a11c47ac3af708f2aa97f4788e06baf2fcac24a4" exitCode=0 Jan 31 07:38:42 crc kubenswrapper[4826]: I0131 07:38:42.593742 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-scztd" event={"ID":"f7510fe8-16e2-4641-8d16-18b8a1387106","Type":"ContainerDied","Data":"3f796a40f8404a4e3cfd1a42a11c47ac3af708f2aa97f4788e06baf2fcac24a4"} Jan 31 07:38:42 crc kubenswrapper[4826]: I0131 07:38:42.609089 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fhkwh" event={"ID":"2f3a7050-24a4-4f99-ac44-c822d68d5ba5","Type":"ContainerStarted","Data":"ec7356c327725897efdd4b63db1e27506bf1f4244d2b6ce0a597e4df050c5abb"} Jan 31 07:38:42 crc kubenswrapper[4826]: I0131 07:38:42.609128 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fhkwh" event={"ID":"2f3a7050-24a4-4f99-ac44-c822d68d5ba5","Type":"ContainerStarted","Data":"3ff6611c2538c2e47fffecb92d713e87bf58f5bb61b0352b995e5474e57b5328"} Jan 31 07:38:43 crc kubenswrapper[4826]: I0131 07:38:43.110518 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 07:38:43 crc kubenswrapper[4826]: I0131 07:38:43.184422 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f56458d0-b1ed-4710-9c60-937189ea61d7-kubelet-dir\") pod \"f56458d0-b1ed-4710-9c60-937189ea61d7\" (UID: \"f56458d0-b1ed-4710-9c60-937189ea61d7\") " Jan 31 07:38:43 crc kubenswrapper[4826]: I0131 07:38:43.184888 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f56458d0-b1ed-4710-9c60-937189ea61d7-kube-api-access\") pod \"f56458d0-b1ed-4710-9c60-937189ea61d7\" (UID: \"f56458d0-b1ed-4710-9c60-937189ea61d7\") " Jan 31 07:38:43 crc kubenswrapper[4826]: I0131 07:38:43.186150 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f56458d0-b1ed-4710-9c60-937189ea61d7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f56458d0-b1ed-4710-9c60-937189ea61d7" (UID: "f56458d0-b1ed-4710-9c60-937189ea61d7"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:38:43 crc kubenswrapper[4826]: I0131 07:38:43.198591 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f56458d0-b1ed-4710-9c60-937189ea61d7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f56458d0-b1ed-4710-9c60-937189ea61d7" (UID: "f56458d0-b1ed-4710-9c60-937189ea61d7"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:38:43 crc kubenswrapper[4826]: I0131 07:38:43.286424 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f56458d0-b1ed-4710-9c60-937189ea61d7-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 07:38:43 crc kubenswrapper[4826]: I0131 07:38:43.286453 4826 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f56458d0-b1ed-4710-9c60-937189ea61d7-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 31 07:38:43 crc kubenswrapper[4826]: I0131 07:38:43.633525 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"f56458d0-b1ed-4710-9c60-937189ea61d7","Type":"ContainerDied","Data":"adf8390f4616424fbff2aed3902fdff0a1bc429ab8f6eede2ffefe28514f52fb"} Jan 31 07:38:43 crc kubenswrapper[4826]: I0131 07:38:43.633604 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adf8390f4616424fbff2aed3902fdff0a1bc429ab8f6eede2ffefe28514f52fb" Jan 31 07:38:43 crc kubenswrapper[4826]: I0131 07:38:43.633567 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 07:38:43 crc kubenswrapper[4826]: I0131 07:38:43.637632 4826 generic.go:334] "Generic (PLEG): container finished" podID="2f3a7050-24a4-4f99-ac44-c822d68d5ba5" containerID="ec7356c327725897efdd4b63db1e27506bf1f4244d2b6ce0a597e4df050c5abb" exitCode=0 Jan 31 07:38:43 crc kubenswrapper[4826]: I0131 07:38:43.637661 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fhkwh" event={"ID":"2f3a7050-24a4-4f99-ac44-c822d68d5ba5","Type":"ContainerDied","Data":"ec7356c327725897efdd4b63db1e27506bf1f4244d2b6ce0a597e4df050c5abb"} Jan 31 07:38:45 crc kubenswrapper[4826]: I0131 07:38:45.337954 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:38:46 crc kubenswrapper[4826]: I0131 07:38:46.351207 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-p6c7j" Jan 31 07:38:49 crc kubenswrapper[4826]: I0131 07:38:49.584939 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-bp2zc" Jan 31 07:38:50 crc kubenswrapper[4826]: I0131 07:38:50.881451 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-hkw8j" Jan 31 07:38:50 crc kubenswrapper[4826]: I0131 07:38:50.885386 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-hkw8j" Jan 31 07:38:52 crc kubenswrapper[4826]: I0131 07:38:52.025723 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/251ad51e-c383-4684-bfdb-2b9ce8098cc6-metrics-certs\") pod \"network-metrics-daemon-qrw7j\" (UID: \"251ad51e-c383-4684-bfdb-2b9ce8098cc6\") " pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:38:52 crc kubenswrapper[4826]: I0131 07:38:52.031993 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/251ad51e-c383-4684-bfdb-2b9ce8098cc6-metrics-certs\") pod \"network-metrics-daemon-qrw7j\" (UID: \"251ad51e-c383-4684-bfdb-2b9ce8098cc6\") " 
pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:38:52 crc kubenswrapper[4826]: I0131 07:38:52.243796 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrw7j" Jan 31 07:38:57 crc kubenswrapper[4826]: I0131 07:38:57.377724 4826 patch_prober.go:28] interesting pod/machine-config-daemon-8v6ng container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 07:38:57 crc kubenswrapper[4826]: I0131 07:38:57.378165 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 07:38:57 crc kubenswrapper[4826]: I0131 07:38:57.459147 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9x5mr"] Jan 31 07:38:57 crc kubenswrapper[4826]: I0131 07:38:57.459483 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-9x5mr" podUID="2ba35eb2-6b7b-45bc-827a-b7c3b5266073" containerName="controller-manager" containerID="cri-o://58f5a857e9b00c52b31d7e28cbce2cce1678920e9484e4cdf390581d0bdcd2ce" gracePeriod=30 Jan 31 07:38:57 crc kubenswrapper[4826]: I0131 07:38:57.480698 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvkpm"] Jan 31 07:38:57 crc kubenswrapper[4826]: I0131 07:38:57.481058 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvkpm" podUID="138c4519-aea9-40d6-a633-afe9fe199a6d" containerName="route-controller-manager" containerID="cri-o://94cb17d7b0aefe4b2c18328732cb35b0bec0085e553f5f9d565aa9568a2f6d95" gracePeriod=30 Jan 31 07:38:58 crc kubenswrapper[4826]: I0131 07:38:58.647451 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:38:59 crc kubenswrapper[4826]: I0131 07:38:59.574252 4826 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-9x5mr container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 31 07:38:59 crc kubenswrapper[4826]: I0131 07:38:59.574917 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-9x5mr" podUID="2ba35eb2-6b7b-45bc-827a-b7c3b5266073" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 31 07:38:59 crc kubenswrapper[4826]: I0131 07:38:59.594398 4826 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-xvkpm container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 31 07:38:59 crc kubenswrapper[4826]: 
I0131 07:38:59.594499 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvkpm" podUID="138c4519-aea9-40d6-a633-afe9fe199a6d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 31 07:38:59 crc kubenswrapper[4826]: I0131 07:38:59.814876 4826 generic.go:334] "Generic (PLEG): container finished" podID="2ba35eb2-6b7b-45bc-827a-b7c3b5266073" containerID="58f5a857e9b00c52b31d7e28cbce2cce1678920e9484e4cdf390581d0bdcd2ce" exitCode=0 Jan 31 07:38:59 crc kubenswrapper[4826]: I0131 07:38:59.814946 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9x5mr" event={"ID":"2ba35eb2-6b7b-45bc-827a-b7c3b5266073","Type":"ContainerDied","Data":"58f5a857e9b00c52b31d7e28cbce2cce1678920e9484e4cdf390581d0bdcd2ce"} Jan 31 07:38:59 crc kubenswrapper[4826]: I0131 07:38:59.816074 4826 generic.go:334] "Generic (PLEG): container finished" podID="138c4519-aea9-40d6-a633-afe9fe199a6d" containerID="94cb17d7b0aefe4b2c18328732cb35b0bec0085e553f5f9d565aa9568a2f6d95" exitCode=0 Jan 31 07:38:59 crc kubenswrapper[4826]: I0131 07:38:59.816098 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvkpm" event={"ID":"138c4519-aea9-40d6-a633-afe9fe199a6d","Type":"ContainerDied","Data":"94cb17d7b0aefe4b2c18328732cb35b0bec0085e553f5f9d565aa9568a2f6d95"} Jan 31 07:39:07 crc kubenswrapper[4826]: I0131 07:39:07.134034 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 07:39:10 crc kubenswrapper[4826]: I0131 07:39:10.574539 4826 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-9x5mr container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 07:39:10 crc kubenswrapper[4826]: I0131 07:39:10.575399 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-9x5mr" podUID="2ba35eb2-6b7b-45bc-827a-b7c3b5266073" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 07:39:10 crc kubenswrapper[4826]: I0131 07:39:10.593596 4826 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-xvkpm container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 07:39:10 crc kubenswrapper[4826]: I0131 07:39:10.593719 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvkpm" podUID="138c4519-aea9-40d6-a633-afe9fe199a6d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 07:39:11 crc 
kubenswrapper[4826]: I0131 07:39:11.018353 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-64lqw" Jan 31 07:39:13 crc kubenswrapper[4826]: E0131 07:39:13.602900 4826 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 31 07:39:13 crc kubenswrapper[4826]: E0131 07:39:13.603713 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2jlqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-fhkwh_openshift-marketplace(2f3a7050-24a4-4f99-ac44-c822d68d5ba5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 07:39:13 crc kubenswrapper[4826]: E0131 07:39:13.604929 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-fhkwh" podUID="2f3a7050-24a4-4f99-ac44-c822d68d5ba5" Jan 31 07:39:13 crc kubenswrapper[4826]: E0131 07:39:13.629242 4826 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 31 07:39:13 crc kubenswrapper[4826]: E0131 07:39:13.629427 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kmtfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-scztd_openshift-marketplace(f7510fe8-16e2-4641-8d16-18b8a1387106): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 07:39:13 crc kubenswrapper[4826]: E0131 07:39:13.630684 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-scztd" podUID="f7510fe8-16e2-4641-8d16-18b8a1387106" Jan 31 07:39:14 crc kubenswrapper[4826]: E0131 07:39:14.990165 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-fhkwh" podUID="2f3a7050-24a4-4f99-ac44-c822d68d5ba5" Jan 31 07:39:14 crc kubenswrapper[4826]: E0131 07:39:14.990688 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-scztd" podUID="f7510fe8-16e2-4641-8d16-18b8a1387106" Jan 31 07:39:15 crc kubenswrapper[4826]: E0131 07:39:15.055260 4826 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 31 07:39:15 crc kubenswrapper[4826]: E0131 07:39:15.055457 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5btnw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-snchp_openshift-marketplace(3b28978e-d7a9-41b2-998a-e4a3cd62e236): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 07:39:15 crc kubenswrapper[4826]: E0131 07:39:15.056669 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-snchp" podUID="3b28978e-d7a9-41b2-998a-e4a3cd62e236" Jan 31 07:39:16 crc kubenswrapper[4826]: E0131 07:39:16.333948 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-snchp" podUID="3b28978e-d7a9-41b2-998a-e4a3cd62e236" Jan 31 07:39:16 crc kubenswrapper[4826]: E0131 07:39:16.411493 4826 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 31 07:39:16 crc kubenswrapper[4826]: E0131 07:39:16.412011 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tcn9h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-m8854_openshift-marketplace(f7624cfd-9296-45c5-86a9-2344eb6f976e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 07:39:16 crc kubenswrapper[4826]: E0131 07:39:16.413123 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-m8854" podUID="f7624cfd-9296-45c5-86a9-2344eb6f976e" Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.414257 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9x5mr" Jan 31 07:39:16 crc kubenswrapper[4826]: E0131 07:39:16.418431 4826 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 31 07:39:16 crc kubenswrapper[4826]: E0131 07:39:16.418588 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b52hz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-czvwd_openshift-marketplace(0108e13b-6622-4b3c-a0b3-7e91572001aa): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 07:39:16 crc kubenswrapper[4826]: E0131 07:39:16.422161 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-czvwd" podUID="0108e13b-6622-4b3c-a0b3-7e91572001aa" Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.422575 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52nkr\" (UniqueName: \"kubernetes.io/projected/2ba35eb2-6b7b-45bc-827a-b7c3b5266073-kube-api-access-52nkr\") pod \"2ba35eb2-6b7b-45bc-827a-b7c3b5266073\" (UID: \"2ba35eb2-6b7b-45bc-827a-b7c3b5266073\") " Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.422610 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2ba35eb2-6b7b-45bc-827a-b7c3b5266073-proxy-ca-bundles\") pod \"2ba35eb2-6b7b-45bc-827a-b7c3b5266073\" (UID: \"2ba35eb2-6b7b-45bc-827a-b7c3b5266073\") " Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.422628 4826 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ba35eb2-6b7b-45bc-827a-b7c3b5266073-client-ca\") pod \"2ba35eb2-6b7b-45bc-827a-b7c3b5266073\" (UID: \"2ba35eb2-6b7b-45bc-827a-b7c3b5266073\") " Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.422725 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ba35eb2-6b7b-45bc-827a-b7c3b5266073-serving-cert\") pod \"2ba35eb2-6b7b-45bc-827a-b7c3b5266073\" (UID: \"2ba35eb2-6b7b-45bc-827a-b7c3b5266073\") " Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.422816 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ba35eb2-6b7b-45bc-827a-b7c3b5266073-config\") pod \"2ba35eb2-6b7b-45bc-827a-b7c3b5266073\" (UID: \"2ba35eb2-6b7b-45bc-827a-b7c3b5266073\") " Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.423518 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ba35eb2-6b7b-45bc-827a-b7c3b5266073-client-ca" (OuterVolumeSpecName: "client-ca") pod "2ba35eb2-6b7b-45bc-827a-b7c3b5266073" (UID: "2ba35eb2-6b7b-45bc-827a-b7c3b5266073"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.423674 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ba35eb2-6b7b-45bc-827a-b7c3b5266073-config" (OuterVolumeSpecName: "config") pod "2ba35eb2-6b7b-45bc-827a-b7c3b5266073" (UID: "2ba35eb2-6b7b-45bc-827a-b7c3b5266073"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.424009 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ba35eb2-6b7b-45bc-827a-b7c3b5266073-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2ba35eb2-6b7b-45bc-827a-b7c3b5266073" (UID: "2ba35eb2-6b7b-45bc-827a-b7c3b5266073"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.428570 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ba35eb2-6b7b-45bc-827a-b7c3b5266073-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2ba35eb2-6b7b-45bc-827a-b7c3b5266073" (UID: "2ba35eb2-6b7b-45bc-827a-b7c3b5266073"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.434205 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ba35eb2-6b7b-45bc-827a-b7c3b5266073-kube-api-access-52nkr" (OuterVolumeSpecName: "kube-api-access-52nkr") pod "2ba35eb2-6b7b-45bc-827a-b7c3b5266073" (UID: "2ba35eb2-6b7b-45bc-827a-b7c3b5266073"). InnerVolumeSpecName "kube-api-access-52nkr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.434786 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvkpm" Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.446577 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5d76f5f7f7-5dnhz"] Jan 31 07:39:16 crc kubenswrapper[4826]: E0131 07:39:16.446868 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d9ec5ea-b382-4dfa-aa85-204a2b376589" containerName="pruner" Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.446880 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d9ec5ea-b382-4dfa-aa85-204a2b376589" containerName="pruner" Jan 31 07:39:16 crc kubenswrapper[4826]: E0131 07:39:16.446891 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="138c4519-aea9-40d6-a633-afe9fe199a6d" containerName="route-controller-manager" Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.446897 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="138c4519-aea9-40d6-a633-afe9fe199a6d" containerName="route-controller-manager" Jan 31 07:39:16 crc kubenswrapper[4826]: E0131 07:39:16.446934 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ba35eb2-6b7b-45bc-827a-b7c3b5266073" containerName="controller-manager" Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.446941 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ba35eb2-6b7b-45bc-827a-b7c3b5266073" containerName="controller-manager" Jan 31 07:39:16 crc kubenswrapper[4826]: E0131 07:39:16.446950 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f56458d0-b1ed-4710-9c60-937189ea61d7" containerName="pruner" Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.446956 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f56458d0-b1ed-4710-9c60-937189ea61d7" containerName="pruner" Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.447103 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="138c4519-aea9-40d6-a633-afe9fe199a6d" containerName="route-controller-manager" Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.447115 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="f56458d0-b1ed-4710-9c60-937189ea61d7" containerName="pruner" Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.447122 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d9ec5ea-b382-4dfa-aa85-204a2b376589" containerName="pruner" Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.447149 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ba35eb2-6b7b-45bc-827a-b7c3b5266073" containerName="controller-manager" Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.447587 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5d76f5f7f7-5dnhz" Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.463177 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d76f5f7f7-5dnhz"] Jan 31 07:39:16 crc kubenswrapper[4826]: E0131 07:39:16.466910 4826 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 31 07:39:16 crc kubenswrapper[4826]: E0131 07:39:16.467083 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zfjcp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-w2pp4_openshift-marketplace(04028e85-fcfa-4463-8279-00c5018bde40): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 07:39:16 crc kubenswrapper[4826]: E0131 07:39:16.469047 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-w2pp4" podUID="04028e85-fcfa-4463-8279-00c5018bde40" Jan 31 07:39:16 crc kubenswrapper[4826]: E0131 07:39:16.511543 4826 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 31 07:39:16 crc kubenswrapper[4826]: E0131 07:39:16.511704 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog 
--cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8f2mz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-2zbk2_openshift-marketplace(ad75cbef-01f1-46ef-bfe2-d1e864e2efe1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 07:39:16 crc kubenswrapper[4826]: E0131 07:39:16.513371 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-2zbk2" podUID="ad75cbef-01f1-46ef-bfe2-d1e864e2efe1" Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.523345 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/138c4519-aea9-40d6-a633-afe9fe199a6d-serving-cert\") pod \"138c4519-aea9-40d6-a633-afe9fe199a6d\" (UID: \"138c4519-aea9-40d6-a633-afe9fe199a6d\") " Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.523390 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dst4\" (UniqueName: \"kubernetes.io/projected/138c4519-aea9-40d6-a633-afe9fe199a6d-kube-api-access-9dst4\") pod \"138c4519-aea9-40d6-a633-afe9fe199a6d\" (UID: \"138c4519-aea9-40d6-a633-afe9fe199a6d\") " Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.523416 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/138c4519-aea9-40d6-a633-afe9fe199a6d-client-ca\") pod \"138c4519-aea9-40d6-a633-afe9fe199a6d\" (UID: \"138c4519-aea9-40d6-a633-afe9fe199a6d\") " Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.523453 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/138c4519-aea9-40d6-a633-afe9fe199a6d-config\") pod \"138c4519-aea9-40d6-a633-afe9fe199a6d\" (UID: \"138c4519-aea9-40d6-a633-afe9fe199a6d\") " Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.523629 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/13a2ffbf-8e57-4b66-98ca-5e3d9c39c953-proxy-ca-bundles\") pod \"controller-manager-5d76f5f7f7-5dnhz\" (UID: \"13a2ffbf-8e57-4b66-98ca-5e3d9c39c953\") " pod="openshift-controller-manager/controller-manager-5d76f5f7f7-5dnhz" Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.523654 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/13a2ffbf-8e57-4b66-98ca-5e3d9c39c953-client-ca\") pod \"controller-manager-5d76f5f7f7-5dnhz\" (UID: \"13a2ffbf-8e57-4b66-98ca-5e3d9c39c953\") " pod="openshift-controller-manager/controller-manager-5d76f5f7f7-5dnhz" Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.523724 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13a2ffbf-8e57-4b66-98ca-5e3d9c39c953-config\") pod \"controller-manager-5d76f5f7f7-5dnhz\" (UID: \"13a2ffbf-8e57-4b66-98ca-5e3d9c39c953\") " pod="openshift-controller-manager/controller-manager-5d76f5f7f7-5dnhz" Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.523746 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13a2ffbf-8e57-4b66-98ca-5e3d9c39c953-serving-cert\") pod \"controller-manager-5d76f5f7f7-5dnhz\" (UID: \"13a2ffbf-8e57-4b66-98ca-5e3d9c39c953\") " pod="openshift-controller-manager/controller-manager-5d76f5f7f7-5dnhz" Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.523787 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n2kp\" (UniqueName: \"kubernetes.io/projected/13a2ffbf-8e57-4b66-98ca-5e3d9c39c953-kube-api-access-4n2kp\") pod \"controller-manager-5d76f5f7f7-5dnhz\" (UID: \"13a2ffbf-8e57-4b66-98ca-5e3d9c39c953\") " pod="openshift-controller-manager/controller-manager-5d76f5f7f7-5dnhz" Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.523837 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ba35eb2-6b7b-45bc-827a-b7c3b5266073-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.523851 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52nkr\" (UniqueName: \"kubernetes.io/projected/2ba35eb2-6b7b-45bc-827a-b7c3b5266073-kube-api-access-52nkr\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.523865 4826 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2ba35eb2-6b7b-45bc-827a-b7c3b5266073-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.523876 4826 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ba35eb2-6b7b-45bc-827a-b7c3b5266073-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.523886 4826 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ba35eb2-6b7b-45bc-827a-b7c3b5266073-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.525275 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/138c4519-aea9-40d6-a633-afe9fe199a6d-client-ca" (OuterVolumeSpecName: "client-ca") pod "138c4519-aea9-40d6-a633-afe9fe199a6d" (UID: "138c4519-aea9-40d6-a633-afe9fe199a6d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.525373 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/138c4519-aea9-40d6-a633-afe9fe199a6d-config" (OuterVolumeSpecName: "config") pod "138c4519-aea9-40d6-a633-afe9fe199a6d" (UID: "138c4519-aea9-40d6-a633-afe9fe199a6d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.528294 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/138c4519-aea9-40d6-a633-afe9fe199a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "138c4519-aea9-40d6-a633-afe9fe199a6d" (UID: "138c4519-aea9-40d6-a633-afe9fe199a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.530863 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/138c4519-aea9-40d6-a633-afe9fe199a6d-kube-api-access-9dst4" (OuterVolumeSpecName: "kube-api-access-9dst4") pod "138c4519-aea9-40d6-a633-afe9fe199a6d" (UID: "138c4519-aea9-40d6-a633-afe9fe199a6d"). InnerVolumeSpecName "kube-api-access-9dst4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.624389 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/13a2ffbf-8e57-4b66-98ca-5e3d9c39c953-proxy-ca-bundles\") pod \"controller-manager-5d76f5f7f7-5dnhz\" (UID: \"13a2ffbf-8e57-4b66-98ca-5e3d9c39c953\") " pod="openshift-controller-manager/controller-manager-5d76f5f7f7-5dnhz" Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.624432 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/13a2ffbf-8e57-4b66-98ca-5e3d9c39c953-client-ca\") pod \"controller-manager-5d76f5f7f7-5dnhz\" (UID: \"13a2ffbf-8e57-4b66-98ca-5e3d9c39c953\") " pod="openshift-controller-manager/controller-manager-5d76f5f7f7-5dnhz" Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.624483 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13a2ffbf-8e57-4b66-98ca-5e3d9c39c953-config\") pod \"controller-manager-5d76f5f7f7-5dnhz\" (UID: \"13a2ffbf-8e57-4b66-98ca-5e3d9c39c953\") " pod="openshift-controller-manager/controller-manager-5d76f5f7f7-5dnhz" Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.624518 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13a2ffbf-8e57-4b66-98ca-5e3d9c39c953-serving-cert\") pod \"controller-manager-5d76f5f7f7-5dnhz\" (UID: \"13a2ffbf-8e57-4b66-98ca-5e3d9c39c953\") " pod="openshift-controller-manager/controller-manager-5d76f5f7f7-5dnhz" Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.624546 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n2kp\" (UniqueName: \"kubernetes.io/projected/13a2ffbf-8e57-4b66-98ca-5e3d9c39c953-kube-api-access-4n2kp\") pod 
\"controller-manager-5d76f5f7f7-5dnhz\" (UID: \"13a2ffbf-8e57-4b66-98ca-5e3d9c39c953\") " pod="openshift-controller-manager/controller-manager-5d76f5f7f7-5dnhz" Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.624584 4826 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/138c4519-aea9-40d6-a633-afe9fe199a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.624594 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dst4\" (UniqueName: \"kubernetes.io/projected/138c4519-aea9-40d6-a633-afe9fe199a6d-kube-api-access-9dst4\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.624604 4826 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/138c4519-aea9-40d6-a633-afe9fe199a6d-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.624612 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/138c4519-aea9-40d6-a633-afe9fe199a6d-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.625557 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/13a2ffbf-8e57-4b66-98ca-5e3d9c39c953-proxy-ca-bundles\") pod \"controller-manager-5d76f5f7f7-5dnhz\" (UID: \"13a2ffbf-8e57-4b66-98ca-5e3d9c39c953\") " pod="openshift-controller-manager/controller-manager-5d76f5f7f7-5dnhz" Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.625653 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/13a2ffbf-8e57-4b66-98ca-5e3d9c39c953-client-ca\") pod \"controller-manager-5d76f5f7f7-5dnhz\" (UID: \"13a2ffbf-8e57-4b66-98ca-5e3d9c39c953\") " pod="openshift-controller-manager/controller-manager-5d76f5f7f7-5dnhz" Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.626953 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13a2ffbf-8e57-4b66-98ca-5e3d9c39c953-config\") pod \"controller-manager-5d76f5f7f7-5dnhz\" (UID: \"13a2ffbf-8e57-4b66-98ca-5e3d9c39c953\") " pod="openshift-controller-manager/controller-manager-5d76f5f7f7-5dnhz" Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.632294 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13a2ffbf-8e57-4b66-98ca-5e3d9c39c953-serving-cert\") pod \"controller-manager-5d76f5f7f7-5dnhz\" (UID: \"13a2ffbf-8e57-4b66-98ca-5e3d9c39c953\") " pod="openshift-controller-manager/controller-manager-5d76f5f7f7-5dnhz" Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.640134 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n2kp\" (UniqueName: \"kubernetes.io/projected/13a2ffbf-8e57-4b66-98ca-5e3d9c39c953-kube-api-access-4n2kp\") pod \"controller-manager-5d76f5f7f7-5dnhz\" (UID: \"13a2ffbf-8e57-4b66-98ca-5e3d9c39c953\") " pod="openshift-controller-manager/controller-manager-5d76f5f7f7-5dnhz" Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.769582 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-qrw7j"] Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.795465 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5d76f5f7f7-5dnhz" Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.916388 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9x5mr" Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.916444 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9x5mr" event={"ID":"2ba35eb2-6b7b-45bc-827a-b7c3b5266073","Type":"ContainerDied","Data":"4683446f94428df96de00e6ad956e4b3d8e8defec318042d4e9e721904fe29fa"} Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.916501 4826 scope.go:117] "RemoveContainer" containerID="58f5a857e9b00c52b31d7e28cbce2cce1678920e9484e4cdf390581d0bdcd2ce" Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.920568 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qrw7j" event={"ID":"251ad51e-c383-4684-bfdb-2b9ce8098cc6","Type":"ContainerStarted","Data":"225c2ab5dea23b5fe64c4bc91cf85c711f53541e3630b76579b867ebf653a95d"} Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.924575 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvkpm" event={"ID":"138c4519-aea9-40d6-a633-afe9fe199a6d","Type":"ContainerDied","Data":"eea64576e652fab654ec0b77d67f397aa06219897a2a0be7051fe57f05feb904"} Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.924661 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvkpm" Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.938775 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9x5mr"] Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.939379 4826 generic.go:334] "Generic (PLEG): container finished" podID="1fa7f5cc-ef29-4e91-9419-2d49284c4c98" containerID="759a80d9ceaccc242155d6262914bf92debcdfdc824f1fc721b2ac080c349018" exitCode=0 Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.939468 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5njx" event={"ID":"1fa7f5cc-ef29-4e91-9419-2d49284c4c98","Type":"ContainerDied","Data":"759a80d9ceaccc242155d6262914bf92debcdfdc824f1fc721b2ac080c349018"} Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.946704 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9x5mr"] Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.950571 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvkpm"] Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.954460 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvkpm"] Jan 31 07:39:16 crc kubenswrapper[4826]: I0131 07:39:16.955860 4826 scope.go:117] "RemoveContainer" containerID="94cb17d7b0aefe4b2c18328732cb35b0bec0085e553f5f9d565aa9568a2f6d95" Jan 31 07:39:16 crc kubenswrapper[4826]: E0131 07:39:16.956472 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-w2pp4" podUID="04028e85-fcfa-4463-8279-00c5018bde40" Jan 31 07:39:16 crc kubenswrapper[4826]: E0131 07:39:16.956780 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-czvwd" podUID="0108e13b-6622-4b3c-a0b3-7e91572001aa" Jan 31 07:39:16 crc kubenswrapper[4826]: E0131 07:39:16.962197 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-m8854" podUID="f7624cfd-9296-45c5-86a9-2344eb6f976e" Jan 31 07:39:16 crc kubenswrapper[4826]: E0131 07:39:16.962276 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-2zbk2" podUID="ad75cbef-01f1-46ef-bfe2-d1e864e2efe1" Jan 31 07:39:17 crc kubenswrapper[4826]: I0131 07:39:17.241545 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d76f5f7f7-5dnhz"] Jan 31 07:39:17 crc kubenswrapper[4826]: W0131 07:39:17.252074 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13a2ffbf_8e57_4b66_98ca_5e3d9c39c953.slice/crio-ffb73e4fa941abc397140262f7b17d506460e0f39a180327622968d664dace89 WatchSource:0}: Error finding container ffb73e4fa941abc397140262f7b17d506460e0f39a180327622968d664dace89: Status 404 returned error can't find the container with id ffb73e4fa941abc397140262f7b17d506460e0f39a180327622968d664dace89 Jan 31 07:39:17 crc kubenswrapper[4826]: I0131 07:39:17.434054 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5d76f5f7f7-5dnhz"] Jan 31 07:39:17 crc kubenswrapper[4826]: I0131 07:39:17.559659 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-869ff56f57-q59fv"] Jan 31 07:39:17 crc kubenswrapper[4826]: I0131 07:39:17.560527 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-869ff56f57-q59fv" Jan 31 07:39:17 crc kubenswrapper[4826]: I0131 07:39:17.562780 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 31 07:39:17 crc kubenswrapper[4826]: I0131 07:39:17.562906 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 07:39:17 crc kubenswrapper[4826]: I0131 07:39:17.562998 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 31 07:39:17 crc kubenswrapper[4826]: I0131 07:39:17.563149 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 31 07:39:17 crc kubenswrapper[4826]: I0131 07:39:17.563158 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 31 07:39:17 crc kubenswrapper[4826]: I0131 07:39:17.569225 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 31 07:39:17 crc kubenswrapper[4826]: I0131 07:39:17.576765 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-869ff56f57-q59fv"] Jan 31 07:39:17 crc kubenswrapper[4826]: I0131 07:39:17.743248 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7d7z\" (UniqueName: \"kubernetes.io/projected/9403e838-1048-4b1f-8e19-c26ac04c2d0f-kube-api-access-q7d7z\") pod \"route-controller-manager-869ff56f57-q59fv\" (UID: \"9403e838-1048-4b1f-8e19-c26ac04c2d0f\") " pod="openshift-route-controller-manager/route-controller-manager-869ff56f57-q59fv" Jan 31 07:39:17 crc kubenswrapper[4826]: I0131 07:39:17.743323 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9403e838-1048-4b1f-8e19-c26ac04c2d0f-serving-cert\") pod \"route-controller-manager-869ff56f57-q59fv\" (UID: \"9403e838-1048-4b1f-8e19-c26ac04c2d0f\") " pod="openshift-route-controller-manager/route-controller-manager-869ff56f57-q59fv" Jan 31 07:39:17 crc kubenswrapper[4826]: I0131 07:39:17.743357 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9403e838-1048-4b1f-8e19-c26ac04c2d0f-config\") pod \"route-controller-manager-869ff56f57-q59fv\" (UID: \"9403e838-1048-4b1f-8e19-c26ac04c2d0f\") " pod="openshift-route-controller-manager/route-controller-manager-869ff56f57-q59fv" Jan 31 07:39:17 crc kubenswrapper[4826]: I0131 07:39:17.743611 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9403e838-1048-4b1f-8e19-c26ac04c2d0f-client-ca\") pod \"route-controller-manager-869ff56f57-q59fv\" (UID: \"9403e838-1048-4b1f-8e19-c26ac04c2d0f\") " pod="openshift-route-controller-manager/route-controller-manager-869ff56f57-q59fv" Jan 31 07:39:17 crc kubenswrapper[4826]: I0131 07:39:17.844851 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9403e838-1048-4b1f-8e19-c26ac04c2d0f-client-ca\") pod 
\"route-controller-manager-869ff56f57-q59fv\" (UID: \"9403e838-1048-4b1f-8e19-c26ac04c2d0f\") " pod="openshift-route-controller-manager/route-controller-manager-869ff56f57-q59fv" Jan 31 07:39:17 crc kubenswrapper[4826]: I0131 07:39:17.844949 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7d7z\" (UniqueName: \"kubernetes.io/projected/9403e838-1048-4b1f-8e19-c26ac04c2d0f-kube-api-access-q7d7z\") pod \"route-controller-manager-869ff56f57-q59fv\" (UID: \"9403e838-1048-4b1f-8e19-c26ac04c2d0f\") " pod="openshift-route-controller-manager/route-controller-manager-869ff56f57-q59fv" Jan 31 07:39:17 crc kubenswrapper[4826]: I0131 07:39:17.845013 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9403e838-1048-4b1f-8e19-c26ac04c2d0f-serving-cert\") pod \"route-controller-manager-869ff56f57-q59fv\" (UID: \"9403e838-1048-4b1f-8e19-c26ac04c2d0f\") " pod="openshift-route-controller-manager/route-controller-manager-869ff56f57-q59fv" Jan 31 07:39:17 crc kubenswrapper[4826]: I0131 07:39:17.845069 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9403e838-1048-4b1f-8e19-c26ac04c2d0f-config\") pod \"route-controller-manager-869ff56f57-q59fv\" (UID: \"9403e838-1048-4b1f-8e19-c26ac04c2d0f\") " pod="openshift-route-controller-manager/route-controller-manager-869ff56f57-q59fv" Jan 31 07:39:17 crc kubenswrapper[4826]: I0131 07:39:17.845724 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9403e838-1048-4b1f-8e19-c26ac04c2d0f-client-ca\") pod \"route-controller-manager-869ff56f57-q59fv\" (UID: \"9403e838-1048-4b1f-8e19-c26ac04c2d0f\") " pod="openshift-route-controller-manager/route-controller-manager-869ff56f57-q59fv" Jan 31 07:39:17 crc kubenswrapper[4826]: I0131 07:39:17.847048 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9403e838-1048-4b1f-8e19-c26ac04c2d0f-config\") pod \"route-controller-manager-869ff56f57-q59fv\" (UID: \"9403e838-1048-4b1f-8e19-c26ac04c2d0f\") " pod="openshift-route-controller-manager/route-controller-manager-869ff56f57-q59fv" Jan 31 07:39:17 crc kubenswrapper[4826]: I0131 07:39:17.851909 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9403e838-1048-4b1f-8e19-c26ac04c2d0f-serving-cert\") pod \"route-controller-manager-869ff56f57-q59fv\" (UID: \"9403e838-1048-4b1f-8e19-c26ac04c2d0f\") " pod="openshift-route-controller-manager/route-controller-manager-869ff56f57-q59fv" Jan 31 07:39:17 crc kubenswrapper[4826]: I0131 07:39:17.871716 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7d7z\" (UniqueName: \"kubernetes.io/projected/9403e838-1048-4b1f-8e19-c26ac04c2d0f-kube-api-access-q7d7z\") pod \"route-controller-manager-869ff56f57-q59fv\" (UID: \"9403e838-1048-4b1f-8e19-c26ac04c2d0f\") " pod="openshift-route-controller-manager/route-controller-manager-869ff56f57-q59fv" Jan 31 07:39:17 crc kubenswrapper[4826]: I0131 07:39:17.874846 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-869ff56f57-q59fv" Jan 31 07:39:17 crc kubenswrapper[4826]: I0131 07:39:17.982626 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qrw7j" event={"ID":"251ad51e-c383-4684-bfdb-2b9ce8098cc6","Type":"ContainerStarted","Data":"3ba1006a6de9a2cdbfec65f381bfe60896669ac28d8c2eb37e2c3bc51841ca1b"} Jan 31 07:39:17 crc kubenswrapper[4826]: I0131 07:39:17.982884 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qrw7j" event={"ID":"251ad51e-c383-4684-bfdb-2b9ce8098cc6","Type":"ContainerStarted","Data":"7812169fba1d5c4e5cbd4730368cff281fa23f190ad70981332516ed14c5090a"} Jan 31 07:39:17 crc kubenswrapper[4826]: I0131 07:39:17.988937 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d76f5f7f7-5dnhz" event={"ID":"13a2ffbf-8e57-4b66-98ca-5e3d9c39c953","Type":"ContainerStarted","Data":"8268855a0538a382676d586c0ea8e173b5cf03803bfb53e9e8cb695e195f1e05"} Jan 31 07:39:17 crc kubenswrapper[4826]: I0131 07:39:17.988985 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d76f5f7f7-5dnhz" event={"ID":"13a2ffbf-8e57-4b66-98ca-5e3d9c39c953","Type":"ContainerStarted","Data":"ffb73e4fa941abc397140262f7b17d506460e0f39a180327622968d664dace89"} Jan 31 07:39:17 crc kubenswrapper[4826]: I0131 07:39:17.989804 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5d76f5f7f7-5dnhz" Jan 31 07:39:18 crc kubenswrapper[4826]: I0131 07:39:18.006815 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5njx" event={"ID":"1fa7f5cc-ef29-4e91-9419-2d49284c4c98","Type":"ContainerStarted","Data":"a62b0f68e48181a7ccc6eceb5dcf274388984a9f63c0daeb07d9531f41ea37a3"} Jan 31 07:39:18 crc kubenswrapper[4826]: I0131 07:39:18.007979 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-qrw7j" podStartSLOduration=169.007950445 podStartE2EDuration="2m49.007950445s" podCreationTimestamp="2026-01-31 07:36:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:39:18.00604748 +0000 UTC m=+189.859933839" watchObservedRunningTime="2026-01-31 07:39:18.007950445 +0000 UTC m=+189.861836804" Jan 31 07:39:18 crc kubenswrapper[4826]: I0131 07:39:18.008160 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5d76f5f7f7-5dnhz" Jan 31 07:39:18 crc kubenswrapper[4826]: I0131 07:39:18.082879 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5d76f5f7f7-5dnhz" podStartSLOduration=21.082853822 podStartE2EDuration="21.082853822s" podCreationTimestamp="2026-01-31 07:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:39:18.047319284 +0000 UTC m=+189.901205663" watchObservedRunningTime="2026-01-31 07:39:18.082853822 +0000 UTC m=+189.936740181" Jan 31 07:39:18 crc kubenswrapper[4826]: I0131 07:39:18.083259 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-s5njx" 
podStartSLOduration=2.086088386 podStartE2EDuration="39.083254194s" podCreationTimestamp="2026-01-31 07:38:39 +0000 UTC" firstStartedPulling="2026-01-31 07:38:40.431653233 +0000 UTC m=+152.285539592" lastFinishedPulling="2026-01-31 07:39:17.428819041 +0000 UTC m=+189.282705400" observedRunningTime="2026-01-31 07:39:18.078514646 +0000 UTC m=+189.932401015" watchObservedRunningTime="2026-01-31 07:39:18.083254194 +0000 UTC m=+189.937140553" Jan 31 07:39:18 crc kubenswrapper[4826]: I0131 07:39:18.160565 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-869ff56f57-q59fv"] Jan 31 07:39:18 crc kubenswrapper[4826]: W0131 07:39:18.167315 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9403e838_1048_4b1f_8e19_c26ac04c2d0f.slice/crio-8f2e827fc5bbface22739b052ec183038370e952c9fe4d7379610fba73bdb87a WatchSource:0}: Error finding container 8f2e827fc5bbface22739b052ec183038370e952c9fe4d7379610fba73bdb87a: Status 404 returned error can't find the container with id 8f2e827fc5bbface22739b052ec183038370e952c9fe4d7379610fba73bdb87a Jan 31 07:39:18 crc kubenswrapper[4826]: I0131 07:39:18.588041 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 31 07:39:18 crc kubenswrapper[4826]: I0131 07:39:18.596035 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 31 07:39:18 crc kubenswrapper[4826]: I0131 07:39:18.599533 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 07:39:18 crc kubenswrapper[4826]: I0131 07:39:18.601927 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 31 07:39:18 crc kubenswrapper[4826]: I0131 07:39:18.602290 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 31 07:39:18 crc kubenswrapper[4826]: I0131 07:39:18.660038 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb0684ef-3a8c-478d-9a63-529919ab83ce-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"eb0684ef-3a8c-478d-9a63-529919ab83ce\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 07:39:18 crc kubenswrapper[4826]: I0131 07:39:18.660112 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eb0684ef-3a8c-478d-9a63-529919ab83ce-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"eb0684ef-3a8c-478d-9a63-529919ab83ce\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 07:39:18 crc kubenswrapper[4826]: I0131 07:39:18.761919 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb0684ef-3a8c-478d-9a63-529919ab83ce-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"eb0684ef-3a8c-478d-9a63-529919ab83ce\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 07:39:18 crc kubenswrapper[4826]: I0131 07:39:18.762136 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eb0684ef-3a8c-478d-9a63-529919ab83ce-kubelet-dir\") pod 
\"revision-pruner-9-crc\" (UID: \"eb0684ef-3a8c-478d-9a63-529919ab83ce\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 07:39:18 crc kubenswrapper[4826]: I0131 07:39:18.762283 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eb0684ef-3a8c-478d-9a63-529919ab83ce-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"eb0684ef-3a8c-478d-9a63-529919ab83ce\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 07:39:18 crc kubenswrapper[4826]: I0131 07:39:18.780818 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb0684ef-3a8c-478d-9a63-529919ab83ce-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"eb0684ef-3a8c-478d-9a63-529919ab83ce\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 07:39:18 crc kubenswrapper[4826]: I0131 07:39:18.818582 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="138c4519-aea9-40d6-a633-afe9fe199a6d" path="/var/lib/kubelet/pods/138c4519-aea9-40d6-a633-afe9fe199a6d/volumes" Jan 31 07:39:18 crc kubenswrapper[4826]: I0131 07:39:18.819554 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ba35eb2-6b7b-45bc-827a-b7c3b5266073" path="/var/lib/kubelet/pods/2ba35eb2-6b7b-45bc-827a-b7c3b5266073/volumes" Jan 31 07:39:18 crc kubenswrapper[4826]: I0131 07:39:18.920093 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 07:39:19 crc kubenswrapper[4826]: I0131 07:39:19.033288 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-869ff56f57-q59fv" event={"ID":"9403e838-1048-4b1f-8e19-c26ac04c2d0f","Type":"ContainerStarted","Data":"b8f4c5b13e34e98bc4b2846b7dd2189097cda48b6259d7d2475d880afe4dd00c"} Jan 31 07:39:19 crc kubenswrapper[4826]: I0131 07:39:19.033691 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-869ff56f57-q59fv" event={"ID":"9403e838-1048-4b1f-8e19-c26ac04c2d0f","Type":"ContainerStarted","Data":"8f2e827fc5bbface22739b052ec183038370e952c9fe4d7379610fba73bdb87a"} Jan 31 07:39:19 crc kubenswrapper[4826]: I0131 07:39:19.034132 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-869ff56f57-q59fv" Jan 31 07:39:19 crc kubenswrapper[4826]: I0131 07:39:19.034729 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5d76f5f7f7-5dnhz" podUID="13a2ffbf-8e57-4b66-98ca-5e3d9c39c953" containerName="controller-manager" containerID="cri-o://8268855a0538a382676d586c0ea8e173b5cf03803bfb53e9e8cb695e195f1e05" gracePeriod=30 Jan 31 07:39:19 crc kubenswrapper[4826]: I0131 07:39:19.048849 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-869ff56f57-q59fv" Jan 31 07:39:19 crc kubenswrapper[4826]: I0131 07:39:19.061189 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-869ff56f57-q59fv" podStartSLOduration=2.061169344 podStartE2EDuration="2.061169344s" podCreationTimestamp="2026-01-31 07:39:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-01-31 07:39:19.050762913 +0000 UTC m=+190.904649272" watchObservedRunningTime="2026-01-31 07:39:19.061169344 +0000 UTC m=+190.915055703" Jan 31 07:39:19 crc kubenswrapper[4826]: I0131 07:39:19.131685 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 31 07:39:19 crc kubenswrapper[4826]: I0131 07:39:19.418212 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d76f5f7f7-5dnhz" Jan 31 07:39:19 crc kubenswrapper[4826]: I0131 07:39:19.447188 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6999fbfcd8-hj5bn"] Jan 31 07:39:19 crc kubenswrapper[4826]: E0131 07:39:19.447422 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13a2ffbf-8e57-4b66-98ca-5e3d9c39c953" containerName="controller-manager" Jan 31 07:39:19 crc kubenswrapper[4826]: I0131 07:39:19.447433 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="13a2ffbf-8e57-4b66-98ca-5e3d9c39c953" containerName="controller-manager" Jan 31 07:39:19 crc kubenswrapper[4826]: I0131 07:39:19.447528 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="13a2ffbf-8e57-4b66-98ca-5e3d9c39c953" containerName="controller-manager" Jan 31 07:39:19 crc kubenswrapper[4826]: I0131 07:39:19.447917 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6999fbfcd8-hj5bn" Jan 31 07:39:19 crc kubenswrapper[4826]: I0131 07:39:19.459452 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6999fbfcd8-hj5bn"] Jan 31 07:39:19 crc kubenswrapper[4826]: I0131 07:39:19.474234 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/13a2ffbf-8e57-4b66-98ca-5e3d9c39c953-proxy-ca-bundles\") pod \"13a2ffbf-8e57-4b66-98ca-5e3d9c39c953\" (UID: \"13a2ffbf-8e57-4b66-98ca-5e3d9c39c953\") " Jan 31 07:39:19 crc kubenswrapper[4826]: I0131 07:39:19.475110 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/13a2ffbf-8e57-4b66-98ca-5e3d9c39c953-client-ca\") pod \"13a2ffbf-8e57-4b66-98ca-5e3d9c39c953\" (UID: \"13a2ffbf-8e57-4b66-98ca-5e3d9c39c953\") " Jan 31 07:39:19 crc kubenswrapper[4826]: I0131 07:39:19.475278 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4n2kp\" (UniqueName: \"kubernetes.io/projected/13a2ffbf-8e57-4b66-98ca-5e3d9c39c953-kube-api-access-4n2kp\") pod \"13a2ffbf-8e57-4b66-98ca-5e3d9c39c953\" (UID: \"13a2ffbf-8e57-4b66-98ca-5e3d9c39c953\") " Jan 31 07:39:19 crc kubenswrapper[4826]: I0131 07:39:19.475551 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13a2ffbf-8e57-4b66-98ca-5e3d9c39c953-config\") pod \"13a2ffbf-8e57-4b66-98ca-5e3d9c39c953\" (UID: \"13a2ffbf-8e57-4b66-98ca-5e3d9c39c953\") " Jan 31 07:39:19 crc kubenswrapper[4826]: I0131 07:39:19.475721 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13a2ffbf-8e57-4b66-98ca-5e3d9c39c953-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "13a2ffbf-8e57-4b66-98ca-5e3d9c39c953" (UID: "13a2ffbf-8e57-4b66-98ca-5e3d9c39c953"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:39:19 crc kubenswrapper[4826]: I0131 07:39:19.475762 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13a2ffbf-8e57-4b66-98ca-5e3d9c39c953-serving-cert\") pod \"13a2ffbf-8e57-4b66-98ca-5e3d9c39c953\" (UID: \"13a2ffbf-8e57-4b66-98ca-5e3d9c39c953\") " Jan 31 07:39:19 crc kubenswrapper[4826]: I0131 07:39:19.477027 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/add3c875-4992-422d-a16c-55e6863e38ad-client-ca\") pod \"controller-manager-6999fbfcd8-hj5bn\" (UID: \"add3c875-4992-422d-a16c-55e6863e38ad\") " pod="openshift-controller-manager/controller-manager-6999fbfcd8-hj5bn" Jan 31 07:39:19 crc kubenswrapper[4826]: I0131 07:39:19.477178 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/add3c875-4992-422d-a16c-55e6863e38ad-serving-cert\") pod \"controller-manager-6999fbfcd8-hj5bn\" (UID: \"add3c875-4992-422d-a16c-55e6863e38ad\") " pod="openshift-controller-manager/controller-manager-6999fbfcd8-hj5bn" Jan 31 07:39:19 crc kubenswrapper[4826]: I0131 07:39:19.477339 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/add3c875-4992-422d-a16c-55e6863e38ad-config\") pod \"controller-manager-6999fbfcd8-hj5bn\" (UID: \"add3c875-4992-422d-a16c-55e6863e38ad\") " pod="openshift-controller-manager/controller-manager-6999fbfcd8-hj5bn" Jan 31 07:39:19 crc kubenswrapper[4826]: I0131 07:39:19.477416 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnbjv\" (UniqueName: \"kubernetes.io/projected/add3c875-4992-422d-a16c-55e6863e38ad-kube-api-access-fnbjv\") pod \"controller-manager-6999fbfcd8-hj5bn\" (UID: \"add3c875-4992-422d-a16c-55e6863e38ad\") " pod="openshift-controller-manager/controller-manager-6999fbfcd8-hj5bn" Jan 31 07:39:19 crc kubenswrapper[4826]: I0131 07:39:19.477479 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/add3c875-4992-422d-a16c-55e6863e38ad-proxy-ca-bundles\") pod \"controller-manager-6999fbfcd8-hj5bn\" (UID: \"add3c875-4992-422d-a16c-55e6863e38ad\") " pod="openshift-controller-manager/controller-manager-6999fbfcd8-hj5bn" Jan 31 07:39:19 crc kubenswrapper[4826]: I0131 07:39:19.477877 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13a2ffbf-8e57-4b66-98ca-5e3d9c39c953-config" (OuterVolumeSpecName: "config") pod "13a2ffbf-8e57-4b66-98ca-5e3d9c39c953" (UID: "13a2ffbf-8e57-4b66-98ca-5e3d9c39c953"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:39:19 crc kubenswrapper[4826]: I0131 07:39:19.478117 4826 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/13a2ffbf-8e57-4b66-98ca-5e3d9c39c953-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:19 crc kubenswrapper[4826]: I0131 07:39:19.478751 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13a2ffbf-8e57-4b66-98ca-5e3d9c39c953-client-ca" (OuterVolumeSpecName: "client-ca") pod "13a2ffbf-8e57-4b66-98ca-5e3d9c39c953" (UID: "13a2ffbf-8e57-4b66-98ca-5e3d9c39c953"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:39:19 crc kubenswrapper[4826]: I0131 07:39:19.482446 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13a2ffbf-8e57-4b66-98ca-5e3d9c39c953-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "13a2ffbf-8e57-4b66-98ca-5e3d9c39c953" (UID: "13a2ffbf-8e57-4b66-98ca-5e3d9c39c953"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:39:19 crc kubenswrapper[4826]: I0131 07:39:19.482549 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13a2ffbf-8e57-4b66-98ca-5e3d9c39c953-kube-api-access-4n2kp" (OuterVolumeSpecName: "kube-api-access-4n2kp") pod "13a2ffbf-8e57-4b66-98ca-5e3d9c39c953" (UID: "13a2ffbf-8e57-4b66-98ca-5e3d9c39c953"). InnerVolumeSpecName "kube-api-access-4n2kp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:39:19 crc kubenswrapper[4826]: I0131 07:39:19.579771 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/add3c875-4992-422d-a16c-55e6863e38ad-config\") pod \"controller-manager-6999fbfcd8-hj5bn\" (UID: \"add3c875-4992-422d-a16c-55e6863e38ad\") " pod="openshift-controller-manager/controller-manager-6999fbfcd8-hj5bn" Jan 31 07:39:19 crc kubenswrapper[4826]: I0131 07:39:19.579900 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnbjv\" (UniqueName: \"kubernetes.io/projected/add3c875-4992-422d-a16c-55e6863e38ad-kube-api-access-fnbjv\") pod \"controller-manager-6999fbfcd8-hj5bn\" (UID: \"add3c875-4992-422d-a16c-55e6863e38ad\") " pod="openshift-controller-manager/controller-manager-6999fbfcd8-hj5bn" Jan 31 07:39:19 crc kubenswrapper[4826]: I0131 07:39:19.579950 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/add3c875-4992-422d-a16c-55e6863e38ad-proxy-ca-bundles\") pod \"controller-manager-6999fbfcd8-hj5bn\" (UID: \"add3c875-4992-422d-a16c-55e6863e38ad\") " pod="openshift-controller-manager/controller-manager-6999fbfcd8-hj5bn" Jan 31 07:39:19 crc kubenswrapper[4826]: I0131 07:39:19.580004 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/add3c875-4992-422d-a16c-55e6863e38ad-client-ca\") pod \"controller-manager-6999fbfcd8-hj5bn\" (UID: \"add3c875-4992-422d-a16c-55e6863e38ad\") " pod="openshift-controller-manager/controller-manager-6999fbfcd8-hj5bn" Jan 31 07:39:19 crc kubenswrapper[4826]: I0131 07:39:19.580035 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/add3c875-4992-422d-a16c-55e6863e38ad-serving-cert\") pod \"controller-manager-6999fbfcd8-hj5bn\" (UID: \"add3c875-4992-422d-a16c-55e6863e38ad\") " pod="openshift-controller-manager/controller-manager-6999fbfcd8-hj5bn" Jan 31 07:39:19 crc kubenswrapper[4826]: I0131 07:39:19.580095 4826 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13a2ffbf-8e57-4b66-98ca-5e3d9c39c953-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:19 crc kubenswrapper[4826]: I0131 07:39:19.580111 4826 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/13a2ffbf-8e57-4b66-98ca-5e3d9c39c953-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:19 crc kubenswrapper[4826]: I0131 07:39:19.580123 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4n2kp\" (UniqueName: \"kubernetes.io/projected/13a2ffbf-8e57-4b66-98ca-5e3d9c39c953-kube-api-access-4n2kp\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:19 crc kubenswrapper[4826]: I0131 07:39:19.580136 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13a2ffbf-8e57-4b66-98ca-5e3d9c39c953-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:19 crc kubenswrapper[4826]: I0131 07:39:19.581004 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/add3c875-4992-422d-a16c-55e6863e38ad-client-ca\") pod \"controller-manager-6999fbfcd8-hj5bn\" (UID: \"add3c875-4992-422d-a16c-55e6863e38ad\") " pod="openshift-controller-manager/controller-manager-6999fbfcd8-hj5bn" Jan 31 07:39:19 crc kubenswrapper[4826]: I0131 07:39:19.581132 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/add3c875-4992-422d-a16c-55e6863e38ad-proxy-ca-bundles\") pod \"controller-manager-6999fbfcd8-hj5bn\" (UID: \"add3c875-4992-422d-a16c-55e6863e38ad\") " pod="openshift-controller-manager/controller-manager-6999fbfcd8-hj5bn" Jan 31 07:39:19 crc kubenswrapper[4826]: I0131 07:39:19.581600 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/add3c875-4992-422d-a16c-55e6863e38ad-config\") pod \"controller-manager-6999fbfcd8-hj5bn\" (UID: \"add3c875-4992-422d-a16c-55e6863e38ad\") " pod="openshift-controller-manager/controller-manager-6999fbfcd8-hj5bn" Jan 31 07:39:19 crc kubenswrapper[4826]: I0131 07:39:19.583620 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/add3c875-4992-422d-a16c-55e6863e38ad-serving-cert\") pod \"controller-manager-6999fbfcd8-hj5bn\" (UID: \"add3c875-4992-422d-a16c-55e6863e38ad\") " pod="openshift-controller-manager/controller-manager-6999fbfcd8-hj5bn" Jan 31 07:39:19 crc kubenswrapper[4826]: I0131 07:39:19.595368 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnbjv\" (UniqueName: \"kubernetes.io/projected/add3c875-4992-422d-a16c-55e6863e38ad-kube-api-access-fnbjv\") pod \"controller-manager-6999fbfcd8-hj5bn\" (UID: \"add3c875-4992-422d-a16c-55e6863e38ad\") " pod="openshift-controller-manager/controller-manager-6999fbfcd8-hj5bn" Jan 31 07:39:19 crc kubenswrapper[4826]: I0131 07:39:19.604514 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-s5njx" Jan 31 
07:39:19 crc kubenswrapper[4826]: I0131 07:39:19.604585 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-s5njx" Jan 31 07:39:19 crc kubenswrapper[4826]: I0131 07:39:19.776878 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6999fbfcd8-hj5bn" Jan 31 07:39:20 crc kubenswrapper[4826]: I0131 07:39:20.038388 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"eb0684ef-3a8c-478d-9a63-529919ab83ce","Type":"ContainerStarted","Data":"2a6ba2b190d168f55774b8c794200614e1d0edc66bbe5f347c8a6b44b1a95839"} Jan 31 07:39:20 crc kubenswrapper[4826]: I0131 07:39:20.038762 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"eb0684ef-3a8c-478d-9a63-529919ab83ce","Type":"ContainerStarted","Data":"b38e25e7cf2e7f8e481da652aede27d8b602baf79faad52196105b4a02422d92"} Jan 31 07:39:20 crc kubenswrapper[4826]: I0131 07:39:20.039824 4826 generic.go:334] "Generic (PLEG): container finished" podID="13a2ffbf-8e57-4b66-98ca-5e3d9c39c953" containerID="8268855a0538a382676d586c0ea8e173b5cf03803bfb53e9e8cb695e195f1e05" exitCode=0 Jan 31 07:39:20 crc kubenswrapper[4826]: I0131 07:39:20.039900 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d76f5f7f7-5dnhz" Jan 31 07:39:20 crc kubenswrapper[4826]: I0131 07:39:20.039890 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d76f5f7f7-5dnhz" event={"ID":"13a2ffbf-8e57-4b66-98ca-5e3d9c39c953","Type":"ContainerDied","Data":"8268855a0538a382676d586c0ea8e173b5cf03803bfb53e9e8cb695e195f1e05"} Jan 31 07:39:20 crc kubenswrapper[4826]: I0131 07:39:20.040062 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d76f5f7f7-5dnhz" event={"ID":"13a2ffbf-8e57-4b66-98ca-5e3d9c39c953","Type":"ContainerDied","Data":"ffb73e4fa941abc397140262f7b17d506460e0f39a180327622968d664dace89"} Jan 31 07:39:20 crc kubenswrapper[4826]: I0131 07:39:20.040084 4826 scope.go:117] "RemoveContainer" containerID="8268855a0538a382676d586c0ea8e173b5cf03803bfb53e9e8cb695e195f1e05" Jan 31 07:39:20 crc kubenswrapper[4826]: I0131 07:39:20.063956 4826 scope.go:117] "RemoveContainer" containerID="8268855a0538a382676d586c0ea8e173b5cf03803bfb53e9e8cb695e195f1e05" Jan 31 07:39:20 crc kubenswrapper[4826]: E0131 07:39:20.064543 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8268855a0538a382676d586c0ea8e173b5cf03803bfb53e9e8cb695e195f1e05\": container with ID starting with 8268855a0538a382676d586c0ea8e173b5cf03803bfb53e9e8cb695e195f1e05 not found: ID does not exist" containerID="8268855a0538a382676d586c0ea8e173b5cf03803bfb53e9e8cb695e195f1e05" Jan 31 07:39:20 crc kubenswrapper[4826]: I0131 07:39:20.064609 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8268855a0538a382676d586c0ea8e173b5cf03803bfb53e9e8cb695e195f1e05"} err="failed to get container status \"8268855a0538a382676d586c0ea8e173b5cf03803bfb53e9e8cb695e195f1e05\": rpc error: code = NotFound desc = could not find container \"8268855a0538a382676d586c0ea8e173b5cf03803bfb53e9e8cb695e195f1e05\": container with ID starting with 
8268855a0538a382676d586c0ea8e173b5cf03803bfb53e9e8cb695e195f1e05 not found: ID does not exist" Jan 31 07:39:20 crc kubenswrapper[4826]: I0131 07:39:20.086894 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.086862256 podStartE2EDuration="2.086862256s" podCreationTimestamp="2026-01-31 07:39:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:39:20.050892936 +0000 UTC m=+191.904779305" watchObservedRunningTime="2026-01-31 07:39:20.086862256 +0000 UTC m=+191.940748615" Jan 31 07:39:20 crc kubenswrapper[4826]: I0131 07:39:20.087734 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5d76f5f7f7-5dnhz"] Jan 31 07:39:20 crc kubenswrapper[4826]: I0131 07:39:20.090472 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5d76f5f7f7-5dnhz"] Jan 31 07:39:20 crc kubenswrapper[4826]: I0131 07:39:20.179810 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6999fbfcd8-hj5bn"] Jan 31 07:39:20 crc kubenswrapper[4826]: W0131 07:39:20.230487 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadd3c875_4992_422d_a16c_55e6863e38ad.slice/crio-062ae6fe99164f247280a252ae6fe3438592a4b9aaaa42bdbf802da7cb80dc31 WatchSource:0}: Error finding container 062ae6fe99164f247280a252ae6fe3438592a4b9aaaa42bdbf802da7cb80dc31: Status 404 returned error can't find the container with id 062ae6fe99164f247280a252ae6fe3438592a4b9aaaa42bdbf802da7cb80dc31 Jan 31 07:39:20 crc kubenswrapper[4826]: I0131 07:39:20.721098 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-s5njx" podUID="1fa7f5cc-ef29-4e91-9419-2d49284c4c98" containerName="registry-server" probeResult="failure" output=< Jan 31 07:39:20 crc kubenswrapper[4826]: timeout: failed to connect service ":50051" within 1s Jan 31 07:39:20 crc kubenswrapper[4826]: > Jan 31 07:39:20 crc kubenswrapper[4826]: I0131 07:39:20.815839 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13a2ffbf-8e57-4b66-98ca-5e3d9c39c953" path="/var/lib/kubelet/pods/13a2ffbf-8e57-4b66-98ca-5e3d9c39c953/volumes" Jan 31 07:39:21 crc kubenswrapper[4826]: I0131 07:39:21.046799 4826 generic.go:334] "Generic (PLEG): container finished" podID="eb0684ef-3a8c-478d-9a63-529919ab83ce" containerID="2a6ba2b190d168f55774b8c794200614e1d0edc66bbe5f347c8a6b44b1a95839" exitCode=0 Jan 31 07:39:21 crc kubenswrapper[4826]: I0131 07:39:21.046857 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"eb0684ef-3a8c-478d-9a63-529919ab83ce","Type":"ContainerDied","Data":"2a6ba2b190d168f55774b8c794200614e1d0edc66bbe5f347c8a6b44b1a95839"} Jan 31 07:39:21 crc kubenswrapper[4826]: I0131 07:39:21.049845 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6999fbfcd8-hj5bn" event={"ID":"add3c875-4992-422d-a16c-55e6863e38ad","Type":"ContainerStarted","Data":"61c02f6b370efe33bb6d612925773ad663e925c21b28750305aeb6e67f3a6942"} Jan 31 07:39:21 crc kubenswrapper[4826]: I0131 07:39:21.049888 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6999fbfcd8-hj5bn" 
event={"ID":"add3c875-4992-422d-a16c-55e6863e38ad","Type":"ContainerStarted","Data":"062ae6fe99164f247280a252ae6fe3438592a4b9aaaa42bdbf802da7cb80dc31"} Jan 31 07:39:21 crc kubenswrapper[4826]: I0131 07:39:21.052061 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6999fbfcd8-hj5bn" Jan 31 07:39:21 crc kubenswrapper[4826]: I0131 07:39:21.058816 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6999fbfcd8-hj5bn" Jan 31 07:39:21 crc kubenswrapper[4826]: I0131 07:39:21.093828 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6999fbfcd8-hj5bn" podStartSLOduration=4.093805367 podStartE2EDuration="4.093805367s" podCreationTimestamp="2026-01-31 07:39:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:39:21.090051758 +0000 UTC m=+192.943938137" watchObservedRunningTime="2026-01-31 07:39:21.093805367 +0000 UTC m=+192.947691726" Jan 31 07:39:22 crc kubenswrapper[4826]: I0131 07:39:22.327703 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 07:39:22 crc kubenswrapper[4826]: I0131 07:39:22.413272 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eb0684ef-3a8c-478d-9a63-529919ab83ce-kubelet-dir\") pod \"eb0684ef-3a8c-478d-9a63-529919ab83ce\" (UID: \"eb0684ef-3a8c-478d-9a63-529919ab83ce\") " Jan 31 07:39:22 crc kubenswrapper[4826]: I0131 07:39:22.413372 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb0684ef-3a8c-478d-9a63-529919ab83ce-kube-api-access\") pod \"eb0684ef-3a8c-478d-9a63-529919ab83ce\" (UID: \"eb0684ef-3a8c-478d-9a63-529919ab83ce\") " Jan 31 07:39:22 crc kubenswrapper[4826]: I0131 07:39:22.413364 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb0684ef-3a8c-478d-9a63-529919ab83ce-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "eb0684ef-3a8c-478d-9a63-529919ab83ce" (UID: "eb0684ef-3a8c-478d-9a63-529919ab83ce"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:39:22 crc kubenswrapper[4826]: I0131 07:39:22.414558 4826 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eb0684ef-3a8c-478d-9a63-529919ab83ce-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:22 crc kubenswrapper[4826]: I0131 07:39:22.419223 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb0684ef-3a8c-478d-9a63-529919ab83ce-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "eb0684ef-3a8c-478d-9a63-529919ab83ce" (UID: "eb0684ef-3a8c-478d-9a63-529919ab83ce"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:39:22 crc kubenswrapper[4826]: I0131 07:39:22.516252 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb0684ef-3a8c-478d-9a63-529919ab83ce-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:23 crc kubenswrapper[4826]: I0131 07:39:23.068428 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"eb0684ef-3a8c-478d-9a63-529919ab83ce","Type":"ContainerDied","Data":"b38e25e7cf2e7f8e481da652aede27d8b602baf79faad52196105b4a02422d92"} Jan 31 07:39:23 crc kubenswrapper[4826]: I0131 07:39:23.068474 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 07:39:23 crc kubenswrapper[4826]: I0131 07:39:23.068503 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b38e25e7cf2e7f8e481da652aede27d8b602baf79faad52196105b4a02422d92" Jan 31 07:39:23 crc kubenswrapper[4826]: I0131 07:39:23.789514 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 31 07:39:23 crc kubenswrapper[4826]: E0131 07:39:23.789978 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb0684ef-3a8c-478d-9a63-529919ab83ce" containerName="pruner" Jan 31 07:39:23 crc kubenswrapper[4826]: I0131 07:39:23.789990 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb0684ef-3a8c-478d-9a63-529919ab83ce" containerName="pruner" Jan 31 07:39:23 crc kubenswrapper[4826]: I0131 07:39:23.790094 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb0684ef-3a8c-478d-9a63-529919ab83ce" containerName="pruner" Jan 31 07:39:23 crc kubenswrapper[4826]: I0131 07:39:23.790513 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 07:39:23 crc kubenswrapper[4826]: I0131 07:39:23.795355 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 31 07:39:23 crc kubenswrapper[4826]: I0131 07:39:23.795517 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 31 07:39:23 crc kubenswrapper[4826]: I0131 07:39:23.799488 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 31 07:39:23 crc kubenswrapper[4826]: I0131 07:39:23.833747 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c10690e0-2f15-4102-93fc-caffd46cd9cc-var-lock\") pod \"installer-9-crc\" (UID: \"c10690e0-2f15-4102-93fc-caffd46cd9cc\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 07:39:23 crc kubenswrapper[4826]: I0131 07:39:23.833802 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c10690e0-2f15-4102-93fc-caffd46cd9cc-kube-api-access\") pod \"installer-9-crc\" (UID: \"c10690e0-2f15-4102-93fc-caffd46cd9cc\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 07:39:23 crc kubenswrapper[4826]: I0131 07:39:23.833846 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c10690e0-2f15-4102-93fc-caffd46cd9cc-kubelet-dir\") pod \"installer-9-crc\" (UID: \"c10690e0-2f15-4102-93fc-caffd46cd9cc\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 07:39:23 crc kubenswrapper[4826]: I0131 07:39:23.935547 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c10690e0-2f15-4102-93fc-caffd46cd9cc-var-lock\") pod \"installer-9-crc\" (UID: \"c10690e0-2f15-4102-93fc-caffd46cd9cc\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 07:39:23 crc kubenswrapper[4826]: I0131 07:39:23.935605 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c10690e0-2f15-4102-93fc-caffd46cd9cc-kube-api-access\") pod \"installer-9-crc\" (UID: \"c10690e0-2f15-4102-93fc-caffd46cd9cc\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 07:39:23 crc kubenswrapper[4826]: I0131 07:39:23.935642 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c10690e0-2f15-4102-93fc-caffd46cd9cc-kubelet-dir\") pod \"installer-9-crc\" (UID: \"c10690e0-2f15-4102-93fc-caffd46cd9cc\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 07:39:23 crc kubenswrapper[4826]: I0131 07:39:23.935723 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c10690e0-2f15-4102-93fc-caffd46cd9cc-kubelet-dir\") pod \"installer-9-crc\" (UID: \"c10690e0-2f15-4102-93fc-caffd46cd9cc\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 07:39:23 crc kubenswrapper[4826]: I0131 07:39:23.936181 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c10690e0-2f15-4102-93fc-caffd46cd9cc-var-lock\") pod \"installer-9-crc\" (UID: 
\"c10690e0-2f15-4102-93fc-caffd46cd9cc\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 07:39:23 crc kubenswrapper[4826]: I0131 07:39:23.954040 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c10690e0-2f15-4102-93fc-caffd46cd9cc-kube-api-access\") pod \"installer-9-crc\" (UID: \"c10690e0-2f15-4102-93fc-caffd46cd9cc\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 07:39:24 crc kubenswrapper[4826]: I0131 07:39:24.116474 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 07:39:24 crc kubenswrapper[4826]: I0131 07:39:24.324657 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 31 07:39:25 crc kubenswrapper[4826]: I0131 07:39:25.081755 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"c10690e0-2f15-4102-93fc-caffd46cd9cc","Type":"ContainerStarted","Data":"d2a08a6588c5d35ae6a61fc305afeadfa8b1862126b86b88126c59ce302dd8d1"} Jan 31 07:39:26 crc kubenswrapper[4826]: I0131 07:39:26.088031 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"c10690e0-2f15-4102-93fc-caffd46cd9cc","Type":"ContainerStarted","Data":"7abd79bf41c8eaece424af6ebf0e655cf4a59719ce2520caaba58d95ce0dffa6"} Jan 31 07:39:27 crc kubenswrapper[4826]: I0131 07:39:27.111131 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=4.111113007 podStartE2EDuration="4.111113007s" podCreationTimestamp="2026-01-31 07:39:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:39:27.109819779 +0000 UTC m=+198.963706138" watchObservedRunningTime="2026-01-31 07:39:27.111113007 +0000 UTC m=+198.964999366" Jan 31 07:39:27 crc kubenswrapper[4826]: I0131 07:39:27.376809 4826 patch_prober.go:28] interesting pod/machine-config-daemon-8v6ng container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 07:39:27 crc kubenswrapper[4826]: I0131 07:39:27.378138 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 07:39:29 crc kubenswrapper[4826]: I0131 07:39:29.675141 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-s5njx" Jan 31 07:39:29 crc kubenswrapper[4826]: I0131 07:39:29.727091 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-s5njx" Jan 31 07:39:33 crc kubenswrapper[4826]: I0131 07:39:33.136397 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-scztd" event={"ID":"f7510fe8-16e2-4641-8d16-18b8a1387106","Type":"ContainerStarted","Data":"53255c089fc524b4cd55a25ee640dcd8eda387a8a79126ab60d1687b551aead2"} Jan 31 07:39:33 crc kubenswrapper[4826]: I0131 07:39:33.139321 4826 generic.go:334] 
"Generic (PLEG): container finished" podID="3b28978e-d7a9-41b2-998a-e4a3cd62e236" containerID="d44e0210688039b02a34f3fed7d1bf9d640bd0aefd462112d88a44c2ce93f0c8" exitCode=0 Jan 31 07:39:33 crc kubenswrapper[4826]: I0131 07:39:33.139373 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-snchp" event={"ID":"3b28978e-d7a9-41b2-998a-e4a3cd62e236","Type":"ContainerDied","Data":"d44e0210688039b02a34f3fed7d1bf9d640bd0aefd462112d88a44c2ce93f0c8"} Jan 31 07:39:33 crc kubenswrapper[4826]: I0131 07:39:33.143646 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fhkwh" event={"ID":"2f3a7050-24a4-4f99-ac44-c822d68d5ba5","Type":"ContainerStarted","Data":"0b72d9c97c04bcc9cf68167f12fc21b2c8b2c96a689bca6a809a82beb6ab350b"} Jan 31 07:39:33 crc kubenswrapper[4826]: I0131 07:39:33.150370 4826 generic.go:334] "Generic (PLEG): container finished" podID="04028e85-fcfa-4463-8279-00c5018bde40" containerID="db69cca1e12209a4757286c2c47286ac9575db3986442d01bef4b14a6396c18f" exitCode=0 Jan 31 07:39:33 crc kubenswrapper[4826]: I0131 07:39:33.150410 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2pp4" event={"ID":"04028e85-fcfa-4463-8279-00c5018bde40","Type":"ContainerDied","Data":"db69cca1e12209a4757286c2c47286ac9575db3986442d01bef4b14a6396c18f"} Jan 31 07:39:34 crc kubenswrapper[4826]: I0131 07:39:34.158164 4826 generic.go:334] "Generic (PLEG): container finished" podID="ad75cbef-01f1-46ef-bfe2-d1e864e2efe1" containerID="56cd252c9eb6b901c27ffba98e7b92042293c81abf283dd2177815c5078fc3f8" exitCode=0 Jan 31 07:39:34 crc kubenswrapper[4826]: I0131 07:39:34.158248 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2zbk2" event={"ID":"ad75cbef-01f1-46ef-bfe2-d1e864e2efe1","Type":"ContainerDied","Data":"56cd252c9eb6b901c27ffba98e7b92042293c81abf283dd2177815c5078fc3f8"} Jan 31 07:39:34 crc kubenswrapper[4826]: I0131 07:39:34.161886 4826 generic.go:334] "Generic (PLEG): container finished" podID="f7510fe8-16e2-4641-8d16-18b8a1387106" containerID="53255c089fc524b4cd55a25ee640dcd8eda387a8a79126ab60d1687b551aead2" exitCode=0 Jan 31 07:39:34 crc kubenswrapper[4826]: I0131 07:39:34.162040 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-scztd" event={"ID":"f7510fe8-16e2-4641-8d16-18b8a1387106","Type":"ContainerDied","Data":"53255c089fc524b4cd55a25ee640dcd8eda387a8a79126ab60d1687b551aead2"} Jan 31 07:39:34 crc kubenswrapper[4826]: I0131 07:39:34.164078 4826 generic.go:334] "Generic (PLEG): container finished" podID="f7624cfd-9296-45c5-86a9-2344eb6f976e" containerID="9763930600d59c722fe0a05e8f2d73ec4ca81a0af4b7571dfdf6ff2cbdd67d14" exitCode=0 Jan 31 07:39:34 crc kubenswrapper[4826]: I0131 07:39:34.164139 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m8854" event={"ID":"f7624cfd-9296-45c5-86a9-2344eb6f976e","Type":"ContainerDied","Data":"9763930600d59c722fe0a05e8f2d73ec4ca81a0af4b7571dfdf6ff2cbdd67d14"} Jan 31 07:39:34 crc kubenswrapper[4826]: I0131 07:39:34.167338 4826 generic.go:334] "Generic (PLEG): container finished" podID="2f3a7050-24a4-4f99-ac44-c822d68d5ba5" containerID="0b72d9c97c04bcc9cf68167f12fc21b2c8b2c96a689bca6a809a82beb6ab350b" exitCode=0 Jan 31 07:39:34 crc kubenswrapper[4826]: I0131 07:39:34.167388 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fhkwh" 
event={"ID":"2f3a7050-24a4-4f99-ac44-c822d68d5ba5","Type":"ContainerDied","Data":"0b72d9c97c04bcc9cf68167f12fc21b2c8b2c96a689bca6a809a82beb6ab350b"} Jan 31 07:39:37 crc kubenswrapper[4826]: I0131 07:39:37.438859 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6999fbfcd8-hj5bn"] Jan 31 07:39:37 crc kubenswrapper[4826]: I0131 07:39:37.439475 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6999fbfcd8-hj5bn" podUID="add3c875-4992-422d-a16c-55e6863e38ad" containerName="controller-manager" containerID="cri-o://61c02f6b370efe33bb6d612925773ad663e925c21b28750305aeb6e67f3a6942" gracePeriod=30 Jan 31 07:39:37 crc kubenswrapper[4826]: I0131 07:39:37.468874 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-869ff56f57-q59fv"] Jan 31 07:39:37 crc kubenswrapper[4826]: I0131 07:39:37.469763 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-869ff56f57-q59fv" podUID="9403e838-1048-4b1f-8e19-c26ac04c2d0f" containerName="route-controller-manager" containerID="cri-o://b8f4c5b13e34e98bc4b2846b7dd2189097cda48b6259d7d2475d880afe4dd00c" gracePeriod=30 Jan 31 07:39:37 crc kubenswrapper[4826]: I0131 07:39:37.875810 4826 patch_prober.go:28] interesting pod/route-controller-manager-869ff56f57-q59fv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" start-of-body= Jan 31 07:39:37 crc kubenswrapper[4826]: I0131 07:39:37.875916 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-869ff56f57-q59fv" podUID="9403e838-1048-4b1f-8e19-c26ac04c2d0f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" Jan 31 07:39:38 crc kubenswrapper[4826]: I0131 07:39:38.219388 4826 generic.go:334] "Generic (PLEG): container finished" podID="9403e838-1048-4b1f-8e19-c26ac04c2d0f" containerID="b8f4c5b13e34e98bc4b2846b7dd2189097cda48b6259d7d2475d880afe4dd00c" exitCode=0 Jan 31 07:39:38 crc kubenswrapper[4826]: I0131 07:39:38.219820 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-869ff56f57-q59fv" event={"ID":"9403e838-1048-4b1f-8e19-c26ac04c2d0f","Type":"ContainerDied","Data":"b8f4c5b13e34e98bc4b2846b7dd2189097cda48b6259d7d2475d880afe4dd00c"} Jan 31 07:39:38 crc kubenswrapper[4826]: I0131 07:39:38.223395 4826 generic.go:334] "Generic (PLEG): container finished" podID="add3c875-4992-422d-a16c-55e6863e38ad" containerID="61c02f6b370efe33bb6d612925773ad663e925c21b28750305aeb6e67f3a6942" exitCode=0 Jan 31 07:39:38 crc kubenswrapper[4826]: I0131 07:39:38.223482 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6999fbfcd8-hj5bn" event={"ID":"add3c875-4992-422d-a16c-55e6863e38ad","Type":"ContainerDied","Data":"61c02f6b370efe33bb6d612925773ad663e925c21b28750305aeb6e67f3a6942"} Jan 31 07:39:38 crc kubenswrapper[4826]: I0131 07:39:38.881709 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-869ff56f57-q59fv" Jan 31 07:39:38 crc kubenswrapper[4826]: I0131 07:39:38.888706 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6999fbfcd8-hj5bn" Jan 31 07:39:38 crc kubenswrapper[4826]: I0131 07:39:38.920793 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c4dd4d5c7-486bk"] Jan 31 07:39:38 crc kubenswrapper[4826]: E0131 07:39:38.921145 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9403e838-1048-4b1f-8e19-c26ac04c2d0f" containerName="route-controller-manager" Jan 31 07:39:38 crc kubenswrapper[4826]: I0131 07:39:38.921171 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="9403e838-1048-4b1f-8e19-c26ac04c2d0f" containerName="route-controller-manager" Jan 31 07:39:38 crc kubenswrapper[4826]: E0131 07:39:38.921219 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="add3c875-4992-422d-a16c-55e6863e38ad" containerName="controller-manager" Jan 31 07:39:38 crc kubenswrapper[4826]: I0131 07:39:38.921232 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="add3c875-4992-422d-a16c-55e6863e38ad" containerName="controller-manager" Jan 31 07:39:38 crc kubenswrapper[4826]: I0131 07:39:38.921394 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="add3c875-4992-422d-a16c-55e6863e38ad" containerName="controller-manager" Jan 31 07:39:38 crc kubenswrapper[4826]: I0131 07:39:38.921421 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="9403e838-1048-4b1f-8e19-c26ac04c2d0f" containerName="route-controller-manager" Jan 31 07:39:38 crc kubenswrapper[4826]: I0131 07:39:38.922268 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c4dd4d5c7-486bk" Jan 31 07:39:38 crc kubenswrapper[4826]: I0131 07:39:38.929098 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c4dd4d5c7-486bk"] Jan 31 07:39:38 crc kubenswrapper[4826]: I0131 07:39:38.933282 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/add3c875-4992-422d-a16c-55e6863e38ad-serving-cert\") pod \"add3c875-4992-422d-a16c-55e6863e38ad\" (UID: \"add3c875-4992-422d-a16c-55e6863e38ad\") " Jan 31 07:39:38 crc kubenswrapper[4826]: I0131 07:39:38.933342 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/add3c875-4992-422d-a16c-55e6863e38ad-proxy-ca-bundles\") pod \"add3c875-4992-422d-a16c-55e6863e38ad\" (UID: \"add3c875-4992-422d-a16c-55e6863e38ad\") " Jan 31 07:39:38 crc kubenswrapper[4826]: I0131 07:39:38.933386 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9403e838-1048-4b1f-8e19-c26ac04c2d0f-client-ca\") pod \"9403e838-1048-4b1f-8e19-c26ac04c2d0f\" (UID: \"9403e838-1048-4b1f-8e19-c26ac04c2d0f\") " Jan 31 07:39:38 crc kubenswrapper[4826]: I0131 07:39:38.933414 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9403e838-1048-4b1f-8e19-c26ac04c2d0f-serving-cert\") pod \"9403e838-1048-4b1f-8e19-c26ac04c2d0f\" (UID: \"9403e838-1048-4b1f-8e19-c26ac04c2d0f\") " Jan 31 07:39:38 crc kubenswrapper[4826]: I0131 07:39:38.933466 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9403e838-1048-4b1f-8e19-c26ac04c2d0f-config\") pod \"9403e838-1048-4b1f-8e19-c26ac04c2d0f\" (UID: \"9403e838-1048-4b1f-8e19-c26ac04c2d0f\") " Jan 31 07:39:38 crc kubenswrapper[4826]: I0131 07:39:38.933494 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnbjv\" (UniqueName: \"kubernetes.io/projected/add3c875-4992-422d-a16c-55e6863e38ad-kube-api-access-fnbjv\") pod \"add3c875-4992-422d-a16c-55e6863e38ad\" (UID: \"add3c875-4992-422d-a16c-55e6863e38ad\") " Jan 31 07:39:38 crc kubenswrapper[4826]: I0131 07:39:38.933524 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/add3c875-4992-422d-a16c-55e6863e38ad-config\") pod \"add3c875-4992-422d-a16c-55e6863e38ad\" (UID: \"add3c875-4992-422d-a16c-55e6863e38ad\") " Jan 31 07:39:38 crc kubenswrapper[4826]: I0131 07:39:38.933552 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/add3c875-4992-422d-a16c-55e6863e38ad-client-ca\") pod \"add3c875-4992-422d-a16c-55e6863e38ad\" (UID: \"add3c875-4992-422d-a16c-55e6863e38ad\") " Jan 31 07:39:38 crc kubenswrapper[4826]: I0131 07:39:38.933581 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7d7z\" (UniqueName: \"kubernetes.io/projected/9403e838-1048-4b1f-8e19-c26ac04c2d0f-kube-api-access-q7d7z\") pod \"9403e838-1048-4b1f-8e19-c26ac04c2d0f\" (UID: \"9403e838-1048-4b1f-8e19-c26ac04c2d0f\") " Jan 31 07:39:38 crc kubenswrapper[4826]: I0131 07:39:38.933754 4826 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca3e5567-db46-4afd-86a9-79238004cb96-serving-cert\") pod \"route-controller-manager-6c4dd4d5c7-486bk\" (UID: \"ca3e5567-db46-4afd-86a9-79238004cb96\") " pod="openshift-route-controller-manager/route-controller-manager-6c4dd4d5c7-486bk" Jan 31 07:39:38 crc kubenswrapper[4826]: I0131 07:39:38.933786 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca3e5567-db46-4afd-86a9-79238004cb96-config\") pod \"route-controller-manager-6c4dd4d5c7-486bk\" (UID: \"ca3e5567-db46-4afd-86a9-79238004cb96\") " pod="openshift-route-controller-manager/route-controller-manager-6c4dd4d5c7-486bk" Jan 31 07:39:38 crc kubenswrapper[4826]: I0131 07:39:38.933811 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ca3e5567-db46-4afd-86a9-79238004cb96-client-ca\") pod \"route-controller-manager-6c4dd4d5c7-486bk\" (UID: \"ca3e5567-db46-4afd-86a9-79238004cb96\") " pod="openshift-route-controller-manager/route-controller-manager-6c4dd4d5c7-486bk" Jan 31 07:39:38 crc kubenswrapper[4826]: I0131 07:39:38.933843 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcffc\" (UniqueName: \"kubernetes.io/projected/ca3e5567-db46-4afd-86a9-79238004cb96-kube-api-access-gcffc\") pod \"route-controller-manager-6c4dd4d5c7-486bk\" (UID: \"ca3e5567-db46-4afd-86a9-79238004cb96\") " pod="openshift-route-controller-manager/route-controller-manager-6c4dd4d5c7-486bk" Jan 31 07:39:38 crc kubenswrapper[4826]: I0131 07:39:38.934452 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/add3c875-4992-422d-a16c-55e6863e38ad-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "add3c875-4992-422d-a16c-55e6863e38ad" (UID: "add3c875-4992-422d-a16c-55e6863e38ad"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:39:38 crc kubenswrapper[4826]: I0131 07:39:38.935040 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9403e838-1048-4b1f-8e19-c26ac04c2d0f-client-ca" (OuterVolumeSpecName: "client-ca") pod "9403e838-1048-4b1f-8e19-c26ac04c2d0f" (UID: "9403e838-1048-4b1f-8e19-c26ac04c2d0f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:39:38 crc kubenswrapper[4826]: I0131 07:39:38.938204 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9403e838-1048-4b1f-8e19-c26ac04c2d0f-config" (OuterVolumeSpecName: "config") pod "9403e838-1048-4b1f-8e19-c26ac04c2d0f" (UID: "9403e838-1048-4b1f-8e19-c26ac04c2d0f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:39:38 crc kubenswrapper[4826]: I0131 07:39:38.938401 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/add3c875-4992-422d-a16c-55e6863e38ad-client-ca" (OuterVolumeSpecName: "client-ca") pod "add3c875-4992-422d-a16c-55e6863e38ad" (UID: "add3c875-4992-422d-a16c-55e6863e38ad"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:39:38 crc kubenswrapper[4826]: I0131 07:39:38.939119 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/add3c875-4992-422d-a16c-55e6863e38ad-config" (OuterVolumeSpecName: "config") pod "add3c875-4992-422d-a16c-55e6863e38ad" (UID: "add3c875-4992-422d-a16c-55e6863e38ad"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:39:38 crc kubenswrapper[4826]: I0131 07:39:38.941089 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/add3c875-4992-422d-a16c-55e6863e38ad-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "add3c875-4992-422d-a16c-55e6863e38ad" (UID: "add3c875-4992-422d-a16c-55e6863e38ad"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:39:38 crc kubenswrapper[4826]: I0131 07:39:38.943746 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9403e838-1048-4b1f-8e19-c26ac04c2d0f-kube-api-access-q7d7z" (OuterVolumeSpecName: "kube-api-access-q7d7z") pod "9403e838-1048-4b1f-8e19-c26ac04c2d0f" (UID: "9403e838-1048-4b1f-8e19-c26ac04c2d0f"). InnerVolumeSpecName "kube-api-access-q7d7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:39:38 crc kubenswrapper[4826]: I0131 07:39:38.944329 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9403e838-1048-4b1f-8e19-c26ac04c2d0f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9403e838-1048-4b1f-8e19-c26ac04c2d0f" (UID: "9403e838-1048-4b1f-8e19-c26ac04c2d0f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:39:38 crc kubenswrapper[4826]: I0131 07:39:38.960867 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/add3c875-4992-422d-a16c-55e6863e38ad-kube-api-access-fnbjv" (OuterVolumeSpecName: "kube-api-access-fnbjv") pod "add3c875-4992-422d-a16c-55e6863e38ad" (UID: "add3c875-4992-422d-a16c-55e6863e38ad"). InnerVolumeSpecName "kube-api-access-fnbjv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:39:39 crc kubenswrapper[4826]: I0131 07:39:39.035120 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca3e5567-db46-4afd-86a9-79238004cb96-serving-cert\") pod \"route-controller-manager-6c4dd4d5c7-486bk\" (UID: \"ca3e5567-db46-4afd-86a9-79238004cb96\") " pod="openshift-route-controller-manager/route-controller-manager-6c4dd4d5c7-486bk" Jan 31 07:39:39 crc kubenswrapper[4826]: I0131 07:39:39.035177 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca3e5567-db46-4afd-86a9-79238004cb96-config\") pod \"route-controller-manager-6c4dd4d5c7-486bk\" (UID: \"ca3e5567-db46-4afd-86a9-79238004cb96\") " pod="openshift-route-controller-manager/route-controller-manager-6c4dd4d5c7-486bk" Jan 31 07:39:39 crc kubenswrapper[4826]: I0131 07:39:39.035206 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ca3e5567-db46-4afd-86a9-79238004cb96-client-ca\") pod \"route-controller-manager-6c4dd4d5c7-486bk\" (UID: \"ca3e5567-db46-4afd-86a9-79238004cb96\") " pod="openshift-route-controller-manager/route-controller-manager-6c4dd4d5c7-486bk" Jan 31 07:39:39 crc kubenswrapper[4826]: I0131 07:39:39.035241 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcffc\" (UniqueName: \"kubernetes.io/projected/ca3e5567-db46-4afd-86a9-79238004cb96-kube-api-access-gcffc\") pod \"route-controller-manager-6c4dd4d5c7-486bk\" (UID: \"ca3e5567-db46-4afd-86a9-79238004cb96\") " pod="openshift-route-controller-manager/route-controller-manager-6c4dd4d5c7-486bk" Jan 31 07:39:39 crc kubenswrapper[4826]: I0131 07:39:39.035294 4826 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/add3c875-4992-422d-a16c-55e6863e38ad-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:39 crc kubenswrapper[4826]: I0131 07:39:39.035310 4826 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/add3c875-4992-422d-a16c-55e6863e38ad-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:39 crc kubenswrapper[4826]: I0131 07:39:39.035325 4826 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9403e838-1048-4b1f-8e19-c26ac04c2d0f-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:39 crc kubenswrapper[4826]: I0131 07:39:39.035336 4826 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9403e838-1048-4b1f-8e19-c26ac04c2d0f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:39 crc kubenswrapper[4826]: I0131 07:39:39.035349 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9403e838-1048-4b1f-8e19-c26ac04c2d0f-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:39 crc kubenswrapper[4826]: I0131 07:39:39.035362 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnbjv\" (UniqueName: \"kubernetes.io/projected/add3c875-4992-422d-a16c-55e6863e38ad-kube-api-access-fnbjv\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:39 crc kubenswrapper[4826]: I0131 07:39:39.035375 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/add3c875-4992-422d-a16c-55e6863e38ad-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:39 crc kubenswrapper[4826]: I0131 07:39:39.035386 4826 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/add3c875-4992-422d-a16c-55e6863e38ad-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:39 crc kubenswrapper[4826]: I0131 07:39:39.035399 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7d7z\" (UniqueName: \"kubernetes.io/projected/9403e838-1048-4b1f-8e19-c26ac04c2d0f-kube-api-access-q7d7z\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:39 crc kubenswrapper[4826]: I0131 07:39:39.037236 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ca3e5567-db46-4afd-86a9-79238004cb96-client-ca\") pod \"route-controller-manager-6c4dd4d5c7-486bk\" (UID: \"ca3e5567-db46-4afd-86a9-79238004cb96\") " pod="openshift-route-controller-manager/route-controller-manager-6c4dd4d5c7-486bk" Jan 31 07:39:39 crc kubenswrapper[4826]: I0131 07:39:39.038569 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca3e5567-db46-4afd-86a9-79238004cb96-config\") pod \"route-controller-manager-6c4dd4d5c7-486bk\" (UID: \"ca3e5567-db46-4afd-86a9-79238004cb96\") " pod="openshift-route-controller-manager/route-controller-manager-6c4dd4d5c7-486bk" Jan 31 07:39:39 crc kubenswrapper[4826]: I0131 07:39:39.040193 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca3e5567-db46-4afd-86a9-79238004cb96-serving-cert\") pod \"route-controller-manager-6c4dd4d5c7-486bk\" (UID: \"ca3e5567-db46-4afd-86a9-79238004cb96\") " pod="openshift-route-controller-manager/route-controller-manager-6c4dd4d5c7-486bk" Jan 31 07:39:39 crc kubenswrapper[4826]: I0131 07:39:39.051874 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcffc\" (UniqueName: \"kubernetes.io/projected/ca3e5567-db46-4afd-86a9-79238004cb96-kube-api-access-gcffc\") pod \"route-controller-manager-6c4dd4d5c7-486bk\" (UID: \"ca3e5567-db46-4afd-86a9-79238004cb96\") " pod="openshift-route-controller-manager/route-controller-manager-6c4dd4d5c7-486bk" Jan 31 07:39:39 crc kubenswrapper[4826]: I0131 07:39:39.233928 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-869ff56f57-q59fv" Jan 31 07:39:39 crc kubenswrapper[4826]: I0131 07:39:39.235012 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-869ff56f57-q59fv" event={"ID":"9403e838-1048-4b1f-8e19-c26ac04c2d0f","Type":"ContainerDied","Data":"8f2e827fc5bbface22739b052ec183038370e952c9fe4d7379610fba73bdb87a"} Jan 31 07:39:39 crc kubenswrapper[4826]: I0131 07:39:39.235091 4826 scope.go:117] "RemoveContainer" containerID="b8f4c5b13e34e98bc4b2846b7dd2189097cda48b6259d7d2475d880afe4dd00c" Jan 31 07:39:39 crc kubenswrapper[4826]: I0131 07:39:39.237302 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6999fbfcd8-hj5bn" event={"ID":"add3c875-4992-422d-a16c-55e6863e38ad","Type":"ContainerDied","Data":"062ae6fe99164f247280a252ae6fe3438592a4b9aaaa42bdbf802da7cb80dc31"} Jan 31 07:39:39 crc kubenswrapper[4826]: I0131 07:39:39.237446 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6999fbfcd8-hj5bn" Jan 31 07:39:39 crc kubenswrapper[4826]: I0131 07:39:39.283284 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-869ff56f57-q59fv"] Jan 31 07:39:39 crc kubenswrapper[4826]: I0131 07:39:39.287824 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-869ff56f57-q59fv"] Jan 31 07:39:39 crc kubenswrapper[4826]: I0131 07:39:39.297306 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6999fbfcd8-hj5bn"] Jan 31 07:39:39 crc kubenswrapper[4826]: I0131 07:39:39.301723 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6999fbfcd8-hj5bn"] Jan 31 07:39:39 crc kubenswrapper[4826]: I0131 07:39:39.302219 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c4dd4d5c7-486bk" Jan 31 07:39:40 crc kubenswrapper[4826]: I0131 07:39:40.090851 4826 scope.go:117] "RemoveContainer" containerID="61c02f6b370efe33bb6d612925773ad663e925c21b28750305aeb6e67f3a6942" Jan 31 07:39:40 crc kubenswrapper[4826]: I0131 07:39:40.246409 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2pp4" event={"ID":"04028e85-fcfa-4463-8279-00c5018bde40","Type":"ContainerStarted","Data":"98524f8f4b6b364e21487196884b197caae642a0cdadf6b9d4b46577db823f6b"} Jan 31 07:39:40 crc kubenswrapper[4826]: I0131 07:39:40.270718 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-w2pp4" podStartSLOduration=3.900707594 podStartE2EDuration="1m3.270694504s" podCreationTimestamp="2026-01-31 07:38:37 +0000 UTC" firstStartedPulling="2026-01-31 07:38:39.356748877 +0000 UTC m=+151.210635246" lastFinishedPulling="2026-01-31 07:39:38.726735797 +0000 UTC m=+210.580622156" observedRunningTime="2026-01-31 07:39:40.270509938 +0000 UTC m=+212.124396337" watchObservedRunningTime="2026-01-31 07:39:40.270694504 +0000 UTC m=+212.124580873" Jan 31 07:39:40 crc kubenswrapper[4826]: I0131 07:39:40.822575 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9403e838-1048-4b1f-8e19-c26ac04c2d0f" path="/var/lib/kubelet/pods/9403e838-1048-4b1f-8e19-c26ac04c2d0f/volumes" Jan 31 07:39:40 crc kubenswrapper[4826]: I0131 07:39:40.823322 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="add3c875-4992-422d-a16c-55e6863e38ad" path="/var/lib/kubelet/pods/add3c875-4992-422d-a16c-55e6863e38ad/volumes" Jan 31 07:39:41 crc kubenswrapper[4826]: I0131 07:39:41.740431 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-774b586d9d-gnl2m"] Jan 31 07:39:41 crc kubenswrapper[4826]: I0131 07:39:41.741571 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-774b586d9d-gnl2m" Jan 31 07:39:41 crc kubenswrapper[4826]: I0131 07:39:41.745712 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 07:39:41 crc kubenswrapper[4826]: I0131 07:39:41.745737 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 07:39:41 crc kubenswrapper[4826]: I0131 07:39:41.747583 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 07:39:41 crc kubenswrapper[4826]: I0131 07:39:41.748667 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 07:39:41 crc kubenswrapper[4826]: I0131 07:39:41.749111 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 07:39:41 crc kubenswrapper[4826]: I0131 07:39:41.749779 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 07:39:41 crc kubenswrapper[4826]: I0131 07:39:41.759364 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 07:39:41 crc kubenswrapper[4826]: I0131 07:39:41.762711 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-774b586d9d-gnl2m"] Jan 31 07:39:41 crc kubenswrapper[4826]: I0131 07:39:41.780213 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42f49a1e-b32a-4e35-8b07-5bb7dd439d92-config\") pod \"controller-manager-774b586d9d-gnl2m\" (UID: \"42f49a1e-b32a-4e35-8b07-5bb7dd439d92\") " pod="openshift-controller-manager/controller-manager-774b586d9d-gnl2m" Jan 31 07:39:41 crc kubenswrapper[4826]: I0131 07:39:41.780380 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/42f49a1e-b32a-4e35-8b07-5bb7dd439d92-client-ca\") pod \"controller-manager-774b586d9d-gnl2m\" (UID: \"42f49a1e-b32a-4e35-8b07-5bb7dd439d92\") " pod="openshift-controller-manager/controller-manager-774b586d9d-gnl2m" Jan 31 07:39:41 crc kubenswrapper[4826]: I0131 07:39:41.780433 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t8r8\" (UniqueName: \"kubernetes.io/projected/42f49a1e-b32a-4e35-8b07-5bb7dd439d92-kube-api-access-9t8r8\") pod \"controller-manager-774b586d9d-gnl2m\" (UID: \"42f49a1e-b32a-4e35-8b07-5bb7dd439d92\") " pod="openshift-controller-manager/controller-manager-774b586d9d-gnl2m" Jan 31 07:39:41 crc kubenswrapper[4826]: I0131 07:39:41.780565 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42f49a1e-b32a-4e35-8b07-5bb7dd439d92-serving-cert\") pod \"controller-manager-774b586d9d-gnl2m\" (UID: \"42f49a1e-b32a-4e35-8b07-5bb7dd439d92\") " pod="openshift-controller-manager/controller-manager-774b586d9d-gnl2m" Jan 31 07:39:41 crc kubenswrapper[4826]: I0131 07:39:41.780640 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/42f49a1e-b32a-4e35-8b07-5bb7dd439d92-proxy-ca-bundles\") pod \"controller-manager-774b586d9d-gnl2m\" (UID: \"42f49a1e-b32a-4e35-8b07-5bb7dd439d92\") " pod="openshift-controller-manager/controller-manager-774b586d9d-gnl2m" Jan 31 07:39:41 crc kubenswrapper[4826]: I0131 07:39:41.882464 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42f49a1e-b32a-4e35-8b07-5bb7dd439d92-config\") pod \"controller-manager-774b586d9d-gnl2m\" (UID: \"42f49a1e-b32a-4e35-8b07-5bb7dd439d92\") " pod="openshift-controller-manager/controller-manager-774b586d9d-gnl2m" Jan 31 07:39:41 crc kubenswrapper[4826]: I0131 07:39:41.882996 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/42f49a1e-b32a-4e35-8b07-5bb7dd439d92-client-ca\") pod \"controller-manager-774b586d9d-gnl2m\" (UID: \"42f49a1e-b32a-4e35-8b07-5bb7dd439d92\") " pod="openshift-controller-manager/controller-manager-774b586d9d-gnl2m" Jan 31 07:39:41 crc kubenswrapper[4826]: I0131 07:39:41.883236 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t8r8\" (UniqueName: \"kubernetes.io/projected/42f49a1e-b32a-4e35-8b07-5bb7dd439d92-kube-api-access-9t8r8\") pod \"controller-manager-774b586d9d-gnl2m\" (UID: \"42f49a1e-b32a-4e35-8b07-5bb7dd439d92\") " pod="openshift-controller-manager/controller-manager-774b586d9d-gnl2m" Jan 31 07:39:41 crc kubenswrapper[4826]: I0131 07:39:41.883532 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42f49a1e-b32a-4e35-8b07-5bb7dd439d92-serving-cert\") pod \"controller-manager-774b586d9d-gnl2m\" (UID: \"42f49a1e-b32a-4e35-8b07-5bb7dd439d92\") " pod="openshift-controller-manager/controller-manager-774b586d9d-gnl2m" Jan 31 07:39:41 crc kubenswrapper[4826]: I0131 07:39:41.883855 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/42f49a1e-b32a-4e35-8b07-5bb7dd439d92-proxy-ca-bundles\") pod \"controller-manager-774b586d9d-gnl2m\" (UID: \"42f49a1e-b32a-4e35-8b07-5bb7dd439d92\") " pod="openshift-controller-manager/controller-manager-774b586d9d-gnl2m" Jan 31 07:39:41 crc kubenswrapper[4826]: I0131 07:39:41.884050 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/42f49a1e-b32a-4e35-8b07-5bb7dd439d92-client-ca\") pod \"controller-manager-774b586d9d-gnl2m\" (UID: \"42f49a1e-b32a-4e35-8b07-5bb7dd439d92\") " pod="openshift-controller-manager/controller-manager-774b586d9d-gnl2m" Jan 31 07:39:41 crc kubenswrapper[4826]: I0131 07:39:41.884243 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42f49a1e-b32a-4e35-8b07-5bb7dd439d92-config\") pod \"controller-manager-774b586d9d-gnl2m\" (UID: \"42f49a1e-b32a-4e35-8b07-5bb7dd439d92\") " pod="openshift-controller-manager/controller-manager-774b586d9d-gnl2m" Jan 31 07:39:41 crc kubenswrapper[4826]: I0131 07:39:41.884781 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/42f49a1e-b32a-4e35-8b07-5bb7dd439d92-proxy-ca-bundles\") pod \"controller-manager-774b586d9d-gnl2m\" (UID: \"42f49a1e-b32a-4e35-8b07-5bb7dd439d92\") " 
pod="openshift-controller-manager/controller-manager-774b586d9d-gnl2m" Jan 31 07:39:41 crc kubenswrapper[4826]: I0131 07:39:41.891549 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42f49a1e-b32a-4e35-8b07-5bb7dd439d92-serving-cert\") pod \"controller-manager-774b586d9d-gnl2m\" (UID: \"42f49a1e-b32a-4e35-8b07-5bb7dd439d92\") " pod="openshift-controller-manager/controller-manager-774b586d9d-gnl2m" Jan 31 07:39:41 crc kubenswrapper[4826]: I0131 07:39:41.901454 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t8r8\" (UniqueName: \"kubernetes.io/projected/42f49a1e-b32a-4e35-8b07-5bb7dd439d92-kube-api-access-9t8r8\") pod \"controller-manager-774b586d9d-gnl2m\" (UID: \"42f49a1e-b32a-4e35-8b07-5bb7dd439d92\") " pod="openshift-controller-manager/controller-manager-774b586d9d-gnl2m" Jan 31 07:39:42 crc kubenswrapper[4826]: I0131 07:39:42.065846 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-774b586d9d-gnl2m" Jan 31 07:39:45 crc kubenswrapper[4826]: I0131 07:39:45.304345 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c4dd4d5c7-486bk"] Jan 31 07:39:45 crc kubenswrapper[4826]: W0131 07:39:45.313411 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca3e5567_db46_4afd_86a9_79238004cb96.slice/crio-6f261fcdf5c8a1b8d355e94128ca1c170a911c7a341019500419454fd18c2695 WatchSource:0}: Error finding container 6f261fcdf5c8a1b8d355e94128ca1c170a911c7a341019500419454fd18c2695: Status 404 returned error can't find the container with id 6f261fcdf5c8a1b8d355e94128ca1c170a911c7a341019500419454fd18c2695 Jan 31 07:39:45 crc kubenswrapper[4826]: I0131 07:39:45.420239 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-774b586d9d-gnl2m"] Jan 31 07:39:46 crc kubenswrapper[4826]: I0131 07:39:46.308123 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-774b586d9d-gnl2m" event={"ID":"42f49a1e-b32a-4e35-8b07-5bb7dd439d92","Type":"ContainerStarted","Data":"246b3bc0c74b8102a8fc2ace72d9914ea7a63db0993dfef3445169230b92d862"} Jan 31 07:39:46 crc kubenswrapper[4826]: I0131 07:39:46.308673 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c4dd4d5c7-486bk" event={"ID":"ca3e5567-db46-4afd-86a9-79238004cb96","Type":"ContainerStarted","Data":"6f261fcdf5c8a1b8d355e94128ca1c170a911c7a341019500419454fd18c2695"} Jan 31 07:39:47 crc kubenswrapper[4826]: I0131 07:39:47.316733 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-czvwd" event={"ID":"0108e13b-6622-4b3c-a0b3-7e91572001aa","Type":"ContainerStarted","Data":"61acab3cda1649571b339dc94617931ea3ee4789befb046086db14a755264214"} Jan 31 07:39:47 crc kubenswrapper[4826]: I0131 07:39:47.458687 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-w2pp4" Jan 31 07:39:47 crc kubenswrapper[4826]: I0131 07:39:47.458786 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-w2pp4" Jan 31 07:39:47 crc kubenswrapper[4826]: I0131 07:39:47.507480 4826 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/community-operators-w2pp4" Jan 31 07:39:48 crc kubenswrapper[4826]: I0131 07:39:48.325612 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fhkwh" event={"ID":"2f3a7050-24a4-4f99-ac44-c822d68d5ba5","Type":"ContainerStarted","Data":"66fa5766f2b5f96e35444341e343cfd7074b57119e39129c8a5e487837d19045"} Jan 31 07:39:48 crc kubenswrapper[4826]: I0131 07:39:48.327340 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-774b586d9d-gnl2m" event={"ID":"42f49a1e-b32a-4e35-8b07-5bb7dd439d92","Type":"ContainerStarted","Data":"471c0642e44125ea2e8e8fd85b2f2118e379b3fc634fb2f3c75d7b84fa2f8fe1"} Jan 31 07:39:48 crc kubenswrapper[4826]: I0131 07:39:48.329571 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2zbk2" event={"ID":"ad75cbef-01f1-46ef-bfe2-d1e864e2efe1","Type":"ContainerStarted","Data":"d5b8d7e01735bd3d0f29ebd3f4f385d8cf91bab86cf5a464750912af78458471"} Jan 31 07:39:48 crc kubenswrapper[4826]: I0131 07:39:48.332263 4826 generic.go:334] "Generic (PLEG): container finished" podID="0108e13b-6622-4b3c-a0b3-7e91572001aa" containerID="61acab3cda1649571b339dc94617931ea3ee4789befb046086db14a755264214" exitCode=0 Jan 31 07:39:48 crc kubenswrapper[4826]: I0131 07:39:48.332350 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-czvwd" event={"ID":"0108e13b-6622-4b3c-a0b3-7e91572001aa","Type":"ContainerDied","Data":"61acab3cda1649571b339dc94617931ea3ee4789befb046086db14a755264214"} Jan 31 07:39:48 crc kubenswrapper[4826]: I0131 07:39:48.334473 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-scztd" event={"ID":"f7510fe8-16e2-4641-8d16-18b8a1387106","Type":"ContainerStarted","Data":"1e428d15b68c4ce9285e7cfb2d559df32b1fc49584a460d049500e46fa47da0c"} Jan 31 07:39:48 crc kubenswrapper[4826]: I0131 07:39:48.335887 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c4dd4d5c7-486bk" event={"ID":"ca3e5567-db46-4afd-86a9-79238004cb96","Type":"ContainerStarted","Data":"88350c9302c4c197e219539d7e06f7dcd330b2e81c129501b7878d6333cfd6e3"} Jan 31 07:39:48 crc kubenswrapper[4826]: I0131 07:39:48.338692 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m8854" event={"ID":"f7624cfd-9296-45c5-86a9-2344eb6f976e","Type":"ContainerStarted","Data":"45c996872020f991502aa371177b68f4812d510e8c45aee8933058f225e72720"} Jan 31 07:39:48 crc kubenswrapper[4826]: I0131 07:39:48.340752 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-snchp" event={"ID":"3b28978e-d7a9-41b2-998a-e4a3cd62e236","Type":"ContainerStarted","Data":"f39d1300b6a378904282e1e885ce903627e4774fc14b06bbf17d1975ebce51c6"} Jan 31 07:39:48 crc kubenswrapper[4826]: I0131 07:39:48.381283 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-w2pp4" Jan 31 07:39:49 crc kubenswrapper[4826]: I0131 07:39:49.346033 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-774b586d9d-gnl2m" Jan 31 07:39:49 crc kubenswrapper[4826]: I0131 07:39:49.354370 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-774b586d9d-gnl2m" Jan 31 07:39:49 crc kubenswrapper[4826]: I0131 07:39:49.368837 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m8854" podStartSLOduration=7.237877622 podStartE2EDuration="1m10.368813614s" podCreationTimestamp="2026-01-31 07:38:39 +0000 UTC" firstStartedPulling="2026-01-31 07:38:41.485554852 +0000 UTC m=+153.339441211" lastFinishedPulling="2026-01-31 07:39:44.616490834 +0000 UTC m=+216.470377203" observedRunningTime="2026-01-31 07:39:49.366107725 +0000 UTC m=+221.219994084" watchObservedRunningTime="2026-01-31 07:39:49.368813614 +0000 UTC m=+221.222699973" Jan 31 07:39:49 crc kubenswrapper[4826]: I0131 07:39:49.400452 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-774b586d9d-gnl2m" podStartSLOduration=12.400435043 podStartE2EDuration="12.400435043s" podCreationTimestamp="2026-01-31 07:39:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:39:49.397983842 +0000 UTC m=+221.251870211" watchObservedRunningTime="2026-01-31 07:39:49.400435043 +0000 UTC m=+221.254321402" Jan 31 07:39:49 crc kubenswrapper[4826]: I0131 07:39:49.466960 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6c4dd4d5c7-486bk" podStartSLOduration=12.466942587 podStartE2EDuration="12.466942587s" podCreationTimestamp="2026-01-31 07:39:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:39:49.465086113 +0000 UTC m=+221.318972472" watchObservedRunningTime="2026-01-31 07:39:49.466942587 +0000 UTC m=+221.320828946" Jan 31 07:39:49 crc kubenswrapper[4826]: I0131 07:39:49.494487 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-snchp" podStartSLOduration=6.959295584 podStartE2EDuration="1m12.494462147s" podCreationTimestamp="2026-01-31 07:38:37 +0000 UTC" firstStartedPulling="2026-01-31 07:38:39.353756181 +0000 UTC m=+151.207642550" lastFinishedPulling="2026-01-31 07:39:44.888922754 +0000 UTC m=+216.742809113" observedRunningTime="2026-01-31 07:39:49.490928724 +0000 UTC m=+221.344815093" watchObservedRunningTime="2026-01-31 07:39:49.494462147 +0000 UTC m=+221.348348506" Jan 31 07:39:49 crc kubenswrapper[4826]: I0131 07:39:49.544978 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-scztd" podStartSLOduration=8.040740906 podStartE2EDuration="1m9.544941884s" podCreationTimestamp="2026-01-31 07:38:40 +0000 UTC" firstStartedPulling="2026-01-31 07:38:42.595459911 +0000 UTC m=+154.449346270" lastFinishedPulling="2026-01-31 07:39:44.099660889 +0000 UTC m=+215.953547248" observedRunningTime="2026-01-31 07:39:49.542200335 +0000 UTC m=+221.396086694" watchObservedRunningTime="2026-01-31 07:39:49.544941884 +0000 UTC m=+221.398828243" Jan 31 07:39:49 crc kubenswrapper[4826]: I0131 07:39:49.577428 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2zbk2" podStartSLOduration=12.862025914 podStartE2EDuration="1m12.577401888s" podCreationTimestamp="2026-01-31 07:38:37 +0000 UTC" firstStartedPulling="2026-01-31 07:38:39.376321933 +0000 UTC 
m=+151.230208282" lastFinishedPulling="2026-01-31 07:39:39.091697867 +0000 UTC m=+210.945584256" observedRunningTime="2026-01-31 07:39:49.573884896 +0000 UTC m=+221.427771255" watchObservedRunningTime="2026-01-31 07:39:49.577401888 +0000 UTC m=+221.431288247" Jan 31 07:39:49 crc kubenswrapper[4826]: I0131 07:39:49.597756 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fhkwh" podStartSLOduration=9.13744812 podStartE2EDuration="1m9.597722989s" podCreationTimestamp="2026-01-31 07:38:40 +0000 UTC" firstStartedPulling="2026-01-31 07:38:43.639174374 +0000 UTC m=+155.493060733" lastFinishedPulling="2026-01-31 07:39:44.099449203 +0000 UTC m=+215.953335602" observedRunningTime="2026-01-31 07:39:49.595813133 +0000 UTC m=+221.449699492" watchObservedRunningTime="2026-01-31 07:39:49.597722989 +0000 UTC m=+221.451609348" Jan 31 07:39:49 crc kubenswrapper[4826]: I0131 07:39:49.948880 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-cgwgw"] Jan 31 07:39:50 crc kubenswrapper[4826]: I0131 07:39:50.018506 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m8854" Jan 31 07:39:50 crc kubenswrapper[4826]: I0131 07:39:50.018678 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m8854" Jan 31 07:39:50 crc kubenswrapper[4826]: I0131 07:39:50.089844 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m8854" Jan 31 07:39:50 crc kubenswrapper[4826]: I0131 07:39:50.620743 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-scztd" Jan 31 07:39:50 crc kubenswrapper[4826]: I0131 07:39:50.620800 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-scztd" Jan 31 07:39:51 crc kubenswrapper[4826]: I0131 07:39:51.059766 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fhkwh" Jan 31 07:39:51 crc kubenswrapper[4826]: I0131 07:39:51.060249 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fhkwh" Jan 31 07:39:51 crc kubenswrapper[4826]: I0131 07:39:51.667206 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-scztd" podUID="f7510fe8-16e2-4641-8d16-18b8a1387106" containerName="registry-server" probeResult="failure" output=< Jan 31 07:39:51 crc kubenswrapper[4826]: timeout: failed to connect service ":50051" within 1s Jan 31 07:39:51 crc kubenswrapper[4826]: > Jan 31 07:39:52 crc kubenswrapper[4826]: I0131 07:39:52.111011 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fhkwh" podUID="2f3a7050-24a4-4f99-ac44-c822d68d5ba5" containerName="registry-server" probeResult="failure" output=< Jan 31 07:39:52 crc kubenswrapper[4826]: timeout: failed to connect service ":50051" within 1s Jan 31 07:39:52 crc kubenswrapper[4826]: > Jan 31 07:39:53 crc kubenswrapper[4826]: I0131 07:39:53.367656 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-czvwd" event={"ID":"0108e13b-6622-4b3c-a0b3-7e91572001aa","Type":"ContainerStarted","Data":"c5b9d5752fdcf3c4a36972cca36ffb880b949a074977be9a2783f5290aebeeea"} Jan 31 
07:39:53 crc kubenswrapper[4826]: I0131 07:39:53.394492 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-czvwd" podStartSLOduration=3.269654814 podStartE2EDuration="1m16.394466248s" podCreationTimestamp="2026-01-31 07:38:37 +0000 UTC" firstStartedPulling="2026-01-31 07:38:39.386327843 +0000 UTC m=+151.240214202" lastFinishedPulling="2026-01-31 07:39:52.511139277 +0000 UTC m=+224.365025636" observedRunningTime="2026-01-31 07:39:53.390380059 +0000 UTC m=+225.244266418" watchObservedRunningTime="2026-01-31 07:39:53.394466248 +0000 UTC m=+225.248352607" Jan 31 07:39:53 crc kubenswrapper[4826]: I0131 07:39:53.709620 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2zbk2"] Jan 31 07:39:53 crc kubenswrapper[4826]: I0131 07:39:53.709892 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2zbk2" podUID="ad75cbef-01f1-46ef-bfe2-d1e864e2efe1" containerName="registry-server" containerID="cri-o://d5b8d7e01735bd3d0f29ebd3f4f385d8cf91bab86cf5a464750912af78458471" gracePeriod=30 Jan 31 07:39:53 crc kubenswrapper[4826]: I0131 07:39:53.716726 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-snchp"] Jan 31 07:39:53 crc kubenswrapper[4826]: I0131 07:39:53.716961 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-snchp" podUID="3b28978e-d7a9-41b2-998a-e4a3cd62e236" containerName="registry-server" containerID="cri-o://f39d1300b6a378904282e1e885ce903627e4774fc14b06bbf17d1975ebce51c6" gracePeriod=30 Jan 31 07:39:53 crc kubenswrapper[4826]: I0131 07:39:53.725945 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-czvwd"] Jan 31 07:39:53 crc kubenswrapper[4826]: I0131 07:39:53.727799 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w2pp4"] Jan 31 07:39:53 crc kubenswrapper[4826]: I0131 07:39:53.728076 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-w2pp4" podUID="04028e85-fcfa-4463-8279-00c5018bde40" containerName="registry-server" containerID="cri-o://98524f8f4b6b364e21487196884b197caae642a0cdadf6b9d4b46577db823f6b" gracePeriod=30 Jan 31 07:39:53 crc kubenswrapper[4826]: I0131 07:39:53.739479 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qwd92"] Jan 31 07:39:53 crc kubenswrapper[4826]: I0131 07:39:53.739749 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-qwd92" podUID="2da6f17d-aeb0-4cc8-8f11-c99bea508129" containerName="marketplace-operator" containerID="cri-o://d68cc1c42639c30c4279d193abbadb55121e217532812268bd970883a9451e61" gracePeriod=30 Jan 31 07:39:53 crc kubenswrapper[4826]: I0131 07:39:53.753143 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m8854"] Jan 31 07:39:53 crc kubenswrapper[4826]: I0131 07:39:53.753364 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-m8854" podUID="f7624cfd-9296-45c5-86a9-2344eb6f976e" containerName="registry-server" containerID="cri-o://45c996872020f991502aa371177b68f4812d510e8c45aee8933058f225e72720" gracePeriod=30 Jan 
31 07:39:53 crc kubenswrapper[4826]: E0131 07:39:53.758283 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="45c996872020f991502aa371177b68f4812d510e8c45aee8933058f225e72720" cmd=["grpc_health_probe","-addr=:50051"] Jan 31 07:39:53 crc kubenswrapper[4826]: I0131 07:39:53.761378 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s5njx"] Jan 31 07:39:53 crc kubenswrapper[4826]: I0131 07:39:53.761650 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-s5njx" podUID="1fa7f5cc-ef29-4e91-9419-2d49284c4c98" containerName="registry-server" containerID="cri-o://a62b0f68e48181a7ccc6eceb5dcf274388984a9f63c0daeb07d9531f41ea37a3" gracePeriod=30 Jan 31 07:39:53 crc kubenswrapper[4826]: E0131 07:39:53.764742 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="45c996872020f991502aa371177b68f4812d510e8c45aee8933058f225e72720" cmd=["grpc_health_probe","-addr=:50051"] Jan 31 07:39:53 crc kubenswrapper[4826]: E0131 07:39:53.769666 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="45c996872020f991502aa371177b68f4812d510e8c45aee8933058f225e72720" cmd=["grpc_health_probe","-addr=:50051"] Jan 31 07:39:53 crc kubenswrapper[4826]: E0131 07:39:53.769741 4826 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-m8854" podUID="f7624cfd-9296-45c5-86a9-2344eb6f976e" containerName="registry-server" Jan 31 07:39:53 crc kubenswrapper[4826]: I0131 07:39:53.769837 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fhkwh"] Jan 31 07:39:53 crc kubenswrapper[4826]: I0131 07:39:53.770077 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fhkwh" podUID="2f3a7050-24a4-4f99-ac44-c822d68d5ba5" containerName="registry-server" containerID="cri-o://66fa5766f2b5f96e35444341e343cfd7074b57119e39129c8a5e487837d19045" gracePeriod=30 Jan 31 07:39:53 crc kubenswrapper[4826]: I0131 07:39:53.771957 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-g6562"] Jan 31 07:39:53 crc kubenswrapper[4826]: I0131 07:39:53.772650 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-g6562" Jan 31 07:39:53 crc kubenswrapper[4826]: I0131 07:39:53.779314 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-scztd"] Jan 31 07:39:53 crc kubenswrapper[4826]: I0131 07:39:53.779576 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-scztd" podUID="f7510fe8-16e2-4641-8d16-18b8a1387106" containerName="registry-server" containerID="cri-o://1e428d15b68c4ce9285e7cfb2d559df32b1fc49584a460d049500e46fa47da0c" gracePeriod=30 Jan 31 07:39:53 crc kubenswrapper[4826]: I0131 07:39:53.790408 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-g6562"] Jan 31 07:39:53 crc kubenswrapper[4826]: I0131 07:39:53.848232 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5mrl\" (UniqueName: \"kubernetes.io/projected/04b26ab3-b358-4fb0-b6ac-8043a19ce1a9-kube-api-access-b5mrl\") pod \"marketplace-operator-79b997595-g6562\" (UID: \"04b26ab3-b358-4fb0-b6ac-8043a19ce1a9\") " pod="openshift-marketplace/marketplace-operator-79b997595-g6562" Jan 31 07:39:53 crc kubenswrapper[4826]: I0131 07:39:53.848308 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04b26ab3-b358-4fb0-b6ac-8043a19ce1a9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-g6562\" (UID: \"04b26ab3-b358-4fb0-b6ac-8043a19ce1a9\") " pod="openshift-marketplace/marketplace-operator-79b997595-g6562" Jan 31 07:39:53 crc kubenswrapper[4826]: I0131 07:39:53.848341 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/04b26ab3-b358-4fb0-b6ac-8043a19ce1a9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-g6562\" (UID: \"04b26ab3-b358-4fb0-b6ac-8043a19ce1a9\") " pod="openshift-marketplace/marketplace-operator-79b997595-g6562" Jan 31 07:39:53 crc kubenswrapper[4826]: I0131 07:39:53.952049 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/04b26ab3-b358-4fb0-b6ac-8043a19ce1a9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-g6562\" (UID: \"04b26ab3-b358-4fb0-b6ac-8043a19ce1a9\") " pod="openshift-marketplace/marketplace-operator-79b997595-g6562" Jan 31 07:39:53 crc kubenswrapper[4826]: I0131 07:39:53.952391 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5mrl\" (UniqueName: \"kubernetes.io/projected/04b26ab3-b358-4fb0-b6ac-8043a19ce1a9-kube-api-access-b5mrl\") pod \"marketplace-operator-79b997595-g6562\" (UID: \"04b26ab3-b358-4fb0-b6ac-8043a19ce1a9\") " pod="openshift-marketplace/marketplace-operator-79b997595-g6562" Jan 31 07:39:53 crc kubenswrapper[4826]: I0131 07:39:53.952430 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04b26ab3-b358-4fb0-b6ac-8043a19ce1a9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-g6562\" (UID: \"04b26ab3-b358-4fb0-b6ac-8043a19ce1a9\") " pod="openshift-marketplace/marketplace-operator-79b997595-g6562" Jan 31 07:39:53 crc kubenswrapper[4826]: I0131 07:39:53.953499 4826 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04b26ab3-b358-4fb0-b6ac-8043a19ce1a9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-g6562\" (UID: \"04b26ab3-b358-4fb0-b6ac-8043a19ce1a9\") " pod="openshift-marketplace/marketplace-operator-79b997595-g6562" Jan 31 07:39:53 crc kubenswrapper[4826]: I0131 07:39:53.963861 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/04b26ab3-b358-4fb0-b6ac-8043a19ce1a9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-g6562\" (UID: \"04b26ab3-b358-4fb0-b6ac-8043a19ce1a9\") " pod="openshift-marketplace/marketplace-operator-79b997595-g6562" Jan 31 07:39:53 crc kubenswrapper[4826]: I0131 07:39:53.979322 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5mrl\" (UniqueName: \"kubernetes.io/projected/04b26ab3-b358-4fb0-b6ac-8043a19ce1a9-kube-api-access-b5mrl\") pod \"marketplace-operator-79b997595-g6562\" (UID: \"04b26ab3-b358-4fb0-b6ac-8043a19ce1a9\") " pod="openshift-marketplace/marketplace-operator-79b997595-g6562" Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.108029 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-g6562" Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.257804 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w2pp4" Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.357391 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04028e85-fcfa-4463-8279-00c5018bde40-utilities\") pod \"04028e85-fcfa-4463-8279-00c5018bde40\" (UID: \"04028e85-fcfa-4463-8279-00c5018bde40\") " Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.357795 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04028e85-fcfa-4463-8279-00c5018bde40-catalog-content\") pod \"04028e85-fcfa-4463-8279-00c5018bde40\" (UID: \"04028e85-fcfa-4463-8279-00c5018bde40\") " Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.357883 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfjcp\" (UniqueName: \"kubernetes.io/projected/04028e85-fcfa-4463-8279-00c5018bde40-kube-api-access-zfjcp\") pod \"04028e85-fcfa-4463-8279-00c5018bde40\" (UID: \"04028e85-fcfa-4463-8279-00c5018bde40\") " Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.361609 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04028e85-fcfa-4463-8279-00c5018bde40-utilities" (OuterVolumeSpecName: "utilities") pod "04028e85-fcfa-4463-8279-00c5018bde40" (UID: "04028e85-fcfa-4463-8279-00c5018bde40"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.363304 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04028e85-fcfa-4463-8279-00c5018bde40-kube-api-access-zfjcp" (OuterVolumeSpecName: "kube-api-access-zfjcp") pod "04028e85-fcfa-4463-8279-00c5018bde40" (UID: "04028e85-fcfa-4463-8279-00c5018bde40"). InnerVolumeSpecName "kube-api-access-zfjcp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.382307 4826 generic.go:334] "Generic (PLEG): container finished" podID="ad75cbef-01f1-46ef-bfe2-d1e864e2efe1" containerID="d5b8d7e01735bd3d0f29ebd3f4f385d8cf91bab86cf5a464750912af78458471" exitCode=0 Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.382412 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2zbk2" event={"ID":"ad75cbef-01f1-46ef-bfe2-d1e864e2efe1","Type":"ContainerDied","Data":"d5b8d7e01735bd3d0f29ebd3f4f385d8cf91bab86cf5a464750912af78458471"} Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.389188 4826 generic.go:334] "Generic (PLEG): container finished" podID="f7624cfd-9296-45c5-86a9-2344eb6f976e" containerID="45c996872020f991502aa371177b68f4812d510e8c45aee8933058f225e72720" exitCode=0 Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.389265 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m8854" event={"ID":"f7624cfd-9296-45c5-86a9-2344eb6f976e","Type":"ContainerDied","Data":"45c996872020f991502aa371177b68f4812d510e8c45aee8933058f225e72720"} Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.398605 4826 generic.go:334] "Generic (PLEG): container finished" podID="3b28978e-d7a9-41b2-998a-e4a3cd62e236" containerID="f39d1300b6a378904282e1e885ce903627e4774fc14b06bbf17d1975ebce51c6" exitCode=0 Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.398694 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-snchp" event={"ID":"3b28978e-d7a9-41b2-998a-e4a3cd62e236","Type":"ContainerDied","Data":"f39d1300b6a378904282e1e885ce903627e4774fc14b06bbf17d1975ebce51c6"} Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.401944 4826 generic.go:334] "Generic (PLEG): container finished" podID="04028e85-fcfa-4463-8279-00c5018bde40" containerID="98524f8f4b6b364e21487196884b197caae642a0cdadf6b9d4b46577db823f6b" exitCode=0 Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.402038 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2pp4" event={"ID":"04028e85-fcfa-4463-8279-00c5018bde40","Type":"ContainerDied","Data":"98524f8f4b6b364e21487196884b197caae642a0cdadf6b9d4b46577db823f6b"} Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.402093 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2pp4" event={"ID":"04028e85-fcfa-4463-8279-00c5018bde40","Type":"ContainerDied","Data":"48e7f71d69c24e7a1820e909bec70e8b059d499f3197cdb5c55639147e0c49c8"} Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.402114 4826 scope.go:117] "RemoveContainer" containerID="98524f8f4b6b364e21487196884b197caae642a0cdadf6b9d4b46577db823f6b" Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.402299 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w2pp4" Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.411141 4826 generic.go:334] "Generic (PLEG): container finished" podID="1fa7f5cc-ef29-4e91-9419-2d49284c4c98" containerID="a62b0f68e48181a7ccc6eceb5dcf274388984a9f63c0daeb07d9531f41ea37a3" exitCode=0 Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.411208 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5njx" event={"ID":"1fa7f5cc-ef29-4e91-9419-2d49284c4c98","Type":"ContainerDied","Data":"a62b0f68e48181a7ccc6eceb5dcf274388984a9f63c0daeb07d9531f41ea37a3"} Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.417005 4826 generic.go:334] "Generic (PLEG): container finished" podID="2da6f17d-aeb0-4cc8-8f11-c99bea508129" containerID="d68cc1c42639c30c4279d193abbadb55121e217532812268bd970883a9451e61" exitCode=0 Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.417237 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-czvwd" podUID="0108e13b-6622-4b3c-a0b3-7e91572001aa" containerName="registry-server" containerID="cri-o://c5b9d5752fdcf3c4a36972cca36ffb880b949a074977be9a2783f5290aebeeea" gracePeriod=30 Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.417320 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qwd92" event={"ID":"2da6f17d-aeb0-4cc8-8f11-c99bea508129","Type":"ContainerDied","Data":"d68cc1c42639c30c4279d193abbadb55121e217532812268bd970883a9451e61"} Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.433387 4826 scope.go:117] "RemoveContainer" containerID="db69cca1e12209a4757286c2c47286ac9575db3986442d01bef4b14a6396c18f" Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.460990 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfjcp\" (UniqueName: \"kubernetes.io/projected/04028e85-fcfa-4463-8279-00c5018bde40-kube-api-access-zfjcp\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.461049 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04028e85-fcfa-4463-8279-00c5018bde40-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.478902 4826 scope.go:117] "RemoveContainer" containerID="f5ca9ebf4e9228c96f4325acead2d096411bceb01ac43edda5facbb3bf364b6c" Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.493825 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04028e85-fcfa-4463-8279-00c5018bde40-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "04028e85-fcfa-4463-8279-00c5018bde40" (UID: "04028e85-fcfa-4463-8279-00c5018bde40"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.511199 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s5njx" Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.549499 4826 scope.go:117] "RemoveContainer" containerID="98524f8f4b6b364e21487196884b197caae642a0cdadf6b9d4b46577db823f6b" Jan 31 07:39:54 crc kubenswrapper[4826]: E0131 07:39:54.552504 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98524f8f4b6b364e21487196884b197caae642a0cdadf6b9d4b46577db823f6b\": container with ID starting with 98524f8f4b6b364e21487196884b197caae642a0cdadf6b9d4b46577db823f6b not found: ID does not exist" containerID="98524f8f4b6b364e21487196884b197caae642a0cdadf6b9d4b46577db823f6b" Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.552556 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98524f8f4b6b364e21487196884b197caae642a0cdadf6b9d4b46577db823f6b"} err="failed to get container status \"98524f8f4b6b364e21487196884b197caae642a0cdadf6b9d4b46577db823f6b\": rpc error: code = NotFound desc = could not find container \"98524f8f4b6b364e21487196884b197caae642a0cdadf6b9d4b46577db823f6b\": container with ID starting with 98524f8f4b6b364e21487196884b197caae642a0cdadf6b9d4b46577db823f6b not found: ID does not exist" Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.552589 4826 scope.go:117] "RemoveContainer" containerID="db69cca1e12209a4757286c2c47286ac9575db3986442d01bef4b14a6396c18f" Jan 31 07:39:54 crc kubenswrapper[4826]: E0131 07:39:54.554169 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db69cca1e12209a4757286c2c47286ac9575db3986442d01bef4b14a6396c18f\": container with ID starting with db69cca1e12209a4757286c2c47286ac9575db3986442d01bef4b14a6396c18f not found: ID does not exist" containerID="db69cca1e12209a4757286c2c47286ac9575db3986442d01bef4b14a6396c18f" Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.554526 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db69cca1e12209a4757286c2c47286ac9575db3986442d01bef4b14a6396c18f"} err="failed to get container status \"db69cca1e12209a4757286c2c47286ac9575db3986442d01bef4b14a6396c18f\": rpc error: code = NotFound desc = could not find container \"db69cca1e12209a4757286c2c47286ac9575db3986442d01bef4b14a6396c18f\": container with ID starting with db69cca1e12209a4757286c2c47286ac9575db3986442d01bef4b14a6396c18f not found: ID does not exist" Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.555018 4826 scope.go:117] "RemoveContainer" containerID="f5ca9ebf4e9228c96f4325acead2d096411bceb01ac43edda5facbb3bf364b6c" Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.562448 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04028e85-fcfa-4463-8279-00c5018bde40-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:54 crc kubenswrapper[4826]: E0131 07:39:54.579129 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5ca9ebf4e9228c96f4325acead2d096411bceb01ac43edda5facbb3bf364b6c\": container with ID starting with f5ca9ebf4e9228c96f4325acead2d096411bceb01ac43edda5facbb3bf364b6c not found: ID does not exist" containerID="f5ca9ebf4e9228c96f4325acead2d096411bceb01ac43edda5facbb3bf364b6c" Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.579335 4826 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5ca9ebf4e9228c96f4325acead2d096411bceb01ac43edda5facbb3bf364b6c"} err="failed to get container status \"f5ca9ebf4e9228c96f4325acead2d096411bceb01ac43edda5facbb3bf364b6c\": rpc error: code = NotFound desc = could not find container \"f5ca9ebf4e9228c96f4325acead2d096411bceb01ac43edda5facbb3bf364b6c\": container with ID starting with f5ca9ebf4e9228c96f4325acead2d096411bceb01ac43edda5facbb3bf364b6c not found: ID does not exist" Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.589865 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qwd92" Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.592293 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-snchp" Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.619098 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m8854" Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.665449 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2da6f17d-aeb0-4cc8-8f11-c99bea508129-marketplace-trusted-ca\") pod \"2da6f17d-aeb0-4cc8-8f11-c99bea508129\" (UID: \"2da6f17d-aeb0-4cc8-8f11-c99bea508129\") " Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.665486 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5btnw\" (UniqueName: \"kubernetes.io/projected/3b28978e-d7a9-41b2-998a-e4a3cd62e236-kube-api-access-5btnw\") pod \"3b28978e-d7a9-41b2-998a-e4a3cd62e236\" (UID: \"3b28978e-d7a9-41b2-998a-e4a3cd62e236\") " Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.665524 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b28978e-d7a9-41b2-998a-e4a3cd62e236-catalog-content\") pod \"3b28978e-d7a9-41b2-998a-e4a3cd62e236\" (UID: \"3b28978e-d7a9-41b2-998a-e4a3cd62e236\") " Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.665553 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rzm5\" (UniqueName: \"kubernetes.io/projected/1fa7f5cc-ef29-4e91-9419-2d49284c4c98-kube-api-access-5rzm5\") pod \"1fa7f5cc-ef29-4e91-9419-2d49284c4c98\" (UID: \"1fa7f5cc-ef29-4e91-9419-2d49284c4c98\") " Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.665570 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2da6f17d-aeb0-4cc8-8f11-c99bea508129-marketplace-operator-metrics\") pod \"2da6f17d-aeb0-4cc8-8f11-c99bea508129\" (UID: \"2da6f17d-aeb0-4cc8-8f11-c99bea508129\") " Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.665618 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fa7f5cc-ef29-4e91-9419-2d49284c4c98-catalog-content\") pod \"1fa7f5cc-ef29-4e91-9419-2d49284c4c98\" (UID: \"1fa7f5cc-ef29-4e91-9419-2d49284c4c98\") " Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.665664 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/3b28978e-d7a9-41b2-998a-e4a3cd62e236-utilities\") pod \"3b28978e-d7a9-41b2-998a-e4a3cd62e236\" (UID: \"3b28978e-d7a9-41b2-998a-e4a3cd62e236\") " Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.665684 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fa7f5cc-ef29-4e91-9419-2d49284c4c98-utilities\") pod \"1fa7f5cc-ef29-4e91-9419-2d49284c4c98\" (UID: \"1fa7f5cc-ef29-4e91-9419-2d49284c4c98\") " Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.665714 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzwtt\" (UniqueName: \"kubernetes.io/projected/2da6f17d-aeb0-4cc8-8f11-c99bea508129-kube-api-access-lzwtt\") pod \"2da6f17d-aeb0-4cc8-8f11-c99bea508129\" (UID: \"2da6f17d-aeb0-4cc8-8f11-c99bea508129\") " Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.670228 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b28978e-d7a9-41b2-998a-e4a3cd62e236-utilities" (OuterVolumeSpecName: "utilities") pod "3b28978e-d7a9-41b2-998a-e4a3cd62e236" (UID: "3b28978e-d7a9-41b2-998a-e4a3cd62e236"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.670462 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fa7f5cc-ef29-4e91-9419-2d49284c4c98-utilities" (OuterVolumeSpecName: "utilities") pod "1fa7f5cc-ef29-4e91-9419-2d49284c4c98" (UID: "1fa7f5cc-ef29-4e91-9419-2d49284c4c98"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.670921 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2da6f17d-aeb0-4cc8-8f11-c99bea508129-kube-api-access-lzwtt" (OuterVolumeSpecName: "kube-api-access-lzwtt") pod "2da6f17d-aeb0-4cc8-8f11-c99bea508129" (UID: "2da6f17d-aeb0-4cc8-8f11-c99bea508129"). InnerVolumeSpecName "kube-api-access-lzwtt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.671532 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fa7f5cc-ef29-4e91-9419-2d49284c4c98-kube-api-access-5rzm5" (OuterVolumeSpecName: "kube-api-access-5rzm5") pod "1fa7f5cc-ef29-4e91-9419-2d49284c4c98" (UID: "1fa7f5cc-ef29-4e91-9419-2d49284c4c98"). InnerVolumeSpecName "kube-api-access-5rzm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.671823 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2da6f17d-aeb0-4cc8-8f11-c99bea508129-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "2da6f17d-aeb0-4cc8-8f11-c99bea508129" (UID: "2da6f17d-aeb0-4cc8-8f11-c99bea508129"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.690061 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b28978e-d7a9-41b2-998a-e4a3cd62e236-kube-api-access-5btnw" (OuterVolumeSpecName: "kube-api-access-5btnw") pod "3b28978e-d7a9-41b2-998a-e4a3cd62e236" (UID: "3b28978e-d7a9-41b2-998a-e4a3cd62e236"). InnerVolumeSpecName "kube-api-access-5btnw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.691478 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2da6f17d-aeb0-4cc8-8f11-c99bea508129-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "2da6f17d-aeb0-4cc8-8f11-c99bea508129" (UID: "2da6f17d-aeb0-4cc8-8f11-c99bea508129"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.712887 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fa7f5cc-ef29-4e91-9419-2d49284c4c98-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1fa7f5cc-ef29-4e91-9419-2d49284c4c98" (UID: "1fa7f5cc-ef29-4e91-9419-2d49284c4c98"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.739420 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b28978e-d7a9-41b2-998a-e4a3cd62e236-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3b28978e-d7a9-41b2-998a-e4a3cd62e236" (UID: "3b28978e-d7a9-41b2-998a-e4a3cd62e236"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.741847 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w2pp4"] Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.745684 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-w2pp4"] Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.749910 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2zbk2" Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.766512 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7624cfd-9296-45c5-86a9-2344eb6f976e-utilities\") pod \"f7624cfd-9296-45c5-86a9-2344eb6f976e\" (UID: \"f7624cfd-9296-45c5-86a9-2344eb6f976e\") " Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.766571 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7624cfd-9296-45c5-86a9-2344eb6f976e-catalog-content\") pod \"f7624cfd-9296-45c5-86a9-2344eb6f976e\" (UID: \"f7624cfd-9296-45c5-86a9-2344eb6f976e\") " Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.766608 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcn9h\" (UniqueName: \"kubernetes.io/projected/f7624cfd-9296-45c5-86a9-2344eb6f976e-kube-api-access-tcn9h\") pod \"f7624cfd-9296-45c5-86a9-2344eb6f976e\" (UID: \"f7624cfd-9296-45c5-86a9-2344eb6f976e\") " Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.766834 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rzm5\" (UniqueName: \"kubernetes.io/projected/1fa7f5cc-ef29-4e91-9419-2d49284c4c98-kube-api-access-5rzm5\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.766846 4826 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2da6f17d-aeb0-4cc8-8f11-c99bea508129-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.766855 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fa7f5cc-ef29-4e91-9419-2d49284c4c98-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.766866 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b28978e-d7a9-41b2-998a-e4a3cd62e236-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.766874 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fa7f5cc-ef29-4e91-9419-2d49284c4c98-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.766882 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzwtt\" (UniqueName: \"kubernetes.io/projected/2da6f17d-aeb0-4cc8-8f11-c99bea508129-kube-api-access-lzwtt\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.766893 4826 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2da6f17d-aeb0-4cc8-8f11-c99bea508129-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.766902 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5btnw\" (UniqueName: \"kubernetes.io/projected/3b28978e-d7a9-41b2-998a-e4a3cd62e236-kube-api-access-5btnw\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.766913 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/3b28978e-d7a9-41b2-998a-e4a3cd62e236-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.767754 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7624cfd-9296-45c5-86a9-2344eb6f976e-utilities" (OuterVolumeSpecName: "utilities") pod "f7624cfd-9296-45c5-86a9-2344eb6f976e" (UID: "f7624cfd-9296-45c5-86a9-2344eb6f976e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.770483 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7624cfd-9296-45c5-86a9-2344eb6f976e-kube-api-access-tcn9h" (OuterVolumeSpecName: "kube-api-access-tcn9h") pod "f7624cfd-9296-45c5-86a9-2344eb6f976e" (UID: "f7624cfd-9296-45c5-86a9-2344eb6f976e"). InnerVolumeSpecName "kube-api-access-tcn9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.815412 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7624cfd-9296-45c5-86a9-2344eb6f976e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f7624cfd-9296-45c5-86a9-2344eb6f976e" (UID: "f7624cfd-9296-45c5-86a9-2344eb6f976e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.823327 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04028e85-fcfa-4463-8279-00c5018bde40" path="/var/lib/kubelet/pods/04028e85-fcfa-4463-8279-00c5018bde40/volumes" Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.870209 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8f2mz\" (UniqueName: \"kubernetes.io/projected/ad75cbef-01f1-46ef-bfe2-d1e864e2efe1-kube-api-access-8f2mz\") pod \"ad75cbef-01f1-46ef-bfe2-d1e864e2efe1\" (UID: \"ad75cbef-01f1-46ef-bfe2-d1e864e2efe1\") " Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.870657 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad75cbef-01f1-46ef-bfe2-d1e864e2efe1-catalog-content\") pod \"ad75cbef-01f1-46ef-bfe2-d1e864e2efe1\" (UID: \"ad75cbef-01f1-46ef-bfe2-d1e864e2efe1\") " Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.870708 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad75cbef-01f1-46ef-bfe2-d1e864e2efe1-utilities\") pod \"ad75cbef-01f1-46ef-bfe2-d1e864e2efe1\" (UID: \"ad75cbef-01f1-46ef-bfe2-d1e864e2efe1\") " Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.871068 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7624cfd-9296-45c5-86a9-2344eb6f976e-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.871080 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7624cfd-9296-45c5-86a9-2344eb6f976e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.871090 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcn9h\" (UniqueName: \"kubernetes.io/projected/f7624cfd-9296-45c5-86a9-2344eb6f976e-kube-api-access-tcn9h\") on node 
\"crc\" DevicePath \"\"" Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.877872 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad75cbef-01f1-46ef-bfe2-d1e864e2efe1-kube-api-access-8f2mz" (OuterVolumeSpecName: "kube-api-access-8f2mz") pod "ad75cbef-01f1-46ef-bfe2-d1e864e2efe1" (UID: "ad75cbef-01f1-46ef-bfe2-d1e864e2efe1"). InnerVolumeSpecName "kube-api-access-8f2mz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.877985 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad75cbef-01f1-46ef-bfe2-d1e864e2efe1-utilities" (OuterVolumeSpecName: "utilities") pod "ad75cbef-01f1-46ef-bfe2-d1e864e2efe1" (UID: "ad75cbef-01f1-46ef-bfe2-d1e864e2efe1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.972156 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8f2mz\" (UniqueName: \"kubernetes.io/projected/ad75cbef-01f1-46ef-bfe2-d1e864e2efe1-kube-api-access-8f2mz\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.972194 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad75cbef-01f1-46ef-bfe2-d1e864e2efe1-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:54 crc kubenswrapper[4826]: I0131 07:39:54.982536 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad75cbef-01f1-46ef-bfe2-d1e864e2efe1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ad75cbef-01f1-46ef-bfe2-d1e864e2efe1" (UID: "ad75cbef-01f1-46ef-bfe2-d1e864e2efe1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.060269 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-g6562"] Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.073742 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad75cbef-01f1-46ef-bfe2-d1e864e2efe1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.102936 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-czvwd_0108e13b-6622-4b3c-a0b3-7e91572001aa/registry-server/0.log" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.104544 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-czvwd" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.174689 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0108e13b-6622-4b3c-a0b3-7e91572001aa-utilities\") pod \"0108e13b-6622-4b3c-a0b3-7e91572001aa\" (UID: \"0108e13b-6622-4b3c-a0b3-7e91572001aa\") " Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.174736 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0108e13b-6622-4b3c-a0b3-7e91572001aa-catalog-content\") pod \"0108e13b-6622-4b3c-a0b3-7e91572001aa\" (UID: \"0108e13b-6622-4b3c-a0b3-7e91572001aa\") " Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.174768 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b52hz\" (UniqueName: \"kubernetes.io/projected/0108e13b-6622-4b3c-a0b3-7e91572001aa-kube-api-access-b52hz\") pod \"0108e13b-6622-4b3c-a0b3-7e91572001aa\" (UID: \"0108e13b-6622-4b3c-a0b3-7e91572001aa\") " Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.176301 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0108e13b-6622-4b3c-a0b3-7e91572001aa-utilities" (OuterVolumeSpecName: "utilities") pod "0108e13b-6622-4b3c-a0b3-7e91572001aa" (UID: "0108e13b-6622-4b3c-a0b3-7e91572001aa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.186149 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0108e13b-6622-4b3c-a0b3-7e91572001aa-kube-api-access-b52hz" (OuterVolumeSpecName: "kube-api-access-b52hz") pod "0108e13b-6622-4b3c-a0b3-7e91572001aa" (UID: "0108e13b-6622-4b3c-a0b3-7e91572001aa"). InnerVolumeSpecName "kube-api-access-b52hz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.196539 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-scztd" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.252686 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0108e13b-6622-4b3c-a0b3-7e91572001aa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0108e13b-6622-4b3c-a0b3-7e91572001aa" (UID: "0108e13b-6622-4b3c-a0b3-7e91572001aa"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.276309 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmtfb\" (UniqueName: \"kubernetes.io/projected/f7510fe8-16e2-4641-8d16-18b8a1387106-kube-api-access-kmtfb\") pod \"f7510fe8-16e2-4641-8d16-18b8a1387106\" (UID: \"f7510fe8-16e2-4641-8d16-18b8a1387106\") " Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.276406 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7510fe8-16e2-4641-8d16-18b8a1387106-catalog-content\") pod \"f7510fe8-16e2-4641-8d16-18b8a1387106\" (UID: \"f7510fe8-16e2-4641-8d16-18b8a1387106\") " Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.276462 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7510fe8-16e2-4641-8d16-18b8a1387106-utilities\") pod \"f7510fe8-16e2-4641-8d16-18b8a1387106\" (UID: \"f7510fe8-16e2-4641-8d16-18b8a1387106\") " Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.276716 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0108e13b-6622-4b3c-a0b3-7e91572001aa-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.276759 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0108e13b-6622-4b3c-a0b3-7e91572001aa-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.276772 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b52hz\" (UniqueName: \"kubernetes.io/projected/0108e13b-6622-4b3c-a0b3-7e91572001aa-kube-api-access-b52hz\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.277504 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7510fe8-16e2-4641-8d16-18b8a1387106-utilities" (OuterVolumeSpecName: "utilities") pod "f7510fe8-16e2-4641-8d16-18b8a1387106" (UID: "f7510fe8-16e2-4641-8d16-18b8a1387106"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.280735 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7510fe8-16e2-4641-8d16-18b8a1387106-kube-api-access-kmtfb" (OuterVolumeSpecName: "kube-api-access-kmtfb") pod "f7510fe8-16e2-4641-8d16-18b8a1387106" (UID: "f7510fe8-16e2-4641-8d16-18b8a1387106"). InnerVolumeSpecName "kube-api-access-kmtfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.338740 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fhkwh" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.381584 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7510fe8-16e2-4641-8d16-18b8a1387106-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.381621 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmtfb\" (UniqueName: \"kubernetes.io/projected/f7510fe8-16e2-4641-8d16-18b8a1387106-kube-api-access-kmtfb\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.406743 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7510fe8-16e2-4641-8d16-18b8a1387106-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f7510fe8-16e2-4641-8d16-18b8a1387106" (UID: "f7510fe8-16e2-4641-8d16-18b8a1387106"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.424403 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qwd92" event={"ID":"2da6f17d-aeb0-4cc8-8f11-c99bea508129","Type":"ContainerDied","Data":"3544b17084b415020b550be7c0586501979e549f5bee8dafb6e7422c8187c769"} Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.424436 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qwd92" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.424460 4826 scope.go:117] "RemoveContainer" containerID="d68cc1c42639c30c4279d193abbadb55121e217532812268bd970883a9451e61" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.426042 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-czvwd_0108e13b-6622-4b3c-a0b3-7e91572001aa/registry-server/0.log" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.426839 4826 generic.go:334] "Generic (PLEG): container finished" podID="0108e13b-6622-4b3c-a0b3-7e91572001aa" containerID="c5b9d5752fdcf3c4a36972cca36ffb880b949a074977be9a2783f5290aebeeea" exitCode=1 Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.426882 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-czvwd" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.426870 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-czvwd" event={"ID":"0108e13b-6622-4b3c-a0b3-7e91572001aa","Type":"ContainerDied","Data":"c5b9d5752fdcf3c4a36972cca36ffb880b949a074977be9a2783f5290aebeeea"} Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.426954 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-czvwd" event={"ID":"0108e13b-6622-4b3c-a0b3-7e91572001aa","Type":"ContainerDied","Data":"47ede3009e6008727f974352505c1a39bfac75aa4e8166b08c88a0bc65a03426"} Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.430580 4826 generic.go:334] "Generic (PLEG): container finished" podID="f7510fe8-16e2-4641-8d16-18b8a1387106" containerID="1e428d15b68c4ce9285e7cfb2d559df32b1fc49584a460d049500e46fa47da0c" exitCode=0 Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.430641 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-scztd" event={"ID":"f7510fe8-16e2-4641-8d16-18b8a1387106","Type":"ContainerDied","Data":"1e428d15b68c4ce9285e7cfb2d559df32b1fc49584a460d049500e46fa47da0c"} Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.430664 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-scztd" event={"ID":"f7510fe8-16e2-4641-8d16-18b8a1387106","Type":"ContainerDied","Data":"8cf0f7852f41ed42987cd0551d148a045aed15d56228a3ea043a283da48fad2c"} Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.431445 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-scztd" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.438413 4826 generic.go:334] "Generic (PLEG): container finished" podID="2f3a7050-24a4-4f99-ac44-c822d68d5ba5" containerID="66fa5766f2b5f96e35444341e343cfd7074b57119e39129c8a5e487837d19045" exitCode=0 Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.438461 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fhkwh" event={"ID":"2f3a7050-24a4-4f99-ac44-c822d68d5ba5","Type":"ContainerDied","Data":"66fa5766f2b5f96e35444341e343cfd7074b57119e39129c8a5e487837d19045"} Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.438477 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fhkwh" event={"ID":"2f3a7050-24a4-4f99-ac44-c822d68d5ba5","Type":"ContainerDied","Data":"3ff6611c2538c2e47fffecb92d713e87bf58f5bb61b0352b995e5474e57b5328"} Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.438501 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fhkwh" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.441126 4826 scope.go:117] "RemoveContainer" containerID="c5b9d5752fdcf3c4a36972cca36ffb880b949a074977be9a2783f5290aebeeea" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.441413 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-g6562" event={"ID":"04b26ab3-b358-4fb0-b6ac-8043a19ce1a9","Type":"ContainerStarted","Data":"7f1342ada0a1b313f809a3b7f17b6d75f2edbf04098a2b2134981a305e009492"} Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.441430 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-g6562" event={"ID":"04b26ab3-b358-4fb0-b6ac-8043a19ce1a9","Type":"ContainerStarted","Data":"abd08f2c398f33504a2614ffc13c06f8f0c0b51d0bbe8c272097c59c3d31d63c"} Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.443455 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-g6562" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.443527 4826 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-g6562 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.62:8080/healthz\": dial tcp 10.217.0.62:8080: connect: connection refused" start-of-body= Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.443557 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-g6562" podUID="04b26ab3-b358-4fb0-b6ac-8043a19ce1a9" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.62:8080/healthz\": dial tcp 10.217.0.62:8080: connect: connection refused" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.448459 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2zbk2" event={"ID":"ad75cbef-01f1-46ef-bfe2-d1e864e2efe1","Type":"ContainerDied","Data":"e0dcd743031ded61bfaf86fb7efb1b3a9cd5fcf17196d0f5b475e4e04a809745"} Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.448543 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2zbk2" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.457409 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qwd92"] Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.463889 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qwd92"] Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.464099 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m8854" event={"ID":"f7624cfd-9296-45c5-86a9-2344eb6f976e","Type":"ContainerDied","Data":"a839334c9da6441c321854f37d624bdcfc88d9779052529309cb593e86877787"} Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.464149 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m8854" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.466143 4826 scope.go:117] "RemoveContainer" containerID="61acab3cda1649571b339dc94617931ea3ee4789befb046086db14a755264214" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.466615 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-snchp" event={"ID":"3b28978e-d7a9-41b2-998a-e4a3cd62e236","Type":"ContainerDied","Data":"c3bfe2f62d36dbbbdbbd1a0852fd101d9d28b339f64cf0ea025ac0c62a9efd2a"} Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.466766 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-snchp" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.479173 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5njx" event={"ID":"1fa7f5cc-ef29-4e91-9419-2d49284c4c98","Type":"ContainerDied","Data":"9a7afd42458735c9bb6de2fd09d57e2b0b1beb8b2f475cf8e092b0ea4ef54929"} Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.479235 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s5njx" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.488262 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f3a7050-24a4-4f99-ac44-c822d68d5ba5-catalog-content\") pod \"2f3a7050-24a4-4f99-ac44-c822d68d5ba5\" (UID: \"2f3a7050-24a4-4f99-ac44-c822d68d5ba5\") " Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.488600 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f3a7050-24a4-4f99-ac44-c822d68d5ba5-utilities\") pod \"2f3a7050-24a4-4f99-ac44-c822d68d5ba5\" (UID: \"2f3a7050-24a4-4f99-ac44-c822d68d5ba5\") " Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.488838 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jlqb\" (UniqueName: \"kubernetes.io/projected/2f3a7050-24a4-4f99-ac44-c822d68d5ba5-kube-api-access-2jlqb\") pod \"2f3a7050-24a4-4f99-ac44-c822d68d5ba5\" (UID: \"2f3a7050-24a4-4f99-ac44-c822d68d5ba5\") " Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.489959 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7510fe8-16e2-4641-8d16-18b8a1387106-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.490508 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f3a7050-24a4-4f99-ac44-c822d68d5ba5-utilities" (OuterVolumeSpecName: "utilities") pod "2f3a7050-24a4-4f99-ac44-c822d68d5ba5" (UID: "2f3a7050-24a4-4f99-ac44-c822d68d5ba5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.491886 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-g6562" podStartSLOduration=2.4918510019999998 podStartE2EDuration="2.491851002s" podCreationTimestamp="2026-01-31 07:39:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:39:55.48075139 +0000 UTC m=+227.334637749" watchObservedRunningTime="2026-01-31 07:39:55.491851002 +0000 UTC m=+227.345737371" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.500893 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f3a7050-24a4-4f99-ac44-c822d68d5ba5-kube-api-access-2jlqb" (OuterVolumeSpecName: "kube-api-access-2jlqb") pod "2f3a7050-24a4-4f99-ac44-c822d68d5ba5" (UID: "2f3a7050-24a4-4f99-ac44-c822d68d5ba5"). InnerVolumeSpecName "kube-api-access-2jlqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.508921 4826 scope.go:117] "RemoveContainer" containerID="ed200a2e71b510349208257edfe34596013154d84a6f91c51149dfd2432e2913" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.525259 4826 scope.go:117] "RemoveContainer" containerID="c5b9d5752fdcf3c4a36972cca36ffb880b949a074977be9a2783f5290aebeeea" Jan 31 07:39:55 crc kubenswrapper[4826]: E0131 07:39:55.525789 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5b9d5752fdcf3c4a36972cca36ffb880b949a074977be9a2783f5290aebeeea\": container with ID starting with c5b9d5752fdcf3c4a36972cca36ffb880b949a074977be9a2783f5290aebeeea not found: ID does not exist" containerID="c5b9d5752fdcf3c4a36972cca36ffb880b949a074977be9a2783f5290aebeeea" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.525857 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5b9d5752fdcf3c4a36972cca36ffb880b949a074977be9a2783f5290aebeeea"} err="failed to get container status \"c5b9d5752fdcf3c4a36972cca36ffb880b949a074977be9a2783f5290aebeeea\": rpc error: code = NotFound desc = could not find container \"c5b9d5752fdcf3c4a36972cca36ffb880b949a074977be9a2783f5290aebeeea\": container with ID starting with c5b9d5752fdcf3c4a36972cca36ffb880b949a074977be9a2783f5290aebeeea not found: ID does not exist" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.525892 4826 scope.go:117] "RemoveContainer" containerID="61acab3cda1649571b339dc94617931ea3ee4789befb046086db14a755264214" Jan 31 07:39:55 crc kubenswrapper[4826]: E0131 07:39:55.526250 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61acab3cda1649571b339dc94617931ea3ee4789befb046086db14a755264214\": container with ID starting with 61acab3cda1649571b339dc94617931ea3ee4789befb046086db14a755264214 not found: ID does not exist" containerID="61acab3cda1649571b339dc94617931ea3ee4789befb046086db14a755264214" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.526298 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61acab3cda1649571b339dc94617931ea3ee4789befb046086db14a755264214"} err="failed to get container status \"61acab3cda1649571b339dc94617931ea3ee4789befb046086db14a755264214\": rpc error: code = NotFound desc = could not 
find container \"61acab3cda1649571b339dc94617931ea3ee4789befb046086db14a755264214\": container with ID starting with 61acab3cda1649571b339dc94617931ea3ee4789befb046086db14a755264214 not found: ID does not exist" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.526314 4826 scope.go:117] "RemoveContainer" containerID="ed200a2e71b510349208257edfe34596013154d84a6f91c51149dfd2432e2913" Jan 31 07:39:55 crc kubenswrapper[4826]: E0131 07:39:55.526500 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed200a2e71b510349208257edfe34596013154d84a6f91c51149dfd2432e2913\": container with ID starting with ed200a2e71b510349208257edfe34596013154d84a6f91c51149dfd2432e2913 not found: ID does not exist" containerID="ed200a2e71b510349208257edfe34596013154d84a6f91c51149dfd2432e2913" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.526537 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed200a2e71b510349208257edfe34596013154d84a6f91c51149dfd2432e2913"} err="failed to get container status \"ed200a2e71b510349208257edfe34596013154d84a6f91c51149dfd2432e2913\": rpc error: code = NotFound desc = could not find container \"ed200a2e71b510349208257edfe34596013154d84a6f91c51149dfd2432e2913\": container with ID starting with ed200a2e71b510349208257edfe34596013154d84a6f91c51149dfd2432e2913 not found: ID does not exist" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.526554 4826 scope.go:117] "RemoveContainer" containerID="1e428d15b68c4ce9285e7cfb2d559df32b1fc49584a460d049500e46fa47da0c" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.531533 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-czvwd"] Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.534442 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-czvwd"] Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.538046 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m8854"] Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.546780 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-m8854"] Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.548902 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-snchp"] Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.552539 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-snchp"] Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.553934 4826 scope.go:117] "RemoveContainer" containerID="53255c089fc524b4cd55a25ee640dcd8eda387a8a79126ab60d1687b551aead2" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.555141 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s5njx"] Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.557733 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-s5njx"] Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.565198 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-scztd"] Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.573679 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-scztd"] Jan 31 07:39:55 crc 
kubenswrapper[4826]: I0131 07:39:55.578668 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2zbk2"] Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.580898 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2zbk2"] Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.584424 4826 scope.go:117] "RemoveContainer" containerID="3f796a40f8404a4e3cfd1a42a11c47ac3af708f2aa97f4788e06baf2fcac24a4" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.591462 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jlqb\" (UniqueName: \"kubernetes.io/projected/2f3a7050-24a4-4f99-ac44-c822d68d5ba5-kube-api-access-2jlqb\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.591548 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f3a7050-24a4-4f99-ac44-c822d68d5ba5-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.607018 4826 scope.go:117] "RemoveContainer" containerID="1e428d15b68c4ce9285e7cfb2d559df32b1fc49584a460d049500e46fa47da0c" Jan 31 07:39:55 crc kubenswrapper[4826]: E0131 07:39:55.607739 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e428d15b68c4ce9285e7cfb2d559df32b1fc49584a460d049500e46fa47da0c\": container with ID starting with 1e428d15b68c4ce9285e7cfb2d559df32b1fc49584a460d049500e46fa47da0c not found: ID does not exist" containerID="1e428d15b68c4ce9285e7cfb2d559df32b1fc49584a460d049500e46fa47da0c" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.607818 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e428d15b68c4ce9285e7cfb2d559df32b1fc49584a460d049500e46fa47da0c"} err="failed to get container status \"1e428d15b68c4ce9285e7cfb2d559df32b1fc49584a460d049500e46fa47da0c\": rpc error: code = NotFound desc = could not find container \"1e428d15b68c4ce9285e7cfb2d559df32b1fc49584a460d049500e46fa47da0c\": container with ID starting with 1e428d15b68c4ce9285e7cfb2d559df32b1fc49584a460d049500e46fa47da0c not found: ID does not exist" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.607864 4826 scope.go:117] "RemoveContainer" containerID="53255c089fc524b4cd55a25ee640dcd8eda387a8a79126ab60d1687b551aead2" Jan 31 07:39:55 crc kubenswrapper[4826]: E0131 07:39:55.608348 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53255c089fc524b4cd55a25ee640dcd8eda387a8a79126ab60d1687b551aead2\": container with ID starting with 53255c089fc524b4cd55a25ee640dcd8eda387a8a79126ab60d1687b551aead2 not found: ID does not exist" containerID="53255c089fc524b4cd55a25ee640dcd8eda387a8a79126ab60d1687b551aead2" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.608387 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53255c089fc524b4cd55a25ee640dcd8eda387a8a79126ab60d1687b551aead2"} err="failed to get container status \"53255c089fc524b4cd55a25ee640dcd8eda387a8a79126ab60d1687b551aead2\": rpc error: code = NotFound desc = could not find container \"53255c089fc524b4cd55a25ee640dcd8eda387a8a79126ab60d1687b551aead2\": container with ID starting with 53255c089fc524b4cd55a25ee640dcd8eda387a8a79126ab60d1687b551aead2 not found: ID does not exist" Jan 31 07:39:55 crc 
kubenswrapper[4826]: I0131 07:39:55.608408 4826 scope.go:117] "RemoveContainer" containerID="3f796a40f8404a4e3cfd1a42a11c47ac3af708f2aa97f4788e06baf2fcac24a4" Jan 31 07:39:55 crc kubenswrapper[4826]: E0131 07:39:55.608727 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f796a40f8404a4e3cfd1a42a11c47ac3af708f2aa97f4788e06baf2fcac24a4\": container with ID starting with 3f796a40f8404a4e3cfd1a42a11c47ac3af708f2aa97f4788e06baf2fcac24a4 not found: ID does not exist" containerID="3f796a40f8404a4e3cfd1a42a11c47ac3af708f2aa97f4788e06baf2fcac24a4" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.608767 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f796a40f8404a4e3cfd1a42a11c47ac3af708f2aa97f4788e06baf2fcac24a4"} err="failed to get container status \"3f796a40f8404a4e3cfd1a42a11c47ac3af708f2aa97f4788e06baf2fcac24a4\": rpc error: code = NotFound desc = could not find container \"3f796a40f8404a4e3cfd1a42a11c47ac3af708f2aa97f4788e06baf2fcac24a4\": container with ID starting with 3f796a40f8404a4e3cfd1a42a11c47ac3af708f2aa97f4788e06baf2fcac24a4 not found: ID does not exist" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.608800 4826 scope.go:117] "RemoveContainer" containerID="66fa5766f2b5f96e35444341e343cfd7074b57119e39129c8a5e487837d19045" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.625235 4826 scope.go:117] "RemoveContainer" containerID="0b72d9c97c04bcc9cf68167f12fc21b2c8b2c96a689bca6a809a82beb6ab350b" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.645044 4826 scope.go:117] "RemoveContainer" containerID="ec7356c327725897efdd4b63db1e27506bf1f4244d2b6ce0a597e4df050c5abb" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.664579 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f3a7050-24a4-4f99-ac44-c822d68d5ba5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2f3a7050-24a4-4f99-ac44-c822d68d5ba5" (UID: "2f3a7050-24a4-4f99-ac44-c822d68d5ba5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.666305 4826 scope.go:117] "RemoveContainer" containerID="66fa5766f2b5f96e35444341e343cfd7074b57119e39129c8a5e487837d19045" Jan 31 07:39:55 crc kubenswrapper[4826]: E0131 07:39:55.667119 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66fa5766f2b5f96e35444341e343cfd7074b57119e39129c8a5e487837d19045\": container with ID starting with 66fa5766f2b5f96e35444341e343cfd7074b57119e39129c8a5e487837d19045 not found: ID does not exist" containerID="66fa5766f2b5f96e35444341e343cfd7074b57119e39129c8a5e487837d19045" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.667152 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66fa5766f2b5f96e35444341e343cfd7074b57119e39129c8a5e487837d19045"} err="failed to get container status \"66fa5766f2b5f96e35444341e343cfd7074b57119e39129c8a5e487837d19045\": rpc error: code = NotFound desc = could not find container \"66fa5766f2b5f96e35444341e343cfd7074b57119e39129c8a5e487837d19045\": container with ID starting with 66fa5766f2b5f96e35444341e343cfd7074b57119e39129c8a5e487837d19045 not found: ID does not exist" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.667179 4826 scope.go:117] "RemoveContainer" containerID="0b72d9c97c04bcc9cf68167f12fc21b2c8b2c96a689bca6a809a82beb6ab350b" Jan 31 07:39:55 crc kubenswrapper[4826]: E0131 07:39:55.667373 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b72d9c97c04bcc9cf68167f12fc21b2c8b2c96a689bca6a809a82beb6ab350b\": container with ID starting with 0b72d9c97c04bcc9cf68167f12fc21b2c8b2c96a689bca6a809a82beb6ab350b not found: ID does not exist" containerID="0b72d9c97c04bcc9cf68167f12fc21b2c8b2c96a689bca6a809a82beb6ab350b" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.667393 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b72d9c97c04bcc9cf68167f12fc21b2c8b2c96a689bca6a809a82beb6ab350b"} err="failed to get container status \"0b72d9c97c04bcc9cf68167f12fc21b2c8b2c96a689bca6a809a82beb6ab350b\": rpc error: code = NotFound desc = could not find container \"0b72d9c97c04bcc9cf68167f12fc21b2c8b2c96a689bca6a809a82beb6ab350b\": container with ID starting with 0b72d9c97c04bcc9cf68167f12fc21b2c8b2c96a689bca6a809a82beb6ab350b not found: ID does not exist" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.667405 4826 scope.go:117] "RemoveContainer" containerID="ec7356c327725897efdd4b63db1e27506bf1f4244d2b6ce0a597e4df050c5abb" Jan 31 07:39:55 crc kubenswrapper[4826]: E0131 07:39:55.667798 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec7356c327725897efdd4b63db1e27506bf1f4244d2b6ce0a597e4df050c5abb\": container with ID starting with ec7356c327725897efdd4b63db1e27506bf1f4244d2b6ce0a597e4df050c5abb not found: ID does not exist" containerID="ec7356c327725897efdd4b63db1e27506bf1f4244d2b6ce0a597e4df050c5abb" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.667817 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec7356c327725897efdd4b63db1e27506bf1f4244d2b6ce0a597e4df050c5abb"} err="failed to get container status \"ec7356c327725897efdd4b63db1e27506bf1f4244d2b6ce0a597e4df050c5abb\": rpc error: code = NotFound desc = could not 
find container \"ec7356c327725897efdd4b63db1e27506bf1f4244d2b6ce0a597e4df050c5abb\": container with ID starting with ec7356c327725897efdd4b63db1e27506bf1f4244d2b6ce0a597e4df050c5abb not found: ID does not exist" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.667828 4826 scope.go:117] "RemoveContainer" containerID="d5b8d7e01735bd3d0f29ebd3f4f385d8cf91bab86cf5a464750912af78458471" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.689958 4826 scope.go:117] "RemoveContainer" containerID="56cd252c9eb6b901c27ffba98e7b92042293c81abf283dd2177815c5078fc3f8" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.692363 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f3a7050-24a4-4f99-ac44-c822d68d5ba5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.706685 4826 scope.go:117] "RemoveContainer" containerID="ffa59a7dc46d2f742ef8798c0a8d379ec8d33e0bbf91efd7874b33f5caa82385" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.726722 4826 scope.go:117] "RemoveContainer" containerID="45c996872020f991502aa371177b68f4812d510e8c45aee8933058f225e72720" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.748854 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lh7qz"] Jan 31 07:39:55 crc kubenswrapper[4826]: E0131 07:39:55.751822 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b28978e-d7a9-41b2-998a-e4a3cd62e236" containerName="extract-utilities" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.751854 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b28978e-d7a9-41b2-998a-e4a3cd62e236" containerName="extract-utilities" Jan 31 07:39:55 crc kubenswrapper[4826]: E0131 07:39:55.751868 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0108e13b-6622-4b3c-a0b3-7e91572001aa" containerName="extract-content" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.751879 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="0108e13b-6622-4b3c-a0b3-7e91572001aa" containerName="extract-content" Jan 31 07:39:55 crc kubenswrapper[4826]: E0131 07:39:55.751887 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7510fe8-16e2-4641-8d16-18b8a1387106" containerName="registry-server" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.751895 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7510fe8-16e2-4641-8d16-18b8a1387106" containerName="registry-server" Jan 31 07:39:55 crc kubenswrapper[4826]: E0131 07:39:55.751907 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b28978e-d7a9-41b2-998a-e4a3cd62e236" containerName="registry-server" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.751915 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b28978e-d7a9-41b2-998a-e4a3cd62e236" containerName="registry-server" Jan 31 07:39:55 crc kubenswrapper[4826]: E0131 07:39:55.751924 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad75cbef-01f1-46ef-bfe2-d1e864e2efe1" containerName="registry-server" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.751931 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad75cbef-01f1-46ef-bfe2-d1e864e2efe1" containerName="registry-server" Jan 31 07:39:55 crc kubenswrapper[4826]: E0131 07:39:55.751941 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04028e85-fcfa-4463-8279-00c5018bde40" 
containerName="registry-server" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.751949 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="04028e85-fcfa-4463-8279-00c5018bde40" containerName="registry-server" Jan 31 07:39:55 crc kubenswrapper[4826]: E0131 07:39:55.751959 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7510fe8-16e2-4641-8d16-18b8a1387106" containerName="extract-utilities" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.751980 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7510fe8-16e2-4641-8d16-18b8a1387106" containerName="extract-utilities" Jan 31 07:39:55 crc kubenswrapper[4826]: E0131 07:39:55.751991 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b28978e-d7a9-41b2-998a-e4a3cd62e236" containerName="extract-content" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.751998 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b28978e-d7a9-41b2-998a-e4a3cd62e236" containerName="extract-content" Jan 31 07:39:55 crc kubenswrapper[4826]: E0131 07:39:55.752009 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7624cfd-9296-45c5-86a9-2344eb6f976e" containerName="extract-content" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.752016 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7624cfd-9296-45c5-86a9-2344eb6f976e" containerName="extract-content" Jan 31 07:39:55 crc kubenswrapper[4826]: E0131 07:39:55.752026 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7624cfd-9296-45c5-86a9-2344eb6f976e" containerName="extract-utilities" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.752033 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7624cfd-9296-45c5-86a9-2344eb6f976e" containerName="extract-utilities" Jan 31 07:39:55 crc kubenswrapper[4826]: E0131 07:39:55.752043 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f3a7050-24a4-4f99-ac44-c822d68d5ba5" containerName="extract-content" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.752052 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f3a7050-24a4-4f99-ac44-c822d68d5ba5" containerName="extract-content" Jan 31 07:39:55 crc kubenswrapper[4826]: E0131 07:39:55.752060 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2da6f17d-aeb0-4cc8-8f11-c99bea508129" containerName="marketplace-operator" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.752069 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="2da6f17d-aeb0-4cc8-8f11-c99bea508129" containerName="marketplace-operator" Jan 31 07:39:55 crc kubenswrapper[4826]: E0131 07:39:55.752082 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0108e13b-6622-4b3c-a0b3-7e91572001aa" containerName="extract-utilities" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.752091 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="0108e13b-6622-4b3c-a0b3-7e91572001aa" containerName="extract-utilities" Jan 31 07:39:55 crc kubenswrapper[4826]: E0131 07:39:55.752100 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7624cfd-9296-45c5-86a9-2344eb6f976e" containerName="registry-server" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.752108 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7624cfd-9296-45c5-86a9-2344eb6f976e" containerName="registry-server" Jan 31 07:39:55 crc kubenswrapper[4826]: E0131 07:39:55.752119 4826 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2f3a7050-24a4-4f99-ac44-c822d68d5ba5" containerName="extract-utilities" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.752127 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f3a7050-24a4-4f99-ac44-c822d68d5ba5" containerName="extract-utilities" Jan 31 07:39:55 crc kubenswrapper[4826]: E0131 07:39:55.752138 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04028e85-fcfa-4463-8279-00c5018bde40" containerName="extract-content" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.752146 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="04028e85-fcfa-4463-8279-00c5018bde40" containerName="extract-content" Jan 31 07:39:55 crc kubenswrapper[4826]: E0131 07:39:55.752159 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fa7f5cc-ef29-4e91-9419-2d49284c4c98" containerName="extract-utilities" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.752166 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fa7f5cc-ef29-4e91-9419-2d49284c4c98" containerName="extract-utilities" Jan 31 07:39:55 crc kubenswrapper[4826]: E0131 07:39:55.752176 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0108e13b-6622-4b3c-a0b3-7e91572001aa" containerName="registry-server" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.752183 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="0108e13b-6622-4b3c-a0b3-7e91572001aa" containerName="registry-server" Jan 31 07:39:55 crc kubenswrapper[4826]: E0131 07:39:55.752192 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f3a7050-24a4-4f99-ac44-c822d68d5ba5" containerName="registry-server" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.752199 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f3a7050-24a4-4f99-ac44-c822d68d5ba5" containerName="registry-server" Jan 31 07:39:55 crc kubenswrapper[4826]: E0131 07:39:55.752211 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04028e85-fcfa-4463-8279-00c5018bde40" containerName="extract-utilities" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.752218 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="04028e85-fcfa-4463-8279-00c5018bde40" containerName="extract-utilities" Jan 31 07:39:55 crc kubenswrapper[4826]: E0131 07:39:55.752228 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fa7f5cc-ef29-4e91-9419-2d49284c4c98" containerName="registry-server" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.752236 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fa7f5cc-ef29-4e91-9419-2d49284c4c98" containerName="registry-server" Jan 31 07:39:55 crc kubenswrapper[4826]: E0131 07:39:55.752246 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad75cbef-01f1-46ef-bfe2-d1e864e2efe1" containerName="extract-content" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.752253 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad75cbef-01f1-46ef-bfe2-d1e864e2efe1" containerName="extract-content" Jan 31 07:39:55 crc kubenswrapper[4826]: E0131 07:39:55.752266 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7510fe8-16e2-4641-8d16-18b8a1387106" containerName="extract-content" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.752274 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7510fe8-16e2-4641-8d16-18b8a1387106" containerName="extract-content" Jan 31 07:39:55 crc kubenswrapper[4826]: E0131 07:39:55.752284 4826 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="ad75cbef-01f1-46ef-bfe2-d1e864e2efe1" containerName="extract-utilities" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.752291 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad75cbef-01f1-46ef-bfe2-d1e864e2efe1" containerName="extract-utilities" Jan 31 07:39:55 crc kubenswrapper[4826]: E0131 07:39:55.752299 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fa7f5cc-ef29-4e91-9419-2d49284c4c98" containerName="extract-content" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.752307 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fa7f5cc-ef29-4e91-9419-2d49284c4c98" containerName="extract-content" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.752416 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7624cfd-9296-45c5-86a9-2344eb6f976e" containerName="registry-server" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.752427 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f3a7050-24a4-4f99-ac44-c822d68d5ba5" containerName="registry-server" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.752435 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7510fe8-16e2-4641-8d16-18b8a1387106" containerName="registry-server" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.752445 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fa7f5cc-ef29-4e91-9419-2d49284c4c98" containerName="registry-server" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.752453 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b28978e-d7a9-41b2-998a-e4a3cd62e236" containerName="registry-server" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.752465 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad75cbef-01f1-46ef-bfe2-d1e864e2efe1" containerName="registry-server" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.752475 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="2da6f17d-aeb0-4cc8-8f11-c99bea508129" containerName="marketplace-operator" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.752483 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="04028e85-fcfa-4463-8279-00c5018bde40" containerName="registry-server" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.752494 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="0108e13b-6622-4b3c-a0b3-7e91572001aa" containerName="registry-server" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.755884 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lh7qz" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.760635 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.761212 4826 scope.go:117] "RemoveContainer" containerID="9763930600d59c722fe0a05e8f2d73ec4ca81a0af4b7571dfdf6ff2cbdd67d14" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.762308 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lh7qz"] Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.798620 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fhkwh"] Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.799791 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fhkwh"] Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.800999 4826 scope.go:117] "RemoveContainer" containerID="8a731821293f0187cb4e974d8d114454e590d0904f508b60ff06f97797f2a686" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.819682 4826 scope.go:117] "RemoveContainer" containerID="f39d1300b6a378904282e1e885ce903627e4774fc14b06bbf17d1975ebce51c6" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.858309 4826 scope.go:117] "RemoveContainer" containerID="d44e0210688039b02a34f3fed7d1bf9d640bd0aefd462112d88a44c2ce93f0c8" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.872878 4826 scope.go:117] "RemoveContainer" containerID="180b6159babb93b7451188e48859792fb00a5ca092d5d6007c74fd8f0cb1bb20" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.889492 4826 scope.go:117] "RemoveContainer" containerID="a62b0f68e48181a7ccc6eceb5dcf274388984a9f63c0daeb07d9531f41ea37a3" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.895199 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldsf2\" (UniqueName: \"kubernetes.io/projected/522ba915-3cf7-4e84-8ada-eae39676ac2b-kube-api-access-ldsf2\") pod \"community-operators-lh7qz\" (UID: \"522ba915-3cf7-4e84-8ada-eae39676ac2b\") " pod="openshift-marketplace/community-operators-lh7qz" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.895510 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/522ba915-3cf7-4e84-8ada-eae39676ac2b-catalog-content\") pod \"community-operators-lh7qz\" (UID: \"522ba915-3cf7-4e84-8ada-eae39676ac2b\") " pod="openshift-marketplace/community-operators-lh7qz" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.895668 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/522ba915-3cf7-4e84-8ada-eae39676ac2b-utilities\") pod \"community-operators-lh7qz\" (UID: \"522ba915-3cf7-4e84-8ada-eae39676ac2b\") " pod="openshift-marketplace/community-operators-lh7qz" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.905839 4826 scope.go:117] "RemoveContainer" containerID="759a80d9ceaccc242155d6262914bf92debcdfdc824f1fc721b2ac080c349018" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.917786 4826 scope.go:117] "RemoveContainer" containerID="4d51e1dafeb2852e7594de977f785507e83a0db62faec389722932621c37f67b" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.996827 4826 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldsf2\" (UniqueName: \"kubernetes.io/projected/522ba915-3cf7-4e84-8ada-eae39676ac2b-kube-api-access-ldsf2\") pod \"community-operators-lh7qz\" (UID: \"522ba915-3cf7-4e84-8ada-eae39676ac2b\") " pod="openshift-marketplace/community-operators-lh7qz" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.996896 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/522ba915-3cf7-4e84-8ada-eae39676ac2b-catalog-content\") pod \"community-operators-lh7qz\" (UID: \"522ba915-3cf7-4e84-8ada-eae39676ac2b\") " pod="openshift-marketplace/community-operators-lh7qz" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.996959 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/522ba915-3cf7-4e84-8ada-eae39676ac2b-utilities\") pod \"community-operators-lh7qz\" (UID: \"522ba915-3cf7-4e84-8ada-eae39676ac2b\") " pod="openshift-marketplace/community-operators-lh7qz" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.997680 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/522ba915-3cf7-4e84-8ada-eae39676ac2b-utilities\") pod \"community-operators-lh7qz\" (UID: \"522ba915-3cf7-4e84-8ada-eae39676ac2b\") " pod="openshift-marketplace/community-operators-lh7qz" Jan 31 07:39:55 crc kubenswrapper[4826]: I0131 07:39:55.997782 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/522ba915-3cf7-4e84-8ada-eae39676ac2b-catalog-content\") pod \"community-operators-lh7qz\" (UID: \"522ba915-3cf7-4e84-8ada-eae39676ac2b\") " pod="openshift-marketplace/community-operators-lh7qz" Jan 31 07:39:56 crc kubenswrapper[4826]: I0131 07:39:56.012730 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldsf2\" (UniqueName: \"kubernetes.io/projected/522ba915-3cf7-4e84-8ada-eae39676ac2b-kube-api-access-ldsf2\") pod \"community-operators-lh7qz\" (UID: \"522ba915-3cf7-4e84-8ada-eae39676ac2b\") " pod="openshift-marketplace/community-operators-lh7qz" Jan 31 07:39:56 crc kubenswrapper[4826]: I0131 07:39:56.087597 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lh7qz" Jan 31 07:39:56 crc kubenswrapper[4826]: I0131 07:39:56.496339 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lh7qz"] Jan 31 07:39:56 crc kubenswrapper[4826]: I0131 07:39:56.504998 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-g6562" Jan 31 07:39:56 crc kubenswrapper[4826]: W0131 07:39:56.507307 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod522ba915_3cf7_4e84_8ada_eae39676ac2b.slice/crio-5b5b956f71e59775265f5b3871876a3ca04a633ca88df4965ad477123997c029 WatchSource:0}: Error finding container 5b5b956f71e59775265f5b3871876a3ca04a633ca88df4965ad477123997c029: Status 404 returned error can't find the container with id 5b5b956f71e59775265f5b3871876a3ca04a633ca88df4965ad477123997c029 Jan 31 07:39:56 crc kubenswrapper[4826]: I0131 07:39:56.816620 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0108e13b-6622-4b3c-a0b3-7e91572001aa" path="/var/lib/kubelet/pods/0108e13b-6622-4b3c-a0b3-7e91572001aa/volumes" Jan 31 07:39:56 crc kubenswrapper[4826]: I0131 07:39:56.817538 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fa7f5cc-ef29-4e91-9419-2d49284c4c98" path="/var/lib/kubelet/pods/1fa7f5cc-ef29-4e91-9419-2d49284c4c98/volumes" Jan 31 07:39:56 crc kubenswrapper[4826]: I0131 07:39:56.818242 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2da6f17d-aeb0-4cc8-8f11-c99bea508129" path="/var/lib/kubelet/pods/2da6f17d-aeb0-4cc8-8f11-c99bea508129/volumes" Jan 31 07:39:56 crc kubenswrapper[4826]: I0131 07:39:56.819408 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f3a7050-24a4-4f99-ac44-c822d68d5ba5" path="/var/lib/kubelet/pods/2f3a7050-24a4-4f99-ac44-c822d68d5ba5/volumes" Jan 31 07:39:56 crc kubenswrapper[4826]: I0131 07:39:56.820028 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b28978e-d7a9-41b2-998a-e4a3cd62e236" path="/var/lib/kubelet/pods/3b28978e-d7a9-41b2-998a-e4a3cd62e236/volumes" Jan 31 07:39:56 crc kubenswrapper[4826]: I0131 07:39:56.821219 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad75cbef-01f1-46ef-bfe2-d1e864e2efe1" path="/var/lib/kubelet/pods/ad75cbef-01f1-46ef-bfe2-d1e864e2efe1/volumes" Jan 31 07:39:56 crc kubenswrapper[4826]: I0131 07:39:56.821829 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7510fe8-16e2-4641-8d16-18b8a1387106" path="/var/lib/kubelet/pods/f7510fe8-16e2-4641-8d16-18b8a1387106/volumes" Jan 31 07:39:56 crc kubenswrapper[4826]: I0131 07:39:56.822523 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7624cfd-9296-45c5-86a9-2344eb6f976e" path="/var/lib/kubelet/pods/f7624cfd-9296-45c5-86a9-2344eb6f976e/volumes" Jan 31 07:39:57 crc kubenswrapper[4826]: I0131 07:39:57.376889 4826 patch_prober.go:28] interesting pod/machine-config-daemon-8v6ng container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 07:39:57 crc kubenswrapper[4826]: I0131 07:39:57.376990 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" 
podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 07:39:57 crc kubenswrapper[4826]: I0131 07:39:57.377046 4826 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" Jan 31 07:39:57 crc kubenswrapper[4826]: I0131 07:39:57.377657 4826 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d03515f366b860c28601b042a9a45c022c9a25aa15fab97b3e3698d3d7c1eb37"} pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 07:39:57 crc kubenswrapper[4826]: I0131 07:39:57.377722 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" containerID="cri-o://d03515f366b860c28601b042a9a45c022c9a25aa15fab97b3e3698d3d7c1eb37" gracePeriod=600 Jan 31 07:39:57 crc kubenswrapper[4826]: I0131 07:39:57.418027 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-774b586d9d-gnl2m"] Jan 31 07:39:57 crc kubenswrapper[4826]: I0131 07:39:57.418317 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-774b586d9d-gnl2m" podUID="42f49a1e-b32a-4e35-8b07-5bb7dd439d92" containerName="controller-manager" containerID="cri-o://471c0642e44125ea2e8e8fd85b2f2118e379b3fc634fb2f3c75d7b84fa2f8fe1" gracePeriod=30 Jan 31 07:39:57 crc kubenswrapper[4826]: I0131 07:39:57.506718 4826 generic.go:334] "Generic (PLEG): container finished" podID="522ba915-3cf7-4e84-8ada-eae39676ac2b" containerID="999650c581d4781167799b0b16ae1a3657cd75873335f0b81dad205b00c651e8" exitCode=0 Jan 31 07:39:57 crc kubenswrapper[4826]: I0131 07:39:57.506792 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lh7qz" event={"ID":"522ba915-3cf7-4e84-8ada-eae39676ac2b","Type":"ContainerDied","Data":"999650c581d4781167799b0b16ae1a3657cd75873335f0b81dad205b00c651e8"} Jan 31 07:39:57 crc kubenswrapper[4826]: I0131 07:39:57.506834 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lh7qz" event={"ID":"522ba915-3cf7-4e84-8ada-eae39676ac2b","Type":"ContainerStarted","Data":"5b5b956f71e59775265f5b3871876a3ca04a633ca88df4965ad477123997c029"} Jan 31 07:39:57 crc kubenswrapper[4826]: I0131 07:39:57.516280 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c4dd4d5c7-486bk"] Jan 31 07:39:57 crc kubenswrapper[4826]: I0131 07:39:57.516522 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6c4dd4d5c7-486bk" podUID="ca3e5567-db46-4afd-86a9-79238004cb96" containerName="route-controller-manager" containerID="cri-o://88350c9302c4c197e219539d7e06f7dcd330b2e81c129501b7878d6333cfd6e3" gracePeriod=30 Jan 31 07:39:57 crc kubenswrapper[4826]: I0131 07:39:57.517035 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6c4dd4d5c7-486bk" Jan 
31 07:39:57 crc kubenswrapper[4826]: I0131 07:39:57.523680 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6c4dd4d5c7-486bk" Jan 31 07:39:57 crc kubenswrapper[4826]: I0131 07:39:57.953046 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-b6k2p"] Jan 31 07:39:57 crc kubenswrapper[4826]: I0131 07:39:57.954583 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b6k2p" Jan 31 07:39:57 crc kubenswrapper[4826]: I0131 07:39:57.958615 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 31 07:39:57 crc kubenswrapper[4826]: I0131 07:39:57.967443 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b6k2p"] Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.025465 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxzhm\" (UniqueName: \"kubernetes.io/projected/2afad1ec-bc9f-48e1-9e8a-399fbde8bc28-kube-api-access-xxzhm\") pod \"redhat-marketplace-b6k2p\" (UID: \"2afad1ec-bc9f-48e1-9e8a-399fbde8bc28\") " pod="openshift-marketplace/redhat-marketplace-b6k2p" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.025515 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2afad1ec-bc9f-48e1-9e8a-399fbde8bc28-utilities\") pod \"redhat-marketplace-b6k2p\" (UID: \"2afad1ec-bc9f-48e1-9e8a-399fbde8bc28\") " pod="openshift-marketplace/redhat-marketplace-b6k2p" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.025561 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2afad1ec-bc9f-48e1-9e8a-399fbde8bc28-catalog-content\") pod \"redhat-marketplace-b6k2p\" (UID: \"2afad1ec-bc9f-48e1-9e8a-399fbde8bc28\") " pod="openshift-marketplace/redhat-marketplace-b6k2p" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.037932 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c4dd4d5c7-486bk" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.126838 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcffc\" (UniqueName: \"kubernetes.io/projected/ca3e5567-db46-4afd-86a9-79238004cb96-kube-api-access-gcffc\") pod \"ca3e5567-db46-4afd-86a9-79238004cb96\" (UID: \"ca3e5567-db46-4afd-86a9-79238004cb96\") " Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.126872 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca3e5567-db46-4afd-86a9-79238004cb96-config\") pod \"ca3e5567-db46-4afd-86a9-79238004cb96\" (UID: \"ca3e5567-db46-4afd-86a9-79238004cb96\") " Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.127026 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca3e5567-db46-4afd-86a9-79238004cb96-serving-cert\") pod \"ca3e5567-db46-4afd-86a9-79238004cb96\" (UID: \"ca3e5567-db46-4afd-86a9-79238004cb96\") " Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.127045 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ca3e5567-db46-4afd-86a9-79238004cb96-client-ca\") pod \"ca3e5567-db46-4afd-86a9-79238004cb96\" (UID: \"ca3e5567-db46-4afd-86a9-79238004cb96\") " Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.127222 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxzhm\" (UniqueName: \"kubernetes.io/projected/2afad1ec-bc9f-48e1-9e8a-399fbde8bc28-kube-api-access-xxzhm\") pod \"redhat-marketplace-b6k2p\" (UID: \"2afad1ec-bc9f-48e1-9e8a-399fbde8bc28\") " pod="openshift-marketplace/redhat-marketplace-b6k2p" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.127247 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2afad1ec-bc9f-48e1-9e8a-399fbde8bc28-utilities\") pod \"redhat-marketplace-b6k2p\" (UID: \"2afad1ec-bc9f-48e1-9e8a-399fbde8bc28\") " pod="openshift-marketplace/redhat-marketplace-b6k2p" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.127277 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2afad1ec-bc9f-48e1-9e8a-399fbde8bc28-catalog-content\") pod \"redhat-marketplace-b6k2p\" (UID: \"2afad1ec-bc9f-48e1-9e8a-399fbde8bc28\") " pod="openshift-marketplace/redhat-marketplace-b6k2p" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.128108 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2afad1ec-bc9f-48e1-9e8a-399fbde8bc28-catalog-content\") pod \"redhat-marketplace-b6k2p\" (UID: \"2afad1ec-bc9f-48e1-9e8a-399fbde8bc28\") " pod="openshift-marketplace/redhat-marketplace-b6k2p" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.128353 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2afad1ec-bc9f-48e1-9e8a-399fbde8bc28-utilities\") pod \"redhat-marketplace-b6k2p\" (UID: \"2afad1ec-bc9f-48e1-9e8a-399fbde8bc28\") " pod="openshift-marketplace/redhat-marketplace-b6k2p" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.128725 4826 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca3e5567-db46-4afd-86a9-79238004cb96-client-ca" (OuterVolumeSpecName: "client-ca") pod "ca3e5567-db46-4afd-86a9-79238004cb96" (UID: "ca3e5567-db46-4afd-86a9-79238004cb96"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.128792 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca3e5567-db46-4afd-86a9-79238004cb96-config" (OuterVolumeSpecName: "config") pod "ca3e5567-db46-4afd-86a9-79238004cb96" (UID: "ca3e5567-db46-4afd-86a9-79238004cb96"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.140648 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca3e5567-db46-4afd-86a9-79238004cb96-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ca3e5567-db46-4afd-86a9-79238004cb96" (UID: "ca3e5567-db46-4afd-86a9-79238004cb96"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.142222 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca3e5567-db46-4afd-86a9-79238004cb96-kube-api-access-gcffc" (OuterVolumeSpecName: "kube-api-access-gcffc") pod "ca3e5567-db46-4afd-86a9-79238004cb96" (UID: "ca3e5567-db46-4afd-86a9-79238004cb96"). InnerVolumeSpecName "kube-api-access-gcffc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.150143 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxzhm\" (UniqueName: \"kubernetes.io/projected/2afad1ec-bc9f-48e1-9e8a-399fbde8bc28-kube-api-access-xxzhm\") pod \"redhat-marketplace-b6k2p\" (UID: \"2afad1ec-bc9f-48e1-9e8a-399fbde8bc28\") " pod="openshift-marketplace/redhat-marketplace-b6k2p" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.151618 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nghjp"] Jan 31 07:39:58 crc kubenswrapper[4826]: E0131 07:39:58.152872 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca3e5567-db46-4afd-86a9-79238004cb96" containerName="route-controller-manager" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.152898 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca3e5567-db46-4afd-86a9-79238004cb96" containerName="route-controller-manager" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.157130 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca3e5567-db46-4afd-86a9-79238004cb96" containerName="route-controller-manager" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.158329 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nghjp" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.161590 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.163028 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nghjp"] Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.229006 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvmzn\" (UniqueName: \"kubernetes.io/projected/5cf2e4f6-5f98-449a-a380-946edc1521f1-kube-api-access-jvmzn\") pod \"redhat-operators-nghjp\" (UID: \"5cf2e4f6-5f98-449a-a380-946edc1521f1\") " pod="openshift-marketplace/redhat-operators-nghjp" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.229302 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cf2e4f6-5f98-449a-a380-946edc1521f1-catalog-content\") pod \"redhat-operators-nghjp\" (UID: \"5cf2e4f6-5f98-449a-a380-946edc1521f1\") " pod="openshift-marketplace/redhat-operators-nghjp" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.229429 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cf2e4f6-5f98-449a-a380-946edc1521f1-utilities\") pod \"redhat-operators-nghjp\" (UID: \"5cf2e4f6-5f98-449a-a380-946edc1521f1\") " pod="openshift-marketplace/redhat-operators-nghjp" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.229502 4826 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca3e5567-db46-4afd-86a9-79238004cb96-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.229528 4826 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ca3e5567-db46-4afd-86a9-79238004cb96-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.229541 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcffc\" (UniqueName: \"kubernetes.io/projected/ca3e5567-db46-4afd-86a9-79238004cb96-kube-api-access-gcffc\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.229557 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca3e5567-db46-4afd-86a9-79238004cb96-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.278004 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b6k2p" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.331192 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cf2e4f6-5f98-449a-a380-946edc1521f1-catalog-content\") pod \"redhat-operators-nghjp\" (UID: \"5cf2e4f6-5f98-449a-a380-946edc1521f1\") " pod="openshift-marketplace/redhat-operators-nghjp" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.331293 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cf2e4f6-5f98-449a-a380-946edc1521f1-utilities\") pod \"redhat-operators-nghjp\" (UID: \"5cf2e4f6-5f98-449a-a380-946edc1521f1\") " pod="openshift-marketplace/redhat-operators-nghjp" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.331396 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvmzn\" (UniqueName: \"kubernetes.io/projected/5cf2e4f6-5f98-449a-a380-946edc1521f1-kube-api-access-jvmzn\") pod \"redhat-operators-nghjp\" (UID: \"5cf2e4f6-5f98-449a-a380-946edc1521f1\") " pod="openshift-marketplace/redhat-operators-nghjp" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.332232 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cf2e4f6-5f98-449a-a380-946edc1521f1-catalog-content\") pod \"redhat-operators-nghjp\" (UID: \"5cf2e4f6-5f98-449a-a380-946edc1521f1\") " pod="openshift-marketplace/redhat-operators-nghjp" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.332529 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cf2e4f6-5f98-449a-a380-946edc1521f1-utilities\") pod \"redhat-operators-nghjp\" (UID: \"5cf2e4f6-5f98-449a-a380-946edc1521f1\") " pod="openshift-marketplace/redhat-operators-nghjp" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.351026 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvmzn\" (UniqueName: \"kubernetes.io/projected/5cf2e4f6-5f98-449a-a380-946edc1521f1-kube-api-access-jvmzn\") pod \"redhat-operators-nghjp\" (UID: \"5cf2e4f6-5f98-449a-a380-946edc1521f1\") " pod="openshift-marketplace/redhat-operators-nghjp" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.472228 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-774b586d9d-gnl2m" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.494371 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nghjp" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.532120 4826 generic.go:334] "Generic (PLEG): container finished" podID="ca3e5567-db46-4afd-86a9-79238004cb96" containerID="88350c9302c4c197e219539d7e06f7dcd330b2e81c129501b7878d6333cfd6e3" exitCode=0 Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.532224 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c4dd4d5c7-486bk" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.532259 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c4dd4d5c7-486bk" event={"ID":"ca3e5567-db46-4afd-86a9-79238004cb96","Type":"ContainerDied","Data":"88350c9302c4c197e219539d7e06f7dcd330b2e81c129501b7878d6333cfd6e3"} Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.532300 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c4dd4d5c7-486bk" event={"ID":"ca3e5567-db46-4afd-86a9-79238004cb96","Type":"ContainerDied","Data":"6f261fcdf5c8a1b8d355e94128ca1c170a911c7a341019500419454fd18c2695"} Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.532320 4826 scope.go:117] "RemoveContainer" containerID="88350c9302c4c197e219539d7e06f7dcd330b2e81c129501b7878d6333cfd6e3" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.533661 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42f49a1e-b32a-4e35-8b07-5bb7dd439d92-config\") pod \"42f49a1e-b32a-4e35-8b07-5bb7dd439d92\" (UID: \"42f49a1e-b32a-4e35-8b07-5bb7dd439d92\") " Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.533770 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/42f49a1e-b32a-4e35-8b07-5bb7dd439d92-proxy-ca-bundles\") pod \"42f49a1e-b32a-4e35-8b07-5bb7dd439d92\" (UID: \"42f49a1e-b32a-4e35-8b07-5bb7dd439d92\") " Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.533831 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9t8r8\" (UniqueName: \"kubernetes.io/projected/42f49a1e-b32a-4e35-8b07-5bb7dd439d92-kube-api-access-9t8r8\") pod \"42f49a1e-b32a-4e35-8b07-5bb7dd439d92\" (UID: \"42f49a1e-b32a-4e35-8b07-5bb7dd439d92\") " Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.533919 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/42f49a1e-b32a-4e35-8b07-5bb7dd439d92-client-ca\") pod \"42f49a1e-b32a-4e35-8b07-5bb7dd439d92\" (UID: \"42f49a1e-b32a-4e35-8b07-5bb7dd439d92\") " Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.533956 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42f49a1e-b32a-4e35-8b07-5bb7dd439d92-serving-cert\") pod \"42f49a1e-b32a-4e35-8b07-5bb7dd439d92\" (UID: \"42f49a1e-b32a-4e35-8b07-5bb7dd439d92\") " Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.534861 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42f49a1e-b32a-4e35-8b07-5bb7dd439d92-config" (OuterVolumeSpecName: "config") pod "42f49a1e-b32a-4e35-8b07-5bb7dd439d92" (UID: "42f49a1e-b32a-4e35-8b07-5bb7dd439d92"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.536236 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42f49a1e-b32a-4e35-8b07-5bb7dd439d92-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "42f49a1e-b32a-4e35-8b07-5bb7dd439d92" (UID: "42f49a1e-b32a-4e35-8b07-5bb7dd439d92"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.537162 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42f49a1e-b32a-4e35-8b07-5bb7dd439d92-client-ca" (OuterVolumeSpecName: "client-ca") pod "42f49a1e-b32a-4e35-8b07-5bb7dd439d92" (UID: "42f49a1e-b32a-4e35-8b07-5bb7dd439d92"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.548825 4826 generic.go:334] "Generic (PLEG): container finished" podID="42f49a1e-b32a-4e35-8b07-5bb7dd439d92" containerID="471c0642e44125ea2e8e8fd85b2f2118e379b3fc634fb2f3c75d7b84fa2f8fe1" exitCode=0 Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.548924 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-774b586d9d-gnl2m" event={"ID":"42f49a1e-b32a-4e35-8b07-5bb7dd439d92","Type":"ContainerDied","Data":"471c0642e44125ea2e8e8fd85b2f2118e379b3fc634fb2f3c75d7b84fa2f8fe1"} Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.548958 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-774b586d9d-gnl2m" event={"ID":"42f49a1e-b32a-4e35-8b07-5bb7dd439d92","Type":"ContainerDied","Data":"246b3bc0c74b8102a8fc2ace72d9914ea7a63db0993dfef3445169230b92d862"} Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.549045 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-774b586d9d-gnl2m" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.551435 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42f49a1e-b32a-4e35-8b07-5bb7dd439d92-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "42f49a1e-b32a-4e35-8b07-5bb7dd439d92" (UID: "42f49a1e-b32a-4e35-8b07-5bb7dd439d92"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.563479 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42f49a1e-b32a-4e35-8b07-5bb7dd439d92-kube-api-access-9t8r8" (OuterVolumeSpecName: "kube-api-access-9t8r8") pod "42f49a1e-b32a-4e35-8b07-5bb7dd439d92" (UID: "42f49a1e-b32a-4e35-8b07-5bb7dd439d92"). InnerVolumeSpecName "kube-api-access-9t8r8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.570555 4826 scope.go:117] "RemoveContainer" containerID="88350c9302c4c197e219539d7e06f7dcd330b2e81c129501b7878d6333cfd6e3" Jan 31 07:39:58 crc kubenswrapper[4826]: E0131 07:39:58.572727 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88350c9302c4c197e219539d7e06f7dcd330b2e81c129501b7878d6333cfd6e3\": container with ID starting with 88350c9302c4c197e219539d7e06f7dcd330b2e81c129501b7878d6333cfd6e3 not found: ID does not exist" containerID="88350c9302c4c197e219539d7e06f7dcd330b2e81c129501b7878d6333cfd6e3" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.572771 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88350c9302c4c197e219539d7e06f7dcd330b2e81c129501b7878d6333cfd6e3"} err="failed to get container status \"88350c9302c4c197e219539d7e06f7dcd330b2e81c129501b7878d6333cfd6e3\": rpc error: code = NotFound desc = could not find container \"88350c9302c4c197e219539d7e06f7dcd330b2e81c129501b7878d6333cfd6e3\": container with ID starting with 88350c9302c4c197e219539d7e06f7dcd330b2e81c129501b7878d6333cfd6e3 not found: ID does not exist" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.572802 4826 scope.go:117] "RemoveContainer" containerID="471c0642e44125ea2e8e8fd85b2f2118e379b3fc634fb2f3c75d7b84fa2f8fe1" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.574525 4826 generic.go:334] "Generic (PLEG): container finished" podID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerID="d03515f366b860c28601b042a9a45c022c9a25aa15fab97b3e3698d3d7c1eb37" exitCode=0 Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.574619 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" event={"ID":"ed10f53b-565a-4d14-a1d8-feabc15f08ea","Type":"ContainerDied","Data":"d03515f366b860c28601b042a9a45c022c9a25aa15fab97b3e3698d3d7c1eb37"} Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.574656 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" event={"ID":"ed10f53b-565a-4d14-a1d8-feabc15f08ea","Type":"ContainerStarted","Data":"f565241d95f4e9819f53c9108aab47ac5428c731551db742e200ccf23cd4ae76"} Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.614570 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c4dd4d5c7-486bk"] Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.614627 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c4dd4d5c7-486bk"] Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.620224 4826 scope.go:117] "RemoveContainer" containerID="471c0642e44125ea2e8e8fd85b2f2118e379b3fc634fb2f3c75d7b84fa2f8fe1" Jan 31 07:39:58 crc kubenswrapper[4826]: E0131 07:39:58.622168 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"471c0642e44125ea2e8e8fd85b2f2118e379b3fc634fb2f3c75d7b84fa2f8fe1\": container with ID starting with 471c0642e44125ea2e8e8fd85b2f2118e379b3fc634fb2f3c75d7b84fa2f8fe1 not found: ID does not exist" containerID="471c0642e44125ea2e8e8fd85b2f2118e379b3fc634fb2f3c75d7b84fa2f8fe1" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.622211 4826 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"471c0642e44125ea2e8e8fd85b2f2118e379b3fc634fb2f3c75d7b84fa2f8fe1"} err="failed to get container status \"471c0642e44125ea2e8e8fd85b2f2118e379b3fc634fb2f3c75d7b84fa2f8fe1\": rpc error: code = NotFound desc = could not find container \"471c0642e44125ea2e8e8fd85b2f2118e379b3fc634fb2f3c75d7b84fa2f8fe1\": container with ID starting with 471c0642e44125ea2e8e8fd85b2f2118e379b3fc634fb2f3c75d7b84fa2f8fe1 not found: ID does not exist" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.635404 4826 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/42f49a1e-b32a-4e35-8b07-5bb7dd439d92-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.635827 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9t8r8\" (UniqueName: \"kubernetes.io/projected/42f49a1e-b32a-4e35-8b07-5bb7dd439d92-kube-api-access-9t8r8\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.635843 4826 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/42f49a1e-b32a-4e35-8b07-5bb7dd439d92-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.635855 4826 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42f49a1e-b32a-4e35-8b07-5bb7dd439d92-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.635867 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42f49a1e-b32a-4e35-8b07-5bb7dd439d92-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.749641 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6bff9dc65d-mbw5n"] Jan 31 07:39:58 crc kubenswrapper[4826]: E0131 07:39:58.750014 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42f49a1e-b32a-4e35-8b07-5bb7dd439d92" containerName="controller-manager" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.750029 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="42f49a1e-b32a-4e35-8b07-5bb7dd439d92" containerName="controller-manager" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.750152 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="42f49a1e-b32a-4e35-8b07-5bb7dd439d92" containerName="controller-manager" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.750582 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6bff9dc65d-mbw5n" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.753487 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bc5b5bb7d-7vf5w"] Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.754347 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bc5b5bb7d-7vf5w" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.756364 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.758386 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.758654 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.758898 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.759320 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.759505 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.760413 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6bff9dc65d-mbw5n"] Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.763379 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bc5b5bb7d-7vf5w"] Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.820452 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca3e5567-db46-4afd-86a9-79238004cb96" path="/var/lib/kubelet/pods/ca3e5567-db46-4afd-86a9-79238004cb96/volumes" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.821745 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b6k2p"] Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.840256 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tkxl\" (UniqueName: \"kubernetes.io/projected/5931b10d-2194-4b94-9ace-2e92f37be85b-kube-api-access-5tkxl\") pod \"controller-manager-6bff9dc65d-mbw5n\" (UID: \"5931b10d-2194-4b94-9ace-2e92f37be85b\") " pod="openshift-controller-manager/controller-manager-6bff9dc65d-mbw5n" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.840601 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8frzt\" (UniqueName: \"kubernetes.io/projected/eaa327d7-8125-4e71-a07a-e905810d25b1-kube-api-access-8frzt\") pod \"route-controller-manager-6bc5b5bb7d-7vf5w\" (UID: \"eaa327d7-8125-4e71-a07a-e905810d25b1\") " pod="openshift-route-controller-manager/route-controller-manager-6bc5b5bb7d-7vf5w" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.840807 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eaa327d7-8125-4e71-a07a-e905810d25b1-client-ca\") pod \"route-controller-manager-6bc5b5bb7d-7vf5w\" (UID: \"eaa327d7-8125-4e71-a07a-e905810d25b1\") " pod="openshift-route-controller-manager/route-controller-manager-6bc5b5bb7d-7vf5w" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.840959 4826 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5931b10d-2194-4b94-9ace-2e92f37be85b-proxy-ca-bundles\") pod \"controller-manager-6bff9dc65d-mbw5n\" (UID: \"5931b10d-2194-4b94-9ace-2e92f37be85b\") " pod="openshift-controller-manager/controller-manager-6bff9dc65d-mbw5n" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.841126 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5931b10d-2194-4b94-9ace-2e92f37be85b-serving-cert\") pod \"controller-manager-6bff9dc65d-mbw5n\" (UID: \"5931b10d-2194-4b94-9ace-2e92f37be85b\") " pod="openshift-controller-manager/controller-manager-6bff9dc65d-mbw5n" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.841285 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5931b10d-2194-4b94-9ace-2e92f37be85b-client-ca\") pod \"controller-manager-6bff9dc65d-mbw5n\" (UID: \"5931b10d-2194-4b94-9ace-2e92f37be85b\") " pod="openshift-controller-manager/controller-manager-6bff9dc65d-mbw5n" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.841401 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaa327d7-8125-4e71-a07a-e905810d25b1-serving-cert\") pod \"route-controller-manager-6bc5b5bb7d-7vf5w\" (UID: \"eaa327d7-8125-4e71-a07a-e905810d25b1\") " pod="openshift-route-controller-manager/route-controller-manager-6bc5b5bb7d-7vf5w" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.841540 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5931b10d-2194-4b94-9ace-2e92f37be85b-config\") pod \"controller-manager-6bff9dc65d-mbw5n\" (UID: \"5931b10d-2194-4b94-9ace-2e92f37be85b\") " pod="openshift-controller-manager/controller-manager-6bff9dc65d-mbw5n" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.841676 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaa327d7-8125-4e71-a07a-e905810d25b1-config\") pod \"route-controller-manager-6bc5b5bb7d-7vf5w\" (UID: \"eaa327d7-8125-4e71-a07a-e905810d25b1\") " pod="openshift-route-controller-manager/route-controller-manager-6bc5b5bb7d-7vf5w" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.903468 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-774b586d9d-gnl2m"] Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.907662 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-774b586d9d-gnl2m"] Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.943679 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tkxl\" (UniqueName: \"kubernetes.io/projected/5931b10d-2194-4b94-9ace-2e92f37be85b-kube-api-access-5tkxl\") pod \"controller-manager-6bff9dc65d-mbw5n\" (UID: \"5931b10d-2194-4b94-9ace-2e92f37be85b\") " pod="openshift-controller-manager/controller-manager-6bff9dc65d-mbw5n" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.943731 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8frzt\" (UniqueName: 
\"kubernetes.io/projected/eaa327d7-8125-4e71-a07a-e905810d25b1-kube-api-access-8frzt\") pod \"route-controller-manager-6bc5b5bb7d-7vf5w\" (UID: \"eaa327d7-8125-4e71-a07a-e905810d25b1\") " pod="openshift-route-controller-manager/route-controller-manager-6bc5b5bb7d-7vf5w" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.943763 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eaa327d7-8125-4e71-a07a-e905810d25b1-client-ca\") pod \"route-controller-manager-6bc5b5bb7d-7vf5w\" (UID: \"eaa327d7-8125-4e71-a07a-e905810d25b1\") " pod="openshift-route-controller-manager/route-controller-manager-6bc5b5bb7d-7vf5w" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.943787 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5931b10d-2194-4b94-9ace-2e92f37be85b-proxy-ca-bundles\") pod \"controller-manager-6bff9dc65d-mbw5n\" (UID: \"5931b10d-2194-4b94-9ace-2e92f37be85b\") " pod="openshift-controller-manager/controller-manager-6bff9dc65d-mbw5n" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.943830 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5931b10d-2194-4b94-9ace-2e92f37be85b-serving-cert\") pod \"controller-manager-6bff9dc65d-mbw5n\" (UID: \"5931b10d-2194-4b94-9ace-2e92f37be85b\") " pod="openshift-controller-manager/controller-manager-6bff9dc65d-mbw5n" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.943857 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaa327d7-8125-4e71-a07a-e905810d25b1-serving-cert\") pod \"route-controller-manager-6bc5b5bb7d-7vf5w\" (UID: \"eaa327d7-8125-4e71-a07a-e905810d25b1\") " pod="openshift-route-controller-manager/route-controller-manager-6bc5b5bb7d-7vf5w" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.943872 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5931b10d-2194-4b94-9ace-2e92f37be85b-client-ca\") pod \"controller-manager-6bff9dc65d-mbw5n\" (UID: \"5931b10d-2194-4b94-9ace-2e92f37be85b\") " pod="openshift-controller-manager/controller-manager-6bff9dc65d-mbw5n" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.943898 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5931b10d-2194-4b94-9ace-2e92f37be85b-config\") pod \"controller-manager-6bff9dc65d-mbw5n\" (UID: \"5931b10d-2194-4b94-9ace-2e92f37be85b\") " pod="openshift-controller-manager/controller-manager-6bff9dc65d-mbw5n" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.943927 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaa327d7-8125-4e71-a07a-e905810d25b1-config\") pod \"route-controller-manager-6bc5b5bb7d-7vf5w\" (UID: \"eaa327d7-8125-4e71-a07a-e905810d25b1\") " pod="openshift-route-controller-manager/route-controller-manager-6bc5b5bb7d-7vf5w" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.945396 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaa327d7-8125-4e71-a07a-e905810d25b1-config\") pod \"route-controller-manager-6bc5b5bb7d-7vf5w\" (UID: \"eaa327d7-8125-4e71-a07a-e905810d25b1\") " 
pod="openshift-route-controller-manager/route-controller-manager-6bc5b5bb7d-7vf5w" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.945471 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5931b10d-2194-4b94-9ace-2e92f37be85b-client-ca\") pod \"controller-manager-6bff9dc65d-mbw5n\" (UID: \"5931b10d-2194-4b94-9ace-2e92f37be85b\") " pod="openshift-controller-manager/controller-manager-6bff9dc65d-mbw5n" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.945687 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5931b10d-2194-4b94-9ace-2e92f37be85b-config\") pod \"controller-manager-6bff9dc65d-mbw5n\" (UID: \"5931b10d-2194-4b94-9ace-2e92f37be85b\") " pod="openshift-controller-manager/controller-manager-6bff9dc65d-mbw5n" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.946100 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eaa327d7-8125-4e71-a07a-e905810d25b1-client-ca\") pod \"route-controller-manager-6bc5b5bb7d-7vf5w\" (UID: \"eaa327d7-8125-4e71-a07a-e905810d25b1\") " pod="openshift-route-controller-manager/route-controller-manager-6bc5b5bb7d-7vf5w" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.946534 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5931b10d-2194-4b94-9ace-2e92f37be85b-proxy-ca-bundles\") pod \"controller-manager-6bff9dc65d-mbw5n\" (UID: \"5931b10d-2194-4b94-9ace-2e92f37be85b\") " pod="openshift-controller-manager/controller-manager-6bff9dc65d-mbw5n" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.955772 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5931b10d-2194-4b94-9ace-2e92f37be85b-serving-cert\") pod \"controller-manager-6bff9dc65d-mbw5n\" (UID: \"5931b10d-2194-4b94-9ace-2e92f37be85b\") " pod="openshift-controller-manager/controller-manager-6bff9dc65d-mbw5n" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.956213 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaa327d7-8125-4e71-a07a-e905810d25b1-serving-cert\") pod \"route-controller-manager-6bc5b5bb7d-7vf5w\" (UID: \"eaa327d7-8125-4e71-a07a-e905810d25b1\") " pod="openshift-route-controller-manager/route-controller-manager-6bc5b5bb7d-7vf5w" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.976120 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8frzt\" (UniqueName: \"kubernetes.io/projected/eaa327d7-8125-4e71-a07a-e905810d25b1-kube-api-access-8frzt\") pod \"route-controller-manager-6bc5b5bb7d-7vf5w\" (UID: \"eaa327d7-8125-4e71-a07a-e905810d25b1\") " pod="openshift-route-controller-manager/route-controller-manager-6bc5b5bb7d-7vf5w" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.978588 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tkxl\" (UniqueName: \"kubernetes.io/projected/5931b10d-2194-4b94-9ace-2e92f37be85b-kube-api-access-5tkxl\") pod \"controller-manager-6bff9dc65d-mbw5n\" (UID: \"5931b10d-2194-4b94-9ace-2e92f37be85b\") " pod="openshift-controller-manager/controller-manager-6bff9dc65d-mbw5n" Jan 31 07:39:58 crc kubenswrapper[4826]: I0131 07:39:58.986629 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-operators-nghjp"] Jan 31 07:39:59 crc kubenswrapper[4826]: I0131 07:39:59.078010 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6bff9dc65d-mbw5n" Jan 31 07:39:59 crc kubenswrapper[4826]: I0131 07:39:59.092703 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bc5b5bb7d-7vf5w" Jan 31 07:39:59 crc kubenswrapper[4826]: I0131 07:39:59.264547 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6bff9dc65d-mbw5n"] Jan 31 07:39:59 crc kubenswrapper[4826]: I0131 07:39:59.328053 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bc5b5bb7d-7vf5w"] Jan 31 07:39:59 crc kubenswrapper[4826]: W0131 07:39:59.357097 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeaa327d7_8125_4e71_a07a_e905810d25b1.slice/crio-faf9678a56ea4f83cbea1dfbd71098cad7dab56da4ca3413d4344c6c22154570 WatchSource:0}: Error finding container faf9678a56ea4f83cbea1dfbd71098cad7dab56da4ca3413d4344c6c22154570: Status 404 returned error can't find the container with id faf9678a56ea4f83cbea1dfbd71098cad7dab56da4ca3413d4344c6c22154570 Jan 31 07:39:59 crc kubenswrapper[4826]: I0131 07:39:59.583803 4826 generic.go:334] "Generic (PLEG): container finished" podID="2afad1ec-bc9f-48e1-9e8a-399fbde8bc28" containerID="4cbfaf722516244e2f9b24f201dd0408cb6fed9e797c56329115ff175158572c" exitCode=0 Jan 31 07:39:59 crc kubenswrapper[4826]: I0131 07:39:59.583978 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b6k2p" event={"ID":"2afad1ec-bc9f-48e1-9e8a-399fbde8bc28","Type":"ContainerDied","Data":"4cbfaf722516244e2f9b24f201dd0408cb6fed9e797c56329115ff175158572c"} Jan 31 07:39:59 crc kubenswrapper[4826]: I0131 07:39:59.587607 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b6k2p" event={"ID":"2afad1ec-bc9f-48e1-9e8a-399fbde8bc28","Type":"ContainerStarted","Data":"e954ed47035ac9b361a3e794fc4009939297e4eccd7457344623063e00da3056"} Jan 31 07:39:59 crc kubenswrapper[4826]: I0131 07:39:59.597735 4826 generic.go:334] "Generic (PLEG): container finished" podID="522ba915-3cf7-4e84-8ada-eae39676ac2b" containerID="edb995a083327111a48cd8f63e269e4771459f09b67f8098f874c1ffd2bed605" exitCode=0 Jan 31 07:39:59 crc kubenswrapper[4826]: I0131 07:39:59.597855 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lh7qz" event={"ID":"522ba915-3cf7-4e84-8ada-eae39676ac2b","Type":"ContainerDied","Data":"edb995a083327111a48cd8f63e269e4771459f09b67f8098f874c1ffd2bed605"} Jan 31 07:39:59 crc kubenswrapper[4826]: I0131 07:39:59.600758 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bc5b5bb7d-7vf5w" event={"ID":"eaa327d7-8125-4e71-a07a-e905810d25b1","Type":"ContainerStarted","Data":"f116a6085c1ed669025e10f582007c6cd2e87dabaa305e39355cae5e533d9e8c"} Jan 31 07:39:59 crc kubenswrapper[4826]: I0131 07:39:59.600846 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bc5b5bb7d-7vf5w" 
event={"ID":"eaa327d7-8125-4e71-a07a-e905810d25b1","Type":"ContainerStarted","Data":"faf9678a56ea4f83cbea1dfbd71098cad7dab56da4ca3413d4344c6c22154570"} Jan 31 07:39:59 crc kubenswrapper[4826]: I0131 07:39:59.600932 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6bc5b5bb7d-7vf5w" Jan 31 07:39:59 crc kubenswrapper[4826]: I0131 07:39:59.604713 4826 generic.go:334] "Generic (PLEG): container finished" podID="5cf2e4f6-5f98-449a-a380-946edc1521f1" containerID="af26b49d37a00c892b41943eda9c1212e071c7b9843e7879de561e4c75b9bd6b" exitCode=0 Jan 31 07:39:59 crc kubenswrapper[4826]: I0131 07:39:59.604926 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nghjp" event={"ID":"5cf2e4f6-5f98-449a-a380-946edc1521f1","Type":"ContainerDied","Data":"af26b49d37a00c892b41943eda9c1212e071c7b9843e7879de561e4c75b9bd6b"} Jan 31 07:39:59 crc kubenswrapper[4826]: I0131 07:39:59.605005 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nghjp" event={"ID":"5cf2e4f6-5f98-449a-a380-946edc1521f1","Type":"ContainerStarted","Data":"cf1e2be58afdf1e2e3a8b21498af732f3fa369a7c50e2eaef53a7aacf1984086"} Jan 31 07:39:59 crc kubenswrapper[4826]: I0131 07:39:59.608286 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6bff9dc65d-mbw5n" event={"ID":"5931b10d-2194-4b94-9ace-2e92f37be85b","Type":"ContainerStarted","Data":"4fa3573c4ec03fc293a8d3b2aff9d12a3f64a1a7617e862c34483d276a9b0869"} Jan 31 07:39:59 crc kubenswrapper[4826]: I0131 07:39:59.608344 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6bff9dc65d-mbw5n" event={"ID":"5931b10d-2194-4b94-9ace-2e92f37be85b","Type":"ContainerStarted","Data":"8bb41eb3ec78e226ce6bfef2537cda65384de08cd26f493d23f6db1a6c22dee5"} Jan 31 07:39:59 crc kubenswrapper[4826]: I0131 07:39:59.609044 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6bff9dc65d-mbw5n" Jan 31 07:39:59 crc kubenswrapper[4826]: I0131 07:39:59.619852 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6bff9dc65d-mbw5n" Jan 31 07:39:59 crc kubenswrapper[4826]: I0131 07:39:59.683289 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6bc5b5bb7d-7vf5w" podStartSLOduration=2.683263826 podStartE2EDuration="2.683263826s" podCreationTimestamp="2026-01-31 07:39:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:39:59.658402053 +0000 UTC m=+231.512288412" watchObservedRunningTime="2026-01-31 07:39:59.683263826 +0000 UTC m=+231.537150185" Jan 31 07:39:59 crc kubenswrapper[4826]: I0131 07:39:59.699252 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6bff9dc65d-mbw5n" podStartSLOduration=2.69922936 podStartE2EDuration="2.69922936s" podCreationTimestamp="2026-01-31 07:39:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:39:59.693517724 +0000 UTC m=+231.547404083" watchObservedRunningTime="2026-01-31 07:39:59.69922936 +0000 UTC m=+231.553115719" 
Jan 31 07:39:59 crc kubenswrapper[4826]: I0131 07:39:59.951228 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6bc5b5bb7d-7vf5w" Jan 31 07:40:00 crc kubenswrapper[4826]: I0131 07:40:00.350831 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wpf7p"] Jan 31 07:40:00 crc kubenswrapper[4826]: I0131 07:40:00.352367 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wpf7p" Jan 31 07:40:00 crc kubenswrapper[4826]: I0131 07:40:00.355124 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 31 07:40:00 crc kubenswrapper[4826]: I0131 07:40:00.379125 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wpf7p"] Jan 31 07:40:00 crc kubenswrapper[4826]: I0131 07:40:00.462547 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrwv6\" (UniqueName: \"kubernetes.io/projected/095ed56c-d5dd-468f-85b2-f0bf23c2370d-kube-api-access-zrwv6\") pod \"certified-operators-wpf7p\" (UID: \"095ed56c-d5dd-468f-85b2-f0bf23c2370d\") " pod="openshift-marketplace/certified-operators-wpf7p" Jan 31 07:40:00 crc kubenswrapper[4826]: I0131 07:40:00.462681 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/095ed56c-d5dd-468f-85b2-f0bf23c2370d-catalog-content\") pod \"certified-operators-wpf7p\" (UID: \"095ed56c-d5dd-468f-85b2-f0bf23c2370d\") " pod="openshift-marketplace/certified-operators-wpf7p" Jan 31 07:40:00 crc kubenswrapper[4826]: I0131 07:40:00.462735 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/095ed56c-d5dd-468f-85b2-f0bf23c2370d-utilities\") pod \"certified-operators-wpf7p\" (UID: \"095ed56c-d5dd-468f-85b2-f0bf23c2370d\") " pod="openshift-marketplace/certified-operators-wpf7p" Jan 31 07:40:00 crc kubenswrapper[4826]: I0131 07:40:00.563869 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrwv6\" (UniqueName: \"kubernetes.io/projected/095ed56c-d5dd-468f-85b2-f0bf23c2370d-kube-api-access-zrwv6\") pod \"certified-operators-wpf7p\" (UID: \"095ed56c-d5dd-468f-85b2-f0bf23c2370d\") " pod="openshift-marketplace/certified-operators-wpf7p" Jan 31 07:40:00 crc kubenswrapper[4826]: I0131 07:40:00.563942 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/095ed56c-d5dd-468f-85b2-f0bf23c2370d-catalog-content\") pod \"certified-operators-wpf7p\" (UID: \"095ed56c-d5dd-468f-85b2-f0bf23c2370d\") " pod="openshift-marketplace/certified-operators-wpf7p" Jan 31 07:40:00 crc kubenswrapper[4826]: I0131 07:40:00.564018 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/095ed56c-d5dd-468f-85b2-f0bf23c2370d-utilities\") pod \"certified-operators-wpf7p\" (UID: \"095ed56c-d5dd-468f-85b2-f0bf23c2370d\") " pod="openshift-marketplace/certified-operators-wpf7p" Jan 31 07:40:00 crc kubenswrapper[4826]: I0131 07:40:00.564742 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/095ed56c-d5dd-468f-85b2-f0bf23c2370d-utilities\") pod \"certified-operators-wpf7p\" (UID: \"095ed56c-d5dd-468f-85b2-f0bf23c2370d\") " pod="openshift-marketplace/certified-operators-wpf7p" Jan 31 07:40:00 crc kubenswrapper[4826]: I0131 07:40:00.564735 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/095ed56c-d5dd-468f-85b2-f0bf23c2370d-catalog-content\") pod \"certified-operators-wpf7p\" (UID: \"095ed56c-d5dd-468f-85b2-f0bf23c2370d\") " pod="openshift-marketplace/certified-operators-wpf7p" Jan 31 07:40:00 crc kubenswrapper[4826]: I0131 07:40:00.587677 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrwv6\" (UniqueName: \"kubernetes.io/projected/095ed56c-d5dd-468f-85b2-f0bf23c2370d-kube-api-access-zrwv6\") pod \"certified-operators-wpf7p\" (UID: \"095ed56c-d5dd-468f-85b2-f0bf23c2370d\") " pod="openshift-marketplace/certified-operators-wpf7p" Jan 31 07:40:00 crc kubenswrapper[4826]: I0131 07:40:00.616950 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lh7qz" event={"ID":"522ba915-3cf7-4e84-8ada-eae39676ac2b","Type":"ContainerStarted","Data":"68dc53ffe87c4baa0df404b06b774b85d862ee21ea891fb30f401e4b616290ec"} Jan 31 07:40:00 crc kubenswrapper[4826]: I0131 07:40:00.638067 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lh7qz" podStartSLOduration=3.07600086 podStartE2EDuration="5.638046513s" podCreationTimestamp="2026-01-31 07:39:55 +0000 UTC" firstStartedPulling="2026-01-31 07:39:57.508035848 +0000 UTC m=+229.361922207" lastFinishedPulling="2026-01-31 07:40:00.070081491 +0000 UTC m=+231.923967860" observedRunningTime="2026-01-31 07:40:00.637335362 +0000 UTC m=+232.491221721" watchObservedRunningTime="2026-01-31 07:40:00.638046513 +0000 UTC m=+232.491932872" Jan 31 07:40:00 crc kubenswrapper[4826]: I0131 07:40:00.670959 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wpf7p" Jan 31 07:40:00 crc kubenswrapper[4826]: I0131 07:40:00.820813 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42f49a1e-b32a-4e35-8b07-5bb7dd439d92" path="/var/lib/kubelet/pods/42f49a1e-b32a-4e35-8b07-5bb7dd439d92/volumes" Jan 31 07:40:01 crc kubenswrapper[4826]: I0131 07:40:01.093763 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wpf7p"] Jan 31 07:40:01 crc kubenswrapper[4826]: I0131 07:40:01.624717 4826 generic.go:334] "Generic (PLEG): container finished" podID="5cf2e4f6-5f98-449a-a380-946edc1521f1" containerID="dd03c2610f29312de0bf49b8fb5b144c55e7e139aa4345d515a6f9c3b0a7b5ab" exitCode=0 Jan 31 07:40:01 crc kubenswrapper[4826]: I0131 07:40:01.624797 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nghjp" event={"ID":"5cf2e4f6-5f98-449a-a380-946edc1521f1","Type":"ContainerDied","Data":"dd03c2610f29312de0bf49b8fb5b144c55e7e139aa4345d515a6f9c3b0a7b5ab"} Jan 31 07:40:01 crc kubenswrapper[4826]: I0131 07:40:01.627909 4826 generic.go:334] "Generic (PLEG): container finished" podID="095ed56c-d5dd-468f-85b2-f0bf23c2370d" containerID="695181d49fbec9afa3db3a25837e77f2f4d772d1e4a214f990358ce7845074ab" exitCode=0 Jan 31 07:40:01 crc kubenswrapper[4826]: I0131 07:40:01.628016 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wpf7p" event={"ID":"095ed56c-d5dd-468f-85b2-f0bf23c2370d","Type":"ContainerDied","Data":"695181d49fbec9afa3db3a25837e77f2f4d772d1e4a214f990358ce7845074ab"} Jan 31 07:40:01 crc kubenswrapper[4826]: I0131 07:40:01.628059 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wpf7p" event={"ID":"095ed56c-d5dd-468f-85b2-f0bf23c2370d","Type":"ContainerStarted","Data":"c4f4afca10d64aa1d12848e45760613dfe9bdeff84af966d201e9fc07fbd008a"} Jan 31 07:40:01 crc kubenswrapper[4826]: I0131 07:40:01.631071 4826 generic.go:334] "Generic (PLEG): container finished" podID="2afad1ec-bc9f-48e1-9e8a-399fbde8bc28" containerID="4b51ca9ab842d7122428db07827bad2e10fe9f9e5a9ef2bf68ef3f480d71c214" exitCode=0 Jan 31 07:40:01 crc kubenswrapper[4826]: I0131 07:40:01.631174 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b6k2p" event={"ID":"2afad1ec-bc9f-48e1-9e8a-399fbde8bc28","Type":"ContainerDied","Data":"4b51ca9ab842d7122428db07827bad2e10fe9f9e5a9ef2bf68ef3f480d71c214"} Jan 31 07:40:03 crc kubenswrapper[4826]: I0131 07:40:03.646037 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nghjp" event={"ID":"5cf2e4f6-5f98-449a-a380-946edc1521f1","Type":"ContainerStarted","Data":"e3f1d0791e6ffeb64fc18a7cd4f1d0e7a1e74499dbc701a9ed43b2e1e5fee07d"} Jan 31 07:40:03 crc kubenswrapper[4826]: I0131 07:40:03.648365 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b6k2p" event={"ID":"2afad1ec-bc9f-48e1-9e8a-399fbde8bc28","Type":"ContainerStarted","Data":"6500f9d43d7ef290594a38f4854da2d6b353e621c66e66577417690ac5ef07f4"} Jan 31 07:40:03 crc kubenswrapper[4826]: I0131 07:40:03.667905 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nghjp" podStartSLOduration=2.375348756 podStartE2EDuration="5.667886947s" podCreationTimestamp="2026-01-31 07:39:58 +0000 UTC" firstStartedPulling="2026-01-31 07:39:59.607220895 
+0000 UTC m=+231.461107254" lastFinishedPulling="2026-01-31 07:40:02.899759086 +0000 UTC m=+234.753645445" observedRunningTime="2026-01-31 07:40:03.667527926 +0000 UTC m=+235.521414295" watchObservedRunningTime="2026-01-31 07:40:03.667886947 +0000 UTC m=+235.521773306" Jan 31 07:40:03 crc kubenswrapper[4826]: I0131 07:40:03.686208 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-b6k2p" podStartSLOduration=3.781236066 podStartE2EDuration="6.686184969s" podCreationTimestamp="2026-01-31 07:39:57 +0000 UTC" firstStartedPulling="2026-01-31 07:39:59.587079759 +0000 UTC m=+231.440966108" lastFinishedPulling="2026-01-31 07:40:02.492028652 +0000 UTC m=+234.345915011" observedRunningTime="2026-01-31 07:40:03.683518621 +0000 UTC m=+235.537404980" watchObservedRunningTime="2026-01-31 07:40:03.686184969 +0000 UTC m=+235.540071328" Jan 31 07:40:03 crc kubenswrapper[4826]: I0131 07:40:03.761887 4826 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 31 07:40:03 crc kubenswrapper[4826]: I0131 07:40:03.762635 4826 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 07:40:03 crc kubenswrapper[4826]: I0131 07:40:03.762907 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://b561e0c482987fdc1c08682d00bf4c933f12313569408446d6c8a603c3556ffa" gracePeriod=15 Jan 31 07:40:03 crc kubenswrapper[4826]: I0131 07:40:03.763014 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://3caf57a9650c133c55f50d69726a83d40040bc7f173d536444d89f99854942ab" gracePeriod=15 Jan 31 07:40:03 crc kubenswrapper[4826]: I0131 07:40:03.763068 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 07:40:03 crc kubenswrapper[4826]: I0131 07:40:03.763009 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://30b86aff0d4af7948ead36a9ed2f4fa0fb5888de8ecdfa6eb6ffc1867030babe" gracePeriod=15 Jan 31 07:40:03 crc kubenswrapper[4826]: I0131 07:40:03.763128 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://dbf2c3303222b7cdaf3b5aa2c1847ff9e097579edeb4f7333079c340cc95ef46" gracePeriod=15 Jan 31 07:40:03 crc kubenswrapper[4826]: I0131 07:40:03.763479 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://5cd3634cc3552a315ee1ef8114db096d456b7a97a92e68ab406ceccdf1eb57f4" gracePeriod=15 Jan 31 07:40:03 crc kubenswrapper[4826]: I0131 07:40:03.767662 4826 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 07:40:03 crc kubenswrapper[4826]: E0131 07:40:03.767857 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 07:40:03 crc kubenswrapper[4826]: I0131 07:40:03.767871 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 07:40:03 crc kubenswrapper[4826]: E0131 07:40:03.767880 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 31 07:40:03 crc kubenswrapper[4826]: I0131 07:40:03.767887 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 31 07:40:03 crc kubenswrapper[4826]: E0131 07:40:03.767896 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 07:40:03 crc kubenswrapper[4826]: I0131 07:40:03.767901 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 07:40:03 crc kubenswrapper[4826]: E0131 07:40:03.767912 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 31 07:40:03 crc kubenswrapper[4826]: I0131 07:40:03.767917 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 31 07:40:03 crc kubenswrapper[4826]: E0131 07:40:03.767927 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 31 07:40:03 crc kubenswrapper[4826]: I0131 07:40:03.767932 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 31 07:40:03 crc kubenswrapper[4826]: E0131 07:40:03.767942 4826 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 31 07:40:03 crc kubenswrapper[4826]: I0131 07:40:03.767947 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 31 07:40:03 crc kubenswrapper[4826]: E0131 07:40:03.767977 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 31 07:40:03 crc kubenswrapper[4826]: I0131 07:40:03.767985 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 31 07:40:03 crc kubenswrapper[4826]: I0131 07:40:03.768315 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 31 07:40:03 crc kubenswrapper[4826]: I0131 07:40:03.768367 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 31 07:40:03 crc kubenswrapper[4826]: I0131 07:40:03.768379 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 07:40:03 crc kubenswrapper[4826]: I0131 07:40:03.768399 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 07:40:03 crc kubenswrapper[4826]: I0131 07:40:03.768414 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 31 07:40:03 crc kubenswrapper[4826]: I0131 07:40:03.768425 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 31 07:40:03 crc kubenswrapper[4826]: I0131 07:40:03.800431 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 31 07:40:03 crc kubenswrapper[4826]: I0131 07:40:03.807048 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 07:40:03 crc kubenswrapper[4826]: I0131 07:40:03.807194 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 07:40:03 crc kubenswrapper[4826]: I0131 07:40:03.807257 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 07:40:03 crc kubenswrapper[4826]: I0131 07:40:03.807364 4826 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 07:40:03 crc kubenswrapper[4826]: I0131 07:40:03.807512 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 07:40:03 crc kubenswrapper[4826]: E0131 07:40:03.889858 4826 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.13:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-wpf7p.188fc0d5fd3e1c1e openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-wpf7p,UID:095ed56c-d5dd-468f-85b2-f0bf23c2370d,APIVersion:v1,ResourceVersion:30053,FieldPath:spec.initContainers{extract-content},},Reason:Started,Message:Started container extract-content,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 07:40:03.88916739 +0000 UTC m=+235.743053749,LastTimestamp:2026-01-31 07:40:03.88916739 +0000 UTC m=+235.743053749,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 31 07:40:03 crc kubenswrapper[4826]: I0131 07:40:03.909199 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 07:40:03 crc kubenswrapper[4826]: I0131 07:40:03.909302 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:40:03 crc kubenswrapper[4826]: I0131 07:40:03.909330 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 07:40:03 crc kubenswrapper[4826]: I0131 07:40:03.909360 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:40:03 crc kubenswrapper[4826]: I0131 07:40:03.909387 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" 
(UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:40:03 crc kubenswrapper[4826]: I0131 07:40:03.909397 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 07:40:03 crc kubenswrapper[4826]: I0131 07:40:03.909444 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 07:40:03 crc kubenswrapper[4826]: I0131 07:40:03.909482 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 07:40:03 crc kubenswrapper[4826]: I0131 07:40:03.909482 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 07:40:03 crc kubenswrapper[4826]: I0131 07:40:03.909515 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 07:40:03 crc kubenswrapper[4826]: I0131 07:40:03.909551 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 07:40:03 crc kubenswrapper[4826]: I0131 07:40:03.909586 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 07:40:03 crc kubenswrapper[4826]: I0131 07:40:03.909417 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 07:40:04 crc kubenswrapper[4826]: I0131 07:40:04.010609 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:40:04 crc kubenswrapper[4826]: I0131 07:40:04.010783 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:40:04 crc kubenswrapper[4826]: I0131 07:40:04.010856 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:40:04 crc kubenswrapper[4826]: I0131 07:40:04.010792 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:40:04 crc kubenswrapper[4826]: I0131 07:40:04.010922 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:40:04 crc kubenswrapper[4826]: I0131 07:40:04.011102 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:40:04 crc kubenswrapper[4826]: I0131 07:40:04.101856 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 07:40:04 crc kubenswrapper[4826]: W0131 07:40:04.127680 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-c2a9b8373340b63ea16c0aeb3588bb250565275775131bf5da1fbc208afd6bce WatchSource:0}: Error finding container c2a9b8373340b63ea16c0aeb3588bb250565275775131bf5da1fbc208afd6bce: Status 404 returned error can't find the container with id c2a9b8373340b63ea16c0aeb3588bb250565275775131bf5da1fbc208afd6bce Jan 31 07:40:04 crc kubenswrapper[4826]: I0131 07:40:04.656262 4826 generic.go:334] "Generic (PLEG): container finished" podID="095ed56c-d5dd-468f-85b2-f0bf23c2370d" containerID="1413fea6a78f8136296e208a59b15ef4f608564df8a7da7f88914644918579d4" exitCode=0 Jan 31 07:40:04 crc kubenswrapper[4826]: I0131 07:40:04.656442 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wpf7p" event={"ID":"095ed56c-d5dd-468f-85b2-f0bf23c2370d","Type":"ContainerDied","Data":"1413fea6a78f8136296e208a59b15ef4f608564df8a7da7f88914644918579d4"} Jan 31 07:40:04 crc kubenswrapper[4826]: I0131 07:40:04.657232 4826 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:04 crc kubenswrapper[4826]: I0131 07:40:04.658105 4826 status_manager.go:851] "Failed to get status for pod" podUID="095ed56c-d5dd-468f-85b2-f0bf23c2370d" pod="openshift-marketplace/certified-operators-wpf7p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-wpf7p\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:04 crc kubenswrapper[4826]: I0131 07:40:04.658461 4826 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:04 crc kubenswrapper[4826]: I0131 07:40:04.658614 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"6c255151f7c302aaa5ba21f8490f0ed2f28e5388c2a332890bf1796341b6ebe3"} Jan 31 07:40:04 crc kubenswrapper[4826]: I0131 07:40:04.658655 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"c2a9b8373340b63ea16c0aeb3588bb250565275775131bf5da1fbc208afd6bce"} Jan 31 07:40:04 crc kubenswrapper[4826]: I0131 07:40:04.659014 4826 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:04 crc kubenswrapper[4826]: I0131 07:40:04.659237 4826 status_manager.go:851] "Failed to 
get status for pod" podUID="095ed56c-d5dd-468f-85b2-f0bf23c2370d" pod="openshift-marketplace/certified-operators-wpf7p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-wpf7p\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:04 crc kubenswrapper[4826]: I0131 07:40:04.659901 4826 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:04 crc kubenswrapper[4826]: I0131 07:40:04.661399 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 31 07:40:04 crc kubenswrapper[4826]: I0131 07:40:04.665932 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 07:40:04 crc kubenswrapper[4826]: I0131 07:40:04.667624 4826 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="dbf2c3303222b7cdaf3b5aa2c1847ff9e097579edeb4f7333079c340cc95ef46" exitCode=0 Jan 31 07:40:04 crc kubenswrapper[4826]: I0131 07:40:04.667649 4826 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5cd3634cc3552a315ee1ef8114db096d456b7a97a92e68ab406ceccdf1eb57f4" exitCode=0 Jan 31 07:40:04 crc kubenswrapper[4826]: I0131 07:40:04.667657 4826 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3caf57a9650c133c55f50d69726a83d40040bc7f173d536444d89f99854942ab" exitCode=0 Jan 31 07:40:04 crc kubenswrapper[4826]: I0131 07:40:04.667664 4826 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="30b86aff0d4af7948ead36a9ed2f4fa0fb5888de8ecdfa6eb6ffc1867030babe" exitCode=2 Jan 31 07:40:04 crc kubenswrapper[4826]: I0131 07:40:04.667688 4826 scope.go:117] "RemoveContainer" containerID="975a5a5c09f940e9a57b6c6df02fdf725740fb21d70046ad5bbe3f24dec7a0bd" Jan 31 07:40:04 crc kubenswrapper[4826]: I0131 07:40:04.669370 4826 generic.go:334] "Generic (PLEG): container finished" podID="c10690e0-2f15-4102-93fc-caffd46cd9cc" containerID="7abd79bf41c8eaece424af6ebf0e655cf4a59719ce2520caaba58d95ce0dffa6" exitCode=0 Jan 31 07:40:04 crc kubenswrapper[4826]: I0131 07:40:04.670198 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"c10690e0-2f15-4102-93fc-caffd46cd9cc","Type":"ContainerDied","Data":"7abd79bf41c8eaece424af6ebf0e655cf4a59719ce2520caaba58d95ce0dffa6"} Jan 31 07:40:04 crc kubenswrapper[4826]: I0131 07:40:04.671134 4826 status_manager.go:851] "Failed to get status for pod" podUID="c10690e0-2f15-4102-93fc-caffd46cd9cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:04 crc kubenswrapper[4826]: I0131 07:40:04.671570 4826 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:04 crc kubenswrapper[4826]: I0131 07:40:04.671980 4826 status_manager.go:851] "Failed to get status for pod" podUID="095ed56c-d5dd-468f-85b2-f0bf23c2370d" pod="openshift-marketplace/certified-operators-wpf7p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-wpf7p\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:04 crc kubenswrapper[4826]: I0131 07:40:04.672285 4826 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:04 crc kubenswrapper[4826]: I0131 07:40:04.782028 4826 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Jan 31 07:40:04 crc kubenswrapper[4826]: I0131 07:40:04.782088 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" Jan 31 07:40:05 crc kubenswrapper[4826]: I0131 07:40:05.689513 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 07:40:06 crc kubenswrapper[4826]: I0131 07:40:06.092311 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lh7qz" Jan 31 07:40:06 crc kubenswrapper[4826]: I0131 07:40:06.093093 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lh7qz" Jan 31 07:40:06 crc kubenswrapper[4826]: I0131 07:40:06.137923 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lh7qz" Jan 31 07:40:06 crc kubenswrapper[4826]: I0131 07:40:06.138935 4826 status_manager.go:851] "Failed to get status for pod" podUID="c10690e0-2f15-4102-93fc-caffd46cd9cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:06 crc kubenswrapper[4826]: I0131 07:40:06.139336 4826 status_manager.go:851] "Failed to get status for pod" podUID="095ed56c-d5dd-468f-85b2-f0bf23c2370d" pod="openshift-marketplace/certified-operators-wpf7p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-wpf7p\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:06 crc kubenswrapper[4826]: I0131 07:40:06.139541 4826 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:06 crc kubenswrapper[4826]: I0131 07:40:06.139736 4826 status_manager.go:851] "Failed to get status for pod" podUID="522ba915-3cf7-4e84-8ada-eae39676ac2b" pod="openshift-marketplace/community-operators-lh7qz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-lh7qz\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:06 crc kubenswrapper[4826]: I0131 07:40:06.232445 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 07:40:06 crc kubenswrapper[4826]: I0131 07:40:06.232978 4826 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:06 crc kubenswrapper[4826]: I0131 07:40:06.233229 4826 status_manager.go:851] "Failed to get status for pod" podUID="522ba915-3cf7-4e84-8ada-eae39676ac2b" pod="openshift-marketplace/community-operators-lh7qz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-lh7qz\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:06 crc kubenswrapper[4826]: I0131 07:40:06.233411 4826 status_manager.go:851] "Failed to get status for pod" podUID="c10690e0-2f15-4102-93fc-caffd46cd9cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:06 crc kubenswrapper[4826]: I0131 07:40:06.233584 4826 status_manager.go:851] "Failed to get status for pod" podUID="095ed56c-d5dd-468f-85b2-f0bf23c2370d" pod="openshift-marketplace/certified-operators-wpf7p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-wpf7p\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:06 crc kubenswrapper[4826]: I0131 07:40:06.352179 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c10690e0-2f15-4102-93fc-caffd46cd9cc-var-lock\") pod \"c10690e0-2f15-4102-93fc-caffd46cd9cc\" (UID: \"c10690e0-2f15-4102-93fc-caffd46cd9cc\") " Jan 31 07:40:06 crc kubenswrapper[4826]: I0131 07:40:06.352593 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c10690e0-2f15-4102-93fc-caffd46cd9cc-kubelet-dir\") pod \"c10690e0-2f15-4102-93fc-caffd46cd9cc\" (UID: \"c10690e0-2f15-4102-93fc-caffd46cd9cc\") " Jan 31 07:40:06 crc kubenswrapper[4826]: I0131 07:40:06.352713 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c10690e0-2f15-4102-93fc-caffd46cd9cc-kube-api-access\") pod \"c10690e0-2f15-4102-93fc-caffd46cd9cc\" (UID: \"c10690e0-2f15-4102-93fc-caffd46cd9cc\") " Jan 31 07:40:06 crc kubenswrapper[4826]: I0131 07:40:06.352279 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/c10690e0-2f15-4102-93fc-caffd46cd9cc-var-lock" (OuterVolumeSpecName: "var-lock") pod "c10690e0-2f15-4102-93fc-caffd46cd9cc" (UID: "c10690e0-2f15-4102-93fc-caffd46cd9cc"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:40:06 crc kubenswrapper[4826]: I0131 07:40:06.352646 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c10690e0-2f15-4102-93fc-caffd46cd9cc-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c10690e0-2f15-4102-93fc-caffd46cd9cc" (UID: "c10690e0-2f15-4102-93fc-caffd46cd9cc"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:40:06 crc kubenswrapper[4826]: I0131 07:40:06.353306 4826 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c10690e0-2f15-4102-93fc-caffd46cd9cc-var-lock\") on node \"crc\" DevicePath \"\"" Jan 31 07:40:06 crc kubenswrapper[4826]: I0131 07:40:06.353405 4826 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c10690e0-2f15-4102-93fc-caffd46cd9cc-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 31 07:40:06 crc kubenswrapper[4826]: I0131 07:40:06.357502 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c10690e0-2f15-4102-93fc-caffd46cd9cc-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c10690e0-2f15-4102-93fc-caffd46cd9cc" (UID: "c10690e0-2f15-4102-93fc-caffd46cd9cc"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:40:06 crc kubenswrapper[4826]: I0131 07:40:06.454769 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c10690e0-2f15-4102-93fc-caffd46cd9cc-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 07:40:06 crc kubenswrapper[4826]: I0131 07:40:06.703456 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 07:40:06 crc kubenswrapper[4826]: I0131 07:40:06.704410 4826 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b561e0c482987fdc1c08682d00bf4c933f12313569408446d6c8a603c3556ffa" exitCode=0 Jan 31 07:40:06 crc kubenswrapper[4826]: I0131 07:40:06.706027 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 07:40:06 crc kubenswrapper[4826]: I0131 07:40:06.706019 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"c10690e0-2f15-4102-93fc-caffd46cd9cc","Type":"ContainerDied","Data":"d2a08a6588c5d35ae6a61fc305afeadfa8b1862126b86b88126c59ce302dd8d1"} Jan 31 07:40:06 crc kubenswrapper[4826]: I0131 07:40:06.706199 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2a08a6588c5d35ae6a61fc305afeadfa8b1862126b86b88126c59ce302dd8d1" Jan 31 07:40:06 crc kubenswrapper[4826]: I0131 07:40:06.720558 4826 status_manager.go:851] "Failed to get status for pod" podUID="c10690e0-2f15-4102-93fc-caffd46cd9cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:06 crc kubenswrapper[4826]: I0131 07:40:06.720763 4826 status_manager.go:851] "Failed to get status for pod" podUID="095ed56c-d5dd-468f-85b2-f0bf23c2370d" pod="openshift-marketplace/certified-operators-wpf7p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-wpf7p\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:06 crc kubenswrapper[4826]: I0131 07:40:06.720958 4826 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:06 crc kubenswrapper[4826]: I0131 07:40:06.721454 4826 status_manager.go:851] "Failed to get status for pod" podUID="522ba915-3cf7-4e84-8ada-eae39676ac2b" pod="openshift-marketplace/community-operators-lh7qz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-lh7qz\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:06 crc kubenswrapper[4826]: I0131 07:40:06.752613 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lh7qz" Jan 31 07:40:06 crc kubenswrapper[4826]: I0131 07:40:06.753251 4826 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:06 crc kubenswrapper[4826]: I0131 07:40:06.753721 4826 status_manager.go:851] "Failed to get status for pod" podUID="522ba915-3cf7-4e84-8ada-eae39676ac2b" pod="openshift-marketplace/community-operators-lh7qz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-lh7qz\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:06 crc kubenswrapper[4826]: I0131 07:40:06.754004 4826 status_manager.go:851] "Failed to get status for pod" podUID="c10690e0-2f15-4102-93fc-caffd46cd9cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 
38.102.83.13:6443: connect: connection refused" Jan 31 07:40:06 crc kubenswrapper[4826]: I0131 07:40:06.754268 4826 status_manager.go:851] "Failed to get status for pod" podUID="095ed56c-d5dd-468f-85b2-f0bf23c2370d" pod="openshift-marketplace/certified-operators-wpf7p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-wpf7p\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:07 crc kubenswrapper[4826]: I0131 07:40:07.334251 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 07:40:07 crc kubenswrapper[4826]: I0131 07:40:07.335757 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:40:07 crc kubenswrapper[4826]: I0131 07:40:07.336267 4826 status_manager.go:851] "Failed to get status for pod" podUID="c10690e0-2f15-4102-93fc-caffd46cd9cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:07 crc kubenswrapper[4826]: I0131 07:40:07.337025 4826 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:07 crc kubenswrapper[4826]: I0131 07:40:07.337494 4826 status_manager.go:851] "Failed to get status for pod" podUID="095ed56c-d5dd-468f-85b2-f0bf23c2370d" pod="openshift-marketplace/certified-operators-wpf7p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-wpf7p\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:07 crc kubenswrapper[4826]: I0131 07:40:07.338136 4826 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:07 crc kubenswrapper[4826]: I0131 07:40:07.338557 4826 status_manager.go:851] "Failed to get status for pod" podUID="522ba915-3cf7-4e84-8ada-eae39676ac2b" pod="openshift-marketplace/community-operators-lh7qz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-lh7qz\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:07 crc kubenswrapper[4826]: I0131 07:40:07.470887 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 31 07:40:07 crc kubenswrapper[4826]: I0131 07:40:07.471009 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:40:07 crc kubenswrapper[4826]: I0131 07:40:07.471065 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 31 07:40:07 crc kubenswrapper[4826]: I0131 07:40:07.471095 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 31 07:40:07 crc kubenswrapper[4826]: I0131 07:40:07.471146 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:40:07 crc kubenswrapper[4826]: I0131 07:40:07.471266 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:40:07 crc kubenswrapper[4826]: I0131 07:40:07.471616 4826 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 31 07:40:07 crc kubenswrapper[4826]: I0131 07:40:07.471641 4826 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 31 07:40:07 crc kubenswrapper[4826]: I0131 07:40:07.471651 4826 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 31 07:40:07 crc kubenswrapper[4826]: I0131 07:40:07.714592 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wpf7p" event={"ID":"095ed56c-d5dd-468f-85b2-f0bf23c2370d","Type":"ContainerStarted","Data":"6016f309adda7524bb2577199d3cf3f059951d9329ce59e5ebdc0150d5d51b81"} Jan 31 07:40:07 crc kubenswrapper[4826]: I0131 07:40:07.716195 4826 status_manager.go:851] "Failed to get status for pod" podUID="c10690e0-2f15-4102-93fc-caffd46cd9cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:07 crc kubenswrapper[4826]: I0131 07:40:07.716367 4826 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:07 crc kubenswrapper[4826]: I0131 07:40:07.716569 4826 status_manager.go:851] "Failed to get status for pod" 
podUID="095ed56c-d5dd-468f-85b2-f0bf23c2370d" pod="openshift-marketplace/certified-operators-wpf7p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-wpf7p\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:07 crc kubenswrapper[4826]: I0131 07:40:07.716857 4826 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:07 crc kubenswrapper[4826]: I0131 07:40:07.717361 4826 status_manager.go:851] "Failed to get status for pod" podUID="522ba915-3cf7-4e84-8ada-eae39676ac2b" pod="openshift-marketplace/community-operators-lh7qz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-lh7qz\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:07 crc kubenswrapper[4826]: I0131 07:40:07.718684 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 07:40:07 crc kubenswrapper[4826]: I0131 07:40:07.719310 4826 scope.go:117] "RemoveContainer" containerID="dbf2c3303222b7cdaf3b5aa2c1847ff9e097579edeb4f7333079c340cc95ef46" Jan 31 07:40:07 crc kubenswrapper[4826]: I0131 07:40:07.719326 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:40:07 crc kubenswrapper[4826]: I0131 07:40:07.731532 4826 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:07 crc kubenswrapper[4826]: I0131 07:40:07.731686 4826 status_manager.go:851] "Failed to get status for pod" podUID="522ba915-3cf7-4e84-8ada-eae39676ac2b" pod="openshift-marketplace/community-operators-lh7qz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-lh7qz\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:07 crc kubenswrapper[4826]: I0131 07:40:07.731824 4826 status_manager.go:851] "Failed to get status for pod" podUID="c10690e0-2f15-4102-93fc-caffd46cd9cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:07 crc kubenswrapper[4826]: I0131 07:40:07.731953 4826 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:07 crc kubenswrapper[4826]: I0131 07:40:07.735801 4826 scope.go:117] "RemoveContainer" containerID="5cd3634cc3552a315ee1ef8114db096d456b7a97a92e68ab406ceccdf1eb57f4" Jan 31 07:40:07 crc kubenswrapper[4826]: I0131 07:40:07.735741 4826 status_manager.go:851] 
"Failed to get status for pod" podUID="095ed56c-d5dd-468f-85b2-f0bf23c2370d" pod="openshift-marketplace/certified-operators-wpf7p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-wpf7p\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:07 crc kubenswrapper[4826]: I0131 07:40:07.755642 4826 scope.go:117] "RemoveContainer" containerID="3caf57a9650c133c55f50d69726a83d40040bc7f173d536444d89f99854942ab" Jan 31 07:40:07 crc kubenswrapper[4826]: I0131 07:40:07.771824 4826 scope.go:117] "RemoveContainer" containerID="30b86aff0d4af7948ead36a9ed2f4fa0fb5888de8ecdfa6eb6ffc1867030babe" Jan 31 07:40:07 crc kubenswrapper[4826]: I0131 07:40:07.789062 4826 scope.go:117] "RemoveContainer" containerID="b561e0c482987fdc1c08682d00bf4c933f12313569408446d6c8a603c3556ffa" Jan 31 07:40:07 crc kubenswrapper[4826]: I0131 07:40:07.808308 4826 scope.go:117] "RemoveContainer" containerID="c2ae4e4bfaca3b84a3b14f972a70b5c20092692ba3bc72bfaf4dc0f6f6d5c884" Jan 31 07:40:08 crc kubenswrapper[4826]: E0131 07:40:08.270125 4826 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.13:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-wpf7p.188fc0d5fd3e1c1e openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-wpf7p,UID:095ed56c-d5dd-468f-85b2-f0bf23c2370d,APIVersion:v1,ResourceVersion:30053,FieldPath:spec.initContainers{extract-content},},Reason:Started,Message:Started container extract-content,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 07:40:03.88916739 +0000 UTC m=+235.743053749,LastTimestamp:2026-01-31 07:40:03.88916739 +0000 UTC m=+235.743053749,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 31 07:40:08 crc kubenswrapper[4826]: I0131 07:40:08.279236 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-b6k2p" Jan 31 07:40:08 crc kubenswrapper[4826]: I0131 07:40:08.279298 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-b6k2p" Jan 31 07:40:08 crc kubenswrapper[4826]: I0131 07:40:08.319331 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-b6k2p" Jan 31 07:40:08 crc kubenswrapper[4826]: I0131 07:40:08.319887 4826 status_manager.go:851] "Failed to get status for pod" podUID="c10690e0-2f15-4102-93fc-caffd46cd9cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:08 crc kubenswrapper[4826]: I0131 07:40:08.320327 4826 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:08 crc kubenswrapper[4826]: I0131 07:40:08.320723 4826 status_manager.go:851] "Failed to get status for pod" 
podUID="095ed56c-d5dd-468f-85b2-f0bf23c2370d" pod="openshift-marketplace/certified-operators-wpf7p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-wpf7p\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:08 crc kubenswrapper[4826]: I0131 07:40:08.320962 4826 status_manager.go:851] "Failed to get status for pod" podUID="2afad1ec-bc9f-48e1-9e8a-399fbde8bc28" pod="openshift-marketplace/redhat-marketplace-b6k2p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-b6k2p\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:08 crc kubenswrapper[4826]: I0131 07:40:08.321238 4826 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:08 crc kubenswrapper[4826]: I0131 07:40:08.321588 4826 status_manager.go:851] "Failed to get status for pod" podUID="522ba915-3cf7-4e84-8ada-eae39676ac2b" pod="openshift-marketplace/community-operators-lh7qz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-lh7qz\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:08 crc kubenswrapper[4826]: I0131 07:40:08.495050 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nghjp" Jan 31 07:40:08 crc kubenswrapper[4826]: I0131 07:40:08.495112 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nghjp" Jan 31 07:40:08 crc kubenswrapper[4826]: I0131 07:40:08.784315 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-b6k2p" Jan 31 07:40:08 crc kubenswrapper[4826]: I0131 07:40:08.784894 4826 status_manager.go:851] "Failed to get status for pod" podUID="095ed56c-d5dd-468f-85b2-f0bf23c2370d" pod="openshift-marketplace/certified-operators-wpf7p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-wpf7p\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:08 crc kubenswrapper[4826]: I0131 07:40:08.785259 4826 status_manager.go:851] "Failed to get status for pod" podUID="2afad1ec-bc9f-48e1-9e8a-399fbde8bc28" pod="openshift-marketplace/redhat-marketplace-b6k2p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-b6k2p\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:08 crc kubenswrapper[4826]: I0131 07:40:08.785710 4826 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:08 crc kubenswrapper[4826]: I0131 07:40:08.785946 4826 status_manager.go:851] "Failed to get status for pod" podUID="522ba915-3cf7-4e84-8ada-eae39676ac2b" pod="openshift-marketplace/community-operators-lh7qz" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-lh7qz\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:08 crc kubenswrapper[4826]: I0131 07:40:08.786199 4826 status_manager.go:851] "Failed to get status for pod" podUID="c10690e0-2f15-4102-93fc-caffd46cd9cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:08 crc kubenswrapper[4826]: I0131 07:40:08.786538 4826 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:08 crc kubenswrapper[4826]: I0131 07:40:08.814285 4826 status_manager.go:851] "Failed to get status for pod" podUID="c10690e0-2f15-4102-93fc-caffd46cd9cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:08 crc kubenswrapper[4826]: I0131 07:40:08.815004 4826 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:08 crc kubenswrapper[4826]: I0131 07:40:08.815366 4826 status_manager.go:851] "Failed to get status for pod" podUID="095ed56c-d5dd-468f-85b2-f0bf23c2370d" pod="openshift-marketplace/certified-operators-wpf7p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-wpf7p\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:08 crc kubenswrapper[4826]: I0131 07:40:08.815595 4826 status_manager.go:851] "Failed to get status for pod" podUID="2afad1ec-bc9f-48e1-9e8a-399fbde8bc28" pod="openshift-marketplace/redhat-marketplace-b6k2p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-b6k2p\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:08 crc kubenswrapper[4826]: I0131 07:40:08.815859 4826 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:08 crc kubenswrapper[4826]: I0131 07:40:08.816181 4826 status_manager.go:851] "Failed to get status for pod" podUID="522ba915-3cf7-4e84-8ada-eae39676ac2b" pod="openshift-marketplace/community-operators-lh7qz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-lh7qz\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:08 crc kubenswrapper[4826]: I0131 07:40:08.818955 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 
31 07:40:09 crc kubenswrapper[4826]: E0131 07:40:09.465321 4826 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:09 crc kubenswrapper[4826]: E0131 07:40:09.466301 4826 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:09 crc kubenswrapper[4826]: E0131 07:40:09.466514 4826 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:09 crc kubenswrapper[4826]: E0131 07:40:09.466693 4826 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:09 crc kubenswrapper[4826]: E0131 07:40:09.466916 4826 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:09 crc kubenswrapper[4826]: I0131 07:40:09.467029 4826 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 31 07:40:09 crc kubenswrapper[4826]: E0131 07:40:09.467288 4826 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" interval="200ms" Jan 31 07:40:09 crc kubenswrapper[4826]: I0131 07:40:09.534552 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nghjp" podUID="5cf2e4f6-5f98-449a-a380-946edc1521f1" containerName="registry-server" probeResult="failure" output=< Jan 31 07:40:09 crc kubenswrapper[4826]: timeout: failed to connect service ":50051" within 1s Jan 31 07:40:09 crc kubenswrapper[4826]: > Jan 31 07:40:09 crc kubenswrapper[4826]: E0131 07:40:09.667652 4826 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" interval="400ms" Jan 31 07:40:10 crc kubenswrapper[4826]: E0131 07:40:10.068989 4826 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" interval="800ms" Jan 31 07:40:10 crc kubenswrapper[4826]: I0131 07:40:10.672780 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wpf7p" Jan 31 07:40:10 crc kubenswrapper[4826]: I0131 07:40:10.672829 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wpf7p" Jan 31 07:40:10 crc kubenswrapper[4826]: I0131 07:40:10.713356 4826 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/certified-operators-wpf7p" Jan 31 07:40:10 crc kubenswrapper[4826]: I0131 07:40:10.714057 4826 status_manager.go:851] "Failed to get status for pod" podUID="095ed56c-d5dd-468f-85b2-f0bf23c2370d" pod="openshift-marketplace/certified-operators-wpf7p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-wpf7p\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:10 crc kubenswrapper[4826]: I0131 07:40:10.714410 4826 status_manager.go:851] "Failed to get status for pod" podUID="2afad1ec-bc9f-48e1-9e8a-399fbde8bc28" pod="openshift-marketplace/redhat-marketplace-b6k2p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-b6k2p\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:10 crc kubenswrapper[4826]: I0131 07:40:10.714893 4826 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:10 crc kubenswrapper[4826]: I0131 07:40:10.715494 4826 status_manager.go:851] "Failed to get status for pod" podUID="522ba915-3cf7-4e84-8ada-eae39676ac2b" pod="openshift-marketplace/community-operators-lh7qz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-lh7qz\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:10 crc kubenswrapper[4826]: I0131 07:40:10.715992 4826 status_manager.go:851] "Failed to get status for pod" podUID="c10690e0-2f15-4102-93fc-caffd46cd9cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:10 crc kubenswrapper[4826]: E0131 07:40:10.869432 4826 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" interval="1.6s" Jan 31 07:40:12 crc kubenswrapper[4826]: E0131 07:40:12.471176 4826 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" interval="3.2s" Jan 31 07:40:14 crc kubenswrapper[4826]: I0131 07:40:14.984957 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" podUID="f8a24898-167c-483a-9d54-7412fb063199" containerName="oauth-openshift" containerID="cri-o://9f4d67605384b48f50722e0a3d0a519057f33e157fc510fef4f626b620ca440e" gracePeriod=15 Jan 31 07:40:15 crc kubenswrapper[4826]: E0131 07:40:15.672405 4826 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" interval="6.4s" Jan 31 07:40:16 crc kubenswrapper[4826]: I0131 07:40:16.784088 4826 generic.go:334] "Generic (PLEG): container 
finished" podID="f8a24898-167c-483a-9d54-7412fb063199" containerID="9f4d67605384b48f50722e0a3d0a519057f33e157fc510fef4f626b620ca440e" exitCode=0 Jan 31 07:40:16 crc kubenswrapper[4826]: I0131 07:40:16.784145 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" event={"ID":"f8a24898-167c-483a-9d54-7412fb063199","Type":"ContainerDied","Data":"9f4d67605384b48f50722e0a3d0a519057f33e157fc510fef4f626b620ca440e"} Jan 31 07:40:16 crc kubenswrapper[4826]: I0131 07:40:16.808837 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:40:16 crc kubenswrapper[4826]: I0131 07:40:16.809800 4826 status_manager.go:851] "Failed to get status for pod" podUID="095ed56c-d5dd-468f-85b2-f0bf23c2370d" pod="openshift-marketplace/certified-operators-wpf7p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-wpf7p\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:16 crc kubenswrapper[4826]: I0131 07:40:16.810424 4826 status_manager.go:851] "Failed to get status for pod" podUID="2afad1ec-bc9f-48e1-9e8a-399fbde8bc28" pod="openshift-marketplace/redhat-marketplace-b6k2p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-b6k2p\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:16 crc kubenswrapper[4826]: I0131 07:40:16.811121 4826 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:16 crc kubenswrapper[4826]: I0131 07:40:16.811712 4826 status_manager.go:851] "Failed to get status for pod" podUID="522ba915-3cf7-4e84-8ada-eae39676ac2b" pod="openshift-marketplace/community-operators-lh7qz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-lh7qz\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:16 crc kubenswrapper[4826]: I0131 07:40:16.812309 4826 status_manager.go:851] "Failed to get status for pod" podUID="c10690e0-2f15-4102-93fc-caffd46cd9cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:16 crc kubenswrapper[4826]: I0131 07:40:16.834826 4826 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="63a72bdc-ae2c-4ce3-bad4-877f01e2b370" Jan 31 07:40:16 crc kubenswrapper[4826]: I0131 07:40:16.835010 4826 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="63a72bdc-ae2c-4ce3-bad4-877f01e2b370" Jan 31 07:40:16 crc kubenswrapper[4826]: E0131 07:40:16.835905 4826 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:40:16 crc kubenswrapper[4826]: I0131 07:40:16.836944 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:40:16 crc kubenswrapper[4826]: W0131 07:40:16.872204 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-6f658b1b039c34a23396ce4efd85a28fb17441e4af8c2237c81a602b54528411 WatchSource:0}: Error finding container 6f658b1b039c34a23396ce4efd85a28fb17441e4af8c2237c81a602b54528411: Status 404 returned error can't find the container with id 6f658b1b039c34a23396ce4efd85a28fb17441e4af8c2237c81a602b54528411 Jan 31 07:40:17 crc kubenswrapper[4826]: I0131 07:40:17.789103 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6f658b1b039c34a23396ce4efd85a28fb17441e4af8c2237c81a602b54528411"} Jan 31 07:40:18 crc kubenswrapper[4826]: E0131 07:40:18.271397 4826 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.13:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-wpf7p.188fc0d5fd3e1c1e openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-wpf7p,UID:095ed56c-d5dd-468f-85b2-f0bf23c2370d,APIVersion:v1,ResourceVersion:30053,FieldPath:spec.initContainers{extract-content},},Reason:Started,Message:Started container extract-content,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 07:40:03.88916739 +0000 UTC m=+235.743053749,LastTimestamp:2026-01-31 07:40:03.88916739 +0000 UTC m=+235.743053749,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 31 07:40:18 crc kubenswrapper[4826]: I0131 07:40:18.532382 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nghjp" Jan 31 07:40:18 crc kubenswrapper[4826]: I0131 07:40:18.533130 4826 status_manager.go:851] "Failed to get status for pod" podUID="2afad1ec-bc9f-48e1-9e8a-399fbde8bc28" pod="openshift-marketplace/redhat-marketplace-b6k2p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-b6k2p\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:18 crc kubenswrapper[4826]: I0131 07:40:18.533333 4826 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:18 crc kubenswrapper[4826]: I0131 07:40:18.533528 4826 status_manager.go:851] "Failed to get status for pod" podUID="522ba915-3cf7-4e84-8ada-eae39676ac2b" pod="openshift-marketplace/community-operators-lh7qz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-lh7qz\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:18 crc kubenswrapper[4826]: I0131 07:40:18.533795 4826 status_manager.go:851] "Failed to get status for pod" podUID="c10690e0-2f15-4102-93fc-caffd46cd9cc" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:18 crc kubenswrapper[4826]: I0131 07:40:18.534050 4826 status_manager.go:851] "Failed to get status for pod" podUID="5cf2e4f6-5f98-449a-a380-946edc1521f1" pod="openshift-marketplace/redhat-operators-nghjp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-nghjp\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:18 crc kubenswrapper[4826]: I0131 07:40:18.534296 4826 status_manager.go:851] "Failed to get status for pod" podUID="095ed56c-d5dd-468f-85b2-f0bf23c2370d" pod="openshift-marketplace/certified-operators-wpf7p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-wpf7p\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:18 crc kubenswrapper[4826]: I0131 07:40:18.581207 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nghjp" Jan 31 07:40:18 crc kubenswrapper[4826]: I0131 07:40:18.581624 4826 status_manager.go:851] "Failed to get status for pod" podUID="522ba915-3cf7-4e84-8ada-eae39676ac2b" pod="openshift-marketplace/community-operators-lh7qz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-lh7qz\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:18 crc kubenswrapper[4826]: I0131 07:40:18.581857 4826 status_manager.go:851] "Failed to get status for pod" podUID="c10690e0-2f15-4102-93fc-caffd46cd9cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:18 crc kubenswrapper[4826]: I0131 07:40:18.582078 4826 status_manager.go:851] "Failed to get status for pod" podUID="5cf2e4f6-5f98-449a-a380-946edc1521f1" pod="openshift-marketplace/redhat-operators-nghjp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-nghjp\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:18 crc kubenswrapper[4826]: I0131 07:40:18.582375 4826 status_manager.go:851] "Failed to get status for pod" podUID="095ed56c-d5dd-468f-85b2-f0bf23c2370d" pod="openshift-marketplace/certified-operators-wpf7p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-wpf7p\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:18 crc kubenswrapper[4826]: I0131 07:40:18.582751 4826 status_manager.go:851] "Failed to get status for pod" podUID="2afad1ec-bc9f-48e1-9e8a-399fbde8bc28" pod="openshift-marketplace/redhat-marketplace-b6k2p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-b6k2p\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:18 crc kubenswrapper[4826]: I0131 07:40:18.582986 4826 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: 
connection refused" Jan 31 07:40:18 crc kubenswrapper[4826]: I0131 07:40:18.816319 4826 status_manager.go:851] "Failed to get status for pod" podUID="522ba915-3cf7-4e84-8ada-eae39676ac2b" pod="openshift-marketplace/community-operators-lh7qz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-lh7qz\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:18 crc kubenswrapper[4826]: I0131 07:40:18.816867 4826 status_manager.go:851] "Failed to get status for pod" podUID="c10690e0-2f15-4102-93fc-caffd46cd9cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:18 crc kubenswrapper[4826]: I0131 07:40:18.817634 4826 status_manager.go:851] "Failed to get status for pod" podUID="5cf2e4f6-5f98-449a-a380-946edc1521f1" pod="openshift-marketplace/redhat-operators-nghjp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-nghjp\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:18 crc kubenswrapper[4826]: I0131 07:40:18.817829 4826 status_manager.go:851] "Failed to get status for pod" podUID="095ed56c-d5dd-468f-85b2-f0bf23c2370d" pod="openshift-marketplace/certified-operators-wpf7p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-wpf7p\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:18 crc kubenswrapper[4826]: I0131 07:40:18.818005 4826 status_manager.go:851] "Failed to get status for pod" podUID="2afad1ec-bc9f-48e1-9e8a-399fbde8bc28" pod="openshift-marketplace/redhat-marketplace-b6k2p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-b6k2p\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:18 crc kubenswrapper[4826]: I0131 07:40:18.818162 4826 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:18 crc kubenswrapper[4826]: I0131 07:40:18.818308 4826 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.048752 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.049411 4826 status_manager.go:851] "Failed to get status for pod" podUID="c10690e0-2f15-4102-93fc-caffd46cd9cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.049858 4826 status_manager.go:851] "Failed to get status for pod" podUID="5cf2e4f6-5f98-449a-a380-946edc1521f1" pod="openshift-marketplace/redhat-operators-nghjp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-nghjp\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.050170 4826 status_manager.go:851] "Failed to get status for pod" podUID="095ed56c-d5dd-468f-85b2-f0bf23c2370d" pod="openshift-marketplace/certified-operators-wpf7p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-wpf7p\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.050459 4826 status_manager.go:851] "Failed to get status for pod" podUID="2afad1ec-bc9f-48e1-9e8a-399fbde8bc28" pod="openshift-marketplace/redhat-marketplace-b6k2p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-b6k2p\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.050652 4826 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.050837 4826 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.051065 4826 status_manager.go:851] "Failed to get status for pod" podUID="522ba915-3cf7-4e84-8ada-eae39676ac2b" pod="openshift-marketplace/community-operators-lh7qz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-lh7qz\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.051266 4826 status_manager.go:851] "Failed to get status for pod" podUID="f8a24898-167c-483a-9d54-7412fb063199" pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-cgwgw\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.241433 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-user-template-provider-selection\") pod \"f8a24898-167c-483a-9d54-7412fb063199\" (UID: \"f8a24898-167c-483a-9d54-7412fb063199\") " Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.241505 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4j8n2\" (UniqueName: \"kubernetes.io/projected/f8a24898-167c-483a-9d54-7412fb063199-kube-api-access-4j8n2\") pod \"f8a24898-167c-483a-9d54-7412fb063199\" (UID: \"f8a24898-167c-483a-9d54-7412fb063199\") " Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.241534 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-user-template-error\") pod \"f8a24898-167c-483a-9d54-7412fb063199\" (UID: \"f8a24898-167c-483a-9d54-7412fb063199\") " Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.241561 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-system-router-certs\") pod \"f8a24898-167c-483a-9d54-7412fb063199\" (UID: \"f8a24898-167c-483a-9d54-7412fb063199\") " Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.241592 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-system-service-ca\") pod \"f8a24898-167c-483a-9d54-7412fb063199\" (UID: \"f8a24898-167c-483a-9d54-7412fb063199\") " Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.241631 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-system-session\") pod \"f8a24898-167c-483a-9d54-7412fb063199\" (UID: \"f8a24898-167c-483a-9d54-7412fb063199\") " Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.241670 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f8a24898-167c-483a-9d54-7412fb063199-audit-policies\") pod \"f8a24898-167c-483a-9d54-7412fb063199\" (UID: \"f8a24898-167c-483a-9d54-7412fb063199\") " Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.241726 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-user-idp-0-file-data\") pod \"f8a24898-167c-483a-9d54-7412fb063199\" (UID: \"f8a24898-167c-483a-9d54-7412fb063199\") " Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.241752 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-system-trusted-ca-bundle\") pod \"f8a24898-167c-483a-9d54-7412fb063199\" (UID: \"f8a24898-167c-483a-9d54-7412fb063199\") " Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.241787 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-user-template-login\") pod 
\"f8a24898-167c-483a-9d54-7412fb063199\" (UID: \"f8a24898-167c-483a-9d54-7412fb063199\") " Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.241826 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-system-serving-cert\") pod \"f8a24898-167c-483a-9d54-7412fb063199\" (UID: \"f8a24898-167c-483a-9d54-7412fb063199\") " Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.241857 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-system-cliconfig\") pod \"f8a24898-167c-483a-9d54-7412fb063199\" (UID: \"f8a24898-167c-483a-9d54-7412fb063199\") " Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.241905 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-system-ocp-branding-template\") pod \"f8a24898-167c-483a-9d54-7412fb063199\" (UID: \"f8a24898-167c-483a-9d54-7412fb063199\") " Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.241947 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f8a24898-167c-483a-9d54-7412fb063199-audit-dir\") pod \"f8a24898-167c-483a-9d54-7412fb063199\" (UID: \"f8a24898-167c-483a-9d54-7412fb063199\") " Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.242318 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8a24898-167c-483a-9d54-7412fb063199-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f8a24898-167c-483a-9d54-7412fb063199" (UID: "f8a24898-167c-483a-9d54-7412fb063199"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.242550 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "f8a24898-167c-483a-9d54-7412fb063199" (UID: "f8a24898-167c-483a-9d54-7412fb063199"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.242541 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "f8a24898-167c-483a-9d54-7412fb063199" (UID: "f8a24898-167c-483a-9d54-7412fb063199"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.243192 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "f8a24898-167c-483a-9d54-7412fb063199" (UID: "f8a24898-167c-483a-9d54-7412fb063199"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.243257 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8a24898-167c-483a-9d54-7412fb063199-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f8a24898-167c-483a-9d54-7412fb063199" (UID: "f8a24898-167c-483a-9d54-7412fb063199"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.260023 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "f8a24898-167c-483a-9d54-7412fb063199" (UID: "f8a24898-167c-483a-9d54-7412fb063199"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.260149 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8a24898-167c-483a-9d54-7412fb063199-kube-api-access-4j8n2" (OuterVolumeSpecName: "kube-api-access-4j8n2") pod "f8a24898-167c-483a-9d54-7412fb063199" (UID: "f8a24898-167c-483a-9d54-7412fb063199"). InnerVolumeSpecName "kube-api-access-4j8n2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.260356 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "f8a24898-167c-483a-9d54-7412fb063199" (UID: "f8a24898-167c-483a-9d54-7412fb063199"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.261333 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "f8a24898-167c-483a-9d54-7412fb063199" (UID: "f8a24898-167c-483a-9d54-7412fb063199"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.261381 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "f8a24898-167c-483a-9d54-7412fb063199" (UID: "f8a24898-167c-483a-9d54-7412fb063199"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.261605 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "f8a24898-167c-483a-9d54-7412fb063199" (UID: "f8a24898-167c-483a-9d54-7412fb063199"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.261848 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "f8a24898-167c-483a-9d54-7412fb063199" (UID: "f8a24898-167c-483a-9d54-7412fb063199"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.262036 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "f8a24898-167c-483a-9d54-7412fb063199" (UID: "f8a24898-167c-483a-9d54-7412fb063199"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.263892 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "f8a24898-167c-483a-9d54-7412fb063199" (UID: "f8a24898-167c-483a-9d54-7412fb063199"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.343591 4826 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.343642 4826 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.343654 4826 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.343663 4826 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.343675 4826 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.343684 4826 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.343694 4826 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/f8a24898-167c-483a-9d54-7412fb063199-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.343706 4826 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.343717 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4j8n2\" (UniqueName: \"kubernetes.io/projected/f8a24898-167c-483a-9d54-7412fb063199-kube-api-access-4j8n2\") on node \"crc\" DevicePath \"\"" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.343727 4826 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.343737 4826 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.343745 4826 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.343755 4826 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f8a24898-167c-483a-9d54-7412fb063199-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.343763 4826 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f8a24898-167c-483a-9d54-7412fb063199-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.815538 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.815600 4826 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="5fa95fe2bb0adc0053648f26282e011bad0ea1327773688c954b8ffbf222d6e5" exitCode=1 Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.815718 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"5fa95fe2bb0adc0053648f26282e011bad0ea1327773688c954b8ffbf222d6e5"} Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.816075 4826 scope.go:117] "RemoveContainer" containerID="5fa95fe2bb0adc0053648f26282e011bad0ea1327773688c954b8ffbf222d6e5" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.817645 4826 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial 
tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.818688 4826 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.819522 4826 status_manager.go:851] "Failed to get status for pod" podUID="522ba915-3cf7-4e84-8ada-eae39676ac2b" pod="openshift-marketplace/community-operators-lh7qz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-lh7qz\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.819793 4826 status_manager.go:851] "Failed to get status for pod" podUID="f8a24898-167c-483a-9d54-7412fb063199" pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-cgwgw\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.820068 4826 status_manager.go:851] "Failed to get status for pod" podUID="c10690e0-2f15-4102-93fc-caffd46cd9cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.820328 4826 status_manager.go:851] "Failed to get status for pod" podUID="5cf2e4f6-5f98-449a-a380-946edc1521f1" pod="openshift-marketplace/redhat-operators-nghjp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-nghjp\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.820674 4826 status_manager.go:851] "Failed to get status for pod" podUID="095ed56c-d5dd-468f-85b2-f0bf23c2370d" pod="openshift-marketplace/certified-operators-wpf7p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-wpf7p\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.820821 4826 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="0d828ef8653aa470cd16517fe41e401087b741f0a23c49160f1730f1de433df7" exitCode=0 Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.820884 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"0d828ef8653aa470cd16517fe41e401087b741f0a23c49160f1730f1de433df7"} Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.821091 4826 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="63a72bdc-ae2c-4ce3-bad4-877f01e2b370" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.821115 4826 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="63a72bdc-ae2c-4ce3-bad4-877f01e2b370" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.821279 4826 status_manager.go:851] "Failed to get status for pod" 
podUID="2afad1ec-bc9f-48e1-9e8a-399fbde8bc28" pod="openshift-marketplace/redhat-marketplace-b6k2p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-b6k2p\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:19 crc kubenswrapper[4826]: E0131 07:40:19.821365 4826 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.821600 4826 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.822361 4826 status_manager.go:851] "Failed to get status for pod" podUID="5cf2e4f6-5f98-449a-a380-946edc1521f1" pod="openshift-marketplace/redhat-operators-nghjp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-nghjp\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.822927 4826 status_manager.go:851] "Failed to get status for pod" podUID="095ed56c-d5dd-468f-85b2-f0bf23c2370d" pod="openshift-marketplace/certified-operators-wpf7p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-wpf7p\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.823375 4826 status_manager.go:851] "Failed to get status for pod" podUID="2afad1ec-bc9f-48e1-9e8a-399fbde8bc28" pod="openshift-marketplace/redhat-marketplace-b6k2p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-b6k2p\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.823605 4826 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.823773 4826 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.824021 4826 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.824226 4826 status_manager.go:851] "Failed to get status 
for pod" podUID="522ba915-3cf7-4e84-8ada-eae39676ac2b" pod="openshift-marketplace/community-operators-lh7qz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-lh7qz\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.824442 4826 status_manager.go:851] "Failed to get status for pod" podUID="f8a24898-167c-483a-9d54-7412fb063199" pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-cgwgw\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.824637 4826 status_manager.go:851] "Failed to get status for pod" podUID="c10690e0-2f15-4102-93fc-caffd46cd9cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.828423 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" event={"ID":"f8a24898-167c-483a-9d54-7412fb063199","Type":"ContainerDied","Data":"49bbbfb4b01b2d3002a038a885b7f3b025d9aa3f892994461cdb72eee022dce0"} Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.828461 4826 scope.go:117] "RemoveContainer" containerID="9f4d67605384b48f50722e0a3d0a519057f33e157fc510fef4f626b620ca440e" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.828540 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.829381 4826 status_manager.go:851] "Failed to get status for pod" podUID="2afad1ec-bc9f-48e1-9e8a-399fbde8bc28" pod="openshift-marketplace/redhat-marketplace-b6k2p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-b6k2p\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.829575 4826 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.829862 4826 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.830081 4826 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.830260 4826 status_manager.go:851] "Failed to get status for pod" 
podUID="522ba915-3cf7-4e84-8ada-eae39676ac2b" pod="openshift-marketplace/community-operators-lh7qz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-lh7qz\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.830399 4826 status_manager.go:851] "Failed to get status for pod" podUID="f8a24898-167c-483a-9d54-7412fb063199" pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-cgwgw\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.830572 4826 status_manager.go:851] "Failed to get status for pod" podUID="c10690e0-2f15-4102-93fc-caffd46cd9cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.830746 4826 status_manager.go:851] "Failed to get status for pod" podUID="5cf2e4f6-5f98-449a-a380-946edc1521f1" pod="openshift-marketplace/redhat-operators-nghjp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-nghjp\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.830925 4826 status_manager.go:851] "Failed to get status for pod" podUID="095ed56c-d5dd-468f-85b2-f0bf23c2370d" pod="openshift-marketplace/certified-operators-wpf7p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-wpf7p\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.868231 4826 status_manager.go:851] "Failed to get status for pod" podUID="c10690e0-2f15-4102-93fc-caffd46cd9cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.868775 4826 status_manager.go:851] "Failed to get status for pod" podUID="5cf2e4f6-5f98-449a-a380-946edc1521f1" pod="openshift-marketplace/redhat-operators-nghjp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-nghjp\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.869160 4826 status_manager.go:851] "Failed to get status for pod" podUID="095ed56c-d5dd-468f-85b2-f0bf23c2370d" pod="openshift-marketplace/certified-operators-wpf7p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-wpf7p\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.869455 4826 status_manager.go:851] "Failed to get status for pod" podUID="2afad1ec-bc9f-48e1-9e8a-399fbde8bc28" pod="openshift-marketplace/redhat-marketplace-b6k2p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-b6k2p\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.869838 4826 status_manager.go:851] "Failed to get status for pod" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.870134 4826 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.870424 4826 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.870763 4826 status_manager.go:851] "Failed to get status for pod" podUID="522ba915-3cf7-4e84-8ada-eae39676ac2b" pod="openshift-marketplace/community-operators-lh7qz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-lh7qz\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:19 crc kubenswrapper[4826]: I0131 07:40:19.871038 4826 status_manager.go:851] "Failed to get status for pod" podUID="f8a24898-167c-483a-9d54-7412fb063199" pod="openshift-authentication/oauth-openshift-558db77b4-cgwgw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-cgwgw\": dial tcp 38.102.83.13:6443: connect: connection refused" Jan 31 07:40:20 crc kubenswrapper[4826]: I0131 07:40:20.730799 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wpf7p" Jan 31 07:40:20 crc kubenswrapper[4826]: I0131 07:40:20.841825 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 31 07:40:20 crc kubenswrapper[4826]: I0131 07:40:20.841922 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"87ae81130937b3e40198c652d2df4007f99e04df389490c6753b739e032c7cde"} Jan 31 07:40:20 crc kubenswrapper[4826]: I0131 07:40:20.845206 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"015e130da716003376048dad4d8289756209f31899f8fc981e73896343bd385e"} Jan 31 07:40:20 crc kubenswrapper[4826]: I0131 07:40:20.845257 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e67ddc9722bf0607540eafa97a0a005442e18d5ba7f3245aefa2896101ec5ea1"} Jan 31 07:40:21 crc kubenswrapper[4826]: I0131 07:40:21.860149 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"43653fe98ec1a39cffe02545e626086c28e0ce96cac0f1ac2d287db6541cf24f"} Jan 31 07:40:21 crc kubenswrapper[4826]: I0131 07:40:21.860536 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"81f4e433f88cda8bf1fa2f041c03c993879b198dbaf07c4888068aeec734170a"} Jan 31 07:40:22 crc kubenswrapper[4826]: I0131 07:40:22.868575 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"450f074675fbd32e3d75968c286ce14e5135b2186a96dd23a227e91b6ab8ed3a"} Jan 31 07:40:22 crc kubenswrapper[4826]: I0131 07:40:22.868827 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:40:22 crc kubenswrapper[4826]: I0131 07:40:22.868942 4826 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="63a72bdc-ae2c-4ce3-bad4-877f01e2b370" Jan 31 07:40:22 crc kubenswrapper[4826]: I0131 07:40:22.868963 4826 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="63a72bdc-ae2c-4ce3-bad4-877f01e2b370" Jan 31 07:40:23 crc kubenswrapper[4826]: I0131 07:40:23.746595 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 07:40:23 crc kubenswrapper[4826]: I0131 07:40:23.752641 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 07:40:23 crc kubenswrapper[4826]: I0131 07:40:23.873473 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 07:40:26 crc kubenswrapper[4826]: I0131 07:40:26.838333 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:40:26 crc kubenswrapper[4826]: I0131 07:40:26.840406 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:40:26 crc kubenswrapper[4826]: I0131 07:40:26.851888 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:40:27 crc kubenswrapper[4826]: I0131 07:40:27.878701 4826 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:40:28 crc kubenswrapper[4826]: I0131 07:40:28.856470 4826 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="aed11cdf-b25d-4c39-8ce2-db0ac848c514" Jan 31 07:40:28 crc kubenswrapper[4826]: I0131 07:40:28.904845 4826 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="63a72bdc-ae2c-4ce3-bad4-877f01e2b370" Jan 31 07:40:28 crc kubenswrapper[4826]: I0131 07:40:28.904889 4826 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="63a72bdc-ae2c-4ce3-bad4-877f01e2b370" Jan 31 07:40:28 crc kubenswrapper[4826]: I0131 07:40:28.909340 4826 status_manager.go:861] "Pod was deleted and then 
recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="aed11cdf-b25d-4c39-8ce2-db0ac848c514" Jan 31 07:40:28 crc kubenswrapper[4826]: I0131 07:40:28.912487 4826 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://e67ddc9722bf0607540eafa97a0a005442e18d5ba7f3245aefa2896101ec5ea1" Jan 31 07:40:28 crc kubenswrapper[4826]: I0131 07:40:28.912663 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:40:29 crc kubenswrapper[4826]: I0131 07:40:29.912331 4826 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="63a72bdc-ae2c-4ce3-bad4-877f01e2b370" Jan 31 07:40:29 crc kubenswrapper[4826]: I0131 07:40:29.912374 4826 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="63a72bdc-ae2c-4ce3-bad4-877f01e2b370" Jan 31 07:40:29 crc kubenswrapper[4826]: I0131 07:40:29.916261 4826 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="aed11cdf-b25d-4c39-8ce2-db0ac848c514" Jan 31 07:40:34 crc kubenswrapper[4826]: I0131 07:40:34.395295 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 07:40:37 crc kubenswrapper[4826]: I0131 07:40:37.032749 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 31 07:40:37 crc kubenswrapper[4826]: I0131 07:40:37.503478 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 31 07:40:37 crc kubenswrapper[4826]: I0131 07:40:37.514167 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 31 07:40:37 crc kubenswrapper[4826]: I0131 07:40:37.571173 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 31 07:40:37 crc kubenswrapper[4826]: I0131 07:40:37.796580 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 31 07:40:37 crc kubenswrapper[4826]: I0131 07:40:37.844141 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 31 07:40:37 crc kubenswrapper[4826]: I0131 07:40:37.929058 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 31 07:40:38 crc kubenswrapper[4826]: I0131 07:40:38.346928 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 31 07:40:38 crc kubenswrapper[4826]: I0131 07:40:38.371872 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 31 07:40:38 crc kubenswrapper[4826]: I0131 07:40:38.551387 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 31 07:40:38 crc kubenswrapper[4826]: I0131 07:40:38.607802 4826 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 31 07:40:38 crc kubenswrapper[4826]: I0131 07:40:38.629423 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 31 07:40:38 crc kubenswrapper[4826]: I0131 07:40:38.722543 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 31 07:40:38 crc kubenswrapper[4826]: I0131 07:40:38.781046 4826 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 31 07:40:38 crc kubenswrapper[4826]: I0131 07:40:38.805780 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 31 07:40:39 crc kubenswrapper[4826]: I0131 07:40:39.018410 4826 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 31 07:40:39 crc kubenswrapper[4826]: I0131 07:40:39.021275 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=36.021246787 podStartE2EDuration="36.021246787s" podCreationTimestamp="2026-01-31 07:40:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:40:26.909401465 +0000 UTC m=+258.763287824" watchObservedRunningTime="2026-01-31 07:40:39.021246787 +0000 UTC m=+270.875133146" Jan 31 07:40:39 crc kubenswrapper[4826]: I0131 07:40:39.022821 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wpf7p" podStartSLOduration=35.280681031 podStartE2EDuration="39.022810163s" podCreationTimestamp="2026-01-31 07:40:00 +0000 UTC" firstStartedPulling="2026-01-31 07:40:01.629922099 +0000 UTC m=+233.483808458" lastFinishedPulling="2026-01-31 07:40:05.372051231 +0000 UTC m=+237.225937590" observedRunningTime="2026-01-31 07:40:26.831096258 +0000 UTC m=+258.684982677" watchObservedRunningTime="2026-01-31 07:40:39.022810163 +0000 UTC m=+270.876696522" Jan 31 07:40:39 crc kubenswrapper[4826]: I0131 07:40:39.025703 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-cgwgw","openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 07:40:39 crc kubenswrapper[4826]: I0131 07:40:39.025767 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 07:40:39 crc kubenswrapper[4826]: I0131 07:40:39.034304 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 07:40:39 crc kubenswrapper[4826]: I0131 07:40:39.052219 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=12.052192217 podStartE2EDuration="12.052192217s" podCreationTimestamp="2026-01-31 07:40:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:40:39.051465425 +0000 UTC m=+270.905351824" watchObservedRunningTime="2026-01-31 07:40:39.052192217 +0000 UTC m=+270.906078576" Jan 31 07:40:39 crc kubenswrapper[4826]: I0131 07:40:39.410928 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 31 07:40:39 crc 
kubenswrapper[4826]: I0131 07:40:39.443699 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 07:40:39 crc kubenswrapper[4826]: I0131 07:40:39.495134 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 31 07:40:39 crc kubenswrapper[4826]: I0131 07:40:39.618736 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 31 07:40:39 crc kubenswrapper[4826]: I0131 07:40:39.802917 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 31 07:40:39 crc kubenswrapper[4826]: I0131 07:40:39.913790 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 31 07:40:39 crc kubenswrapper[4826]: I0131 07:40:39.993327 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 31 07:40:40 crc kubenswrapper[4826]: I0131 07:40:40.044635 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 31 07:40:40 crc kubenswrapper[4826]: I0131 07:40:40.065280 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 31 07:40:40 crc kubenswrapper[4826]: I0131 07:40:40.160667 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 31 07:40:40 crc kubenswrapper[4826]: I0131 07:40:40.183088 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 31 07:40:40 crc kubenswrapper[4826]: I0131 07:40:40.207584 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 31 07:40:40 crc kubenswrapper[4826]: I0131 07:40:40.231851 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 31 07:40:40 crc kubenswrapper[4826]: I0131 07:40:40.357194 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 31 07:40:40 crc kubenswrapper[4826]: I0131 07:40:40.573007 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 31 07:40:40 crc kubenswrapper[4826]: I0131 07:40:40.577691 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 31 07:40:40 crc kubenswrapper[4826]: I0131 07:40:40.657825 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 31 07:40:40 crc kubenswrapper[4826]: I0131 07:40:40.701547 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 31 07:40:40 crc kubenswrapper[4826]: I0131 07:40:40.702678 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 31 07:40:40 crc kubenswrapper[4826]: I0131 07:40:40.744425 4826 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 31 07:40:40 crc kubenswrapper[4826]: I0131 07:40:40.773189 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 31 07:40:40 crc kubenswrapper[4826]: I0131 07:40:40.775885 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 31 07:40:40 crc kubenswrapper[4826]: I0131 07:40:40.787305 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 31 07:40:40 crc kubenswrapper[4826]: I0131 07:40:40.816960 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8a24898-167c-483a-9d54-7412fb063199" path="/var/lib/kubelet/pods/f8a24898-167c-483a-9d54-7412fb063199/volumes" Jan 31 07:40:40 crc kubenswrapper[4826]: I0131 07:40:40.862702 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 31 07:40:40 crc kubenswrapper[4826]: I0131 07:40:40.872447 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 31 07:40:40 crc kubenswrapper[4826]: I0131 07:40:40.910610 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 31 07:40:40 crc kubenswrapper[4826]: I0131 07:40:40.911508 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 31 07:40:40 crc kubenswrapper[4826]: I0131 07:40:40.970571 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 31 07:40:41 crc kubenswrapper[4826]: I0131 07:40:41.012918 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 31 07:40:41 crc kubenswrapper[4826]: I0131 07:40:41.034792 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 31 07:40:41 crc kubenswrapper[4826]: I0131 07:40:41.047727 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 31 07:40:41 crc kubenswrapper[4826]: I0131 07:40:41.065493 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 31 07:40:41 crc kubenswrapper[4826]: I0131 07:40:41.070932 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 31 07:40:41 crc kubenswrapper[4826]: I0131 07:40:41.245880 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 31 07:40:41 crc kubenswrapper[4826]: I0131 07:40:41.261506 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 31 07:40:41 crc kubenswrapper[4826]: I0131 07:40:41.328597 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 31 07:40:41 crc kubenswrapper[4826]: I0131 07:40:41.343194 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 31 07:40:41 crc kubenswrapper[4826]: I0131 07:40:41.391192 4826 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 31 07:40:41 crc kubenswrapper[4826]: I0131 07:40:41.402704 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 31 07:40:41 crc kubenswrapper[4826]: I0131 07:40:41.448182 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 31 07:40:41 crc kubenswrapper[4826]: I0131 07:40:41.500476 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 31 07:40:41 crc kubenswrapper[4826]: I0131 07:40:41.520500 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 31 07:40:41 crc kubenswrapper[4826]: I0131 07:40:41.646418 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 31 07:40:41 crc kubenswrapper[4826]: I0131 07:40:41.646929 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 31 07:40:41 crc kubenswrapper[4826]: I0131 07:40:41.653276 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 31 07:40:41 crc kubenswrapper[4826]: I0131 07:40:41.685711 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 07:40:41 crc kubenswrapper[4826]: I0131 07:40:41.732684 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 07:40:41 crc kubenswrapper[4826]: I0131 07:40:41.751110 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 31 07:40:41 crc kubenswrapper[4826]: I0131 07:40:41.820670 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 31 07:40:41 crc kubenswrapper[4826]: I0131 07:40:41.911218 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 31 07:40:41 crc kubenswrapper[4826]: I0131 07:40:41.988364 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 31 07:40:42 crc kubenswrapper[4826]: I0131 07:40:42.059361 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 31 07:40:42 crc kubenswrapper[4826]: I0131 07:40:42.072338 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 31 07:40:42 crc kubenswrapper[4826]: I0131 07:40:42.116270 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 31 07:40:42 crc kubenswrapper[4826]: I0131 07:40:42.204457 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 31 07:40:42 crc kubenswrapper[4826]: I0131 07:40:42.266646 4826 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 31 07:40:42 crc kubenswrapper[4826]: I0131 07:40:42.267422 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 31 07:40:42 crc kubenswrapper[4826]: I0131 07:40:42.286172 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 31 07:40:42 crc kubenswrapper[4826]: I0131 07:40:42.353328 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 31 07:40:42 crc kubenswrapper[4826]: I0131 07:40:42.414792 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 31 07:40:42 crc kubenswrapper[4826]: I0131 07:40:42.671239 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 31 07:40:42 crc kubenswrapper[4826]: I0131 07:40:42.743374 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 31 07:40:42 crc kubenswrapper[4826]: I0131 07:40:42.827483 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 31 07:40:42 crc kubenswrapper[4826]: I0131 07:40:42.868332 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 31 07:40:42 crc kubenswrapper[4826]: I0131 07:40:42.898869 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 31 07:40:43 crc kubenswrapper[4826]: I0131 07:40:43.079440 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 31 07:40:43 crc kubenswrapper[4826]: I0131 07:40:43.113329 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 31 07:40:43 crc kubenswrapper[4826]: I0131 07:40:43.146778 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 31 07:40:43 crc kubenswrapper[4826]: I0131 07:40:43.155076 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 31 07:40:43 crc kubenswrapper[4826]: I0131 07:40:43.279224 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 31 07:40:43 crc kubenswrapper[4826]: I0131 07:40:43.505426 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 31 07:40:43 crc kubenswrapper[4826]: I0131 07:40:43.526747 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 31 07:40:43 crc kubenswrapper[4826]: I0131 07:40:43.629462 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 31 07:40:43 crc kubenswrapper[4826]: I0131 07:40:43.630119 4826 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 31 07:40:43 crc kubenswrapper[4826]: I0131 07:40:43.661113 
4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 31 07:40:43 crc kubenswrapper[4826]: I0131 07:40:43.686431 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 31 07:40:43 crc kubenswrapper[4826]: I0131 07:40:43.719529 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 31 07:40:43 crc kubenswrapper[4826]: I0131 07:40:43.752794 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 07:40:43 crc kubenswrapper[4826]: I0131 07:40:43.896597 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 31 07:40:43 crc kubenswrapper[4826]: I0131 07:40:43.921623 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 31 07:40:43 crc kubenswrapper[4826]: I0131 07:40:43.950380 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 31 07:40:44 crc kubenswrapper[4826]: I0131 07:40:44.028351 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 31 07:40:44 crc kubenswrapper[4826]: I0131 07:40:44.139843 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 31 07:40:44 crc kubenswrapper[4826]: I0131 07:40:44.207676 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 07:40:44 crc kubenswrapper[4826]: I0131 07:40:44.245646 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 31 07:40:44 crc kubenswrapper[4826]: I0131 07:40:44.324159 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 31 07:40:44 crc kubenswrapper[4826]: I0131 07:40:44.340680 4826 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 31 07:40:44 crc kubenswrapper[4826]: I0131 07:40:44.342027 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 31 07:40:44 crc kubenswrapper[4826]: I0131 07:40:44.394360 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 31 07:40:44 crc kubenswrapper[4826]: I0131 07:40:44.410292 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 31 07:40:44 crc kubenswrapper[4826]: I0131 07:40:44.455544 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 31 07:40:44 crc kubenswrapper[4826]: I0131 07:40:44.484804 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 31 07:40:44 crc kubenswrapper[4826]: I0131 07:40:44.497736 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 31 07:40:44 crc kubenswrapper[4826]: I0131 07:40:44.671350 
4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 31 07:40:44 crc kubenswrapper[4826]: I0131 07:40:44.681783 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 31 07:40:44 crc kubenswrapper[4826]: I0131 07:40:44.810409 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 31 07:40:44 crc kubenswrapper[4826]: I0131 07:40:44.811875 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 31 07:40:44 crc kubenswrapper[4826]: I0131 07:40:44.952123 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 31 07:40:44 crc kubenswrapper[4826]: I0131 07:40:44.966588 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 31 07:40:45 crc kubenswrapper[4826]: I0131 07:40:45.135492 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 31 07:40:45 crc kubenswrapper[4826]: I0131 07:40:45.147170 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 31 07:40:45 crc kubenswrapper[4826]: I0131 07:40:45.195134 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 07:40:45 crc kubenswrapper[4826]: I0131 07:40:45.221075 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 31 07:40:45 crc kubenswrapper[4826]: I0131 07:40:45.237802 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 31 07:40:45 crc kubenswrapper[4826]: I0131 07:40:45.242690 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 31 07:40:45 crc kubenswrapper[4826]: I0131 07:40:45.288505 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 31 07:40:45 crc kubenswrapper[4826]: I0131 07:40:45.500021 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 31 07:40:45 crc kubenswrapper[4826]: I0131 07:40:45.598742 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 31 07:40:45 crc kubenswrapper[4826]: I0131 07:40:45.790229 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 07:40:45 crc kubenswrapper[4826]: I0131 07:40:45.828742 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 31 07:40:45 crc kubenswrapper[4826]: I0131 07:40:45.880612 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 31 07:40:45 crc kubenswrapper[4826]: I0131 07:40:45.894568 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 31 07:40:45 crc kubenswrapper[4826]: I0131 07:40:45.997207 4826 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 31 07:40:46 crc kubenswrapper[4826]: I0131 07:40:46.092081 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 31 07:40:46 crc kubenswrapper[4826]: I0131 07:40:46.174102 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 31 07:40:46 crc kubenswrapper[4826]: I0131 07:40:46.174180 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 31 07:40:46 crc kubenswrapper[4826]: I0131 07:40:46.184734 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 31 07:40:46 crc kubenswrapper[4826]: I0131 07:40:46.281048 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 31 07:40:46 crc kubenswrapper[4826]: I0131 07:40:46.313999 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 31 07:40:46 crc kubenswrapper[4826]: I0131 07:40:46.480267 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 31 07:40:46 crc kubenswrapper[4826]: I0131 07:40:46.481825 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 31 07:40:46 crc kubenswrapper[4826]: I0131 07:40:46.506989 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 31 07:40:46 crc kubenswrapper[4826]: I0131 07:40:46.535816 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 31 07:40:46 crc kubenswrapper[4826]: I0131 07:40:46.559161 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 31 07:40:46 crc kubenswrapper[4826]: I0131 07:40:46.631097 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 31 07:40:46 crc kubenswrapper[4826]: I0131 07:40:46.795167 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 31 07:40:46 crc kubenswrapper[4826]: I0131 07:40:46.804402 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 31 07:40:46 crc kubenswrapper[4826]: I0131 07:40:46.810699 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 31 07:40:46 crc kubenswrapper[4826]: I0131 07:40:46.877773 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 31 07:40:46 crc kubenswrapper[4826]: I0131 07:40:46.899263 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 31 07:40:46 crc kubenswrapper[4826]: I0131 07:40:46.906150 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 31 07:40:46 crc kubenswrapper[4826]: I0131 07:40:46.908927 4826 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 31 07:40:46 crc kubenswrapper[4826]: I0131 07:40:46.939238 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 31 07:40:47 crc kubenswrapper[4826]: I0131 07:40:47.052105 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 31 07:40:47 crc kubenswrapper[4826]: I0131 07:40:47.132530 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 31 07:40:47 crc kubenswrapper[4826]: I0131 07:40:47.145771 4826 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 31 07:40:47 crc kubenswrapper[4826]: I0131 07:40:47.187185 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 31 07:40:47 crc kubenswrapper[4826]: I0131 07:40:47.200614 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 31 07:40:47 crc kubenswrapper[4826]: I0131 07:40:47.205668 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 31 07:40:47 crc kubenswrapper[4826]: I0131 07:40:47.232211 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 31 07:40:47 crc kubenswrapper[4826]: I0131 07:40:47.300507 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 31 07:40:47 crc kubenswrapper[4826]: I0131 07:40:47.428763 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 31 07:40:47 crc kubenswrapper[4826]: I0131 07:40:47.530429 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 31 07:40:47 crc kubenswrapper[4826]: I0131 07:40:47.530824 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 31 07:40:47 crc kubenswrapper[4826]: I0131 07:40:47.560404 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 31 07:40:47 crc kubenswrapper[4826]: I0131 07:40:47.570300 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 31 07:40:47 crc kubenswrapper[4826]: I0131 07:40:47.627461 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 31 07:40:47 crc kubenswrapper[4826]: I0131 07:40:47.659586 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 31 07:40:47 crc kubenswrapper[4826]: I0131 07:40:47.687093 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 31 07:40:47 crc kubenswrapper[4826]: I0131 07:40:47.699827 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 31 07:40:47 crc kubenswrapper[4826]: I0131 07:40:47.711475 4826 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-oauth-apiserver"/"serving-cert" Jan 31 07:40:47 crc kubenswrapper[4826]: I0131 07:40:47.750618 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 31 07:40:47 crc kubenswrapper[4826]: I0131 07:40:47.753262 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 31 07:40:47 crc kubenswrapper[4826]: I0131 07:40:47.763891 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 31 07:40:47 crc kubenswrapper[4826]: I0131 07:40:47.961454 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 31 07:40:48 crc kubenswrapper[4826]: I0131 07:40:48.022955 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 31 07:40:48 crc kubenswrapper[4826]: I0131 07:40:48.052382 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 31 07:40:48 crc kubenswrapper[4826]: I0131 07:40:48.072599 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 31 07:40:48 crc kubenswrapper[4826]: I0131 07:40:48.111455 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 31 07:40:48 crc kubenswrapper[4826]: I0131 07:40:48.225052 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 31 07:40:48 crc kubenswrapper[4826]: I0131 07:40:48.236757 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 31 07:40:48 crc kubenswrapper[4826]: I0131 07:40:48.298529 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 31 07:40:48 crc kubenswrapper[4826]: I0131 07:40:48.452048 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 31 07:40:48 crc kubenswrapper[4826]: I0131 07:40:48.636065 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 31 07:40:48 crc kubenswrapper[4826]: I0131 07:40:48.760411 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 31 07:40:48 crc kubenswrapper[4826]: I0131 07:40:48.761096 4826 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 31 07:40:48 crc kubenswrapper[4826]: I0131 07:40:48.855135 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 31 07:40:48 crc kubenswrapper[4826]: I0131 07:40:48.915364 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 31 07:40:48 crc kubenswrapper[4826]: I0131 07:40:48.931394 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 31 07:40:48 crc kubenswrapper[4826]: I0131 07:40:48.988708 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 31 07:40:49 crc kubenswrapper[4826]: I0131 
07:40:49.019333 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 31 07:40:49 crc kubenswrapper[4826]: I0131 07:40:49.031021 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 07:40:49 crc kubenswrapper[4826]: I0131 07:40:49.069726 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 31 07:40:49 crc kubenswrapper[4826]: I0131 07:40:49.152679 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 31 07:40:49 crc kubenswrapper[4826]: I0131 07:40:49.160601 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 31 07:40:49 crc kubenswrapper[4826]: I0131 07:40:49.162155 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 31 07:40:49 crc kubenswrapper[4826]: I0131 07:40:49.284118 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 31 07:40:49 crc kubenswrapper[4826]: I0131 07:40:49.404032 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 31 07:40:49 crc kubenswrapper[4826]: I0131 07:40:49.424838 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 31 07:40:49 crc kubenswrapper[4826]: I0131 07:40:49.499452 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 31 07:40:49 crc kubenswrapper[4826]: I0131 07:40:49.561444 4826 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 31 07:40:49 crc kubenswrapper[4826]: I0131 07:40:49.562030 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://6c255151f7c302aaa5ba21f8490f0ed2f28e5388c2a332890bf1796341b6ebe3" gracePeriod=5 Jan 31 07:40:49 crc kubenswrapper[4826]: I0131 07:40:49.576845 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 31 07:40:49 crc kubenswrapper[4826]: I0131 07:40:49.618888 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 31 07:40:49 crc kubenswrapper[4826]: I0131 07:40:49.654301 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 31 07:40:49 crc kubenswrapper[4826]: I0131 07:40:49.658848 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 31 07:40:49 crc kubenswrapper[4826]: I0131 07:40:49.837066 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 31 07:40:49 crc kubenswrapper[4826]: I0131 07:40:49.857469 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 31 
07:40:50 crc kubenswrapper[4826]: I0131 07:40:50.040464 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 31 07:40:50 crc kubenswrapper[4826]: I0131 07:40:50.098473 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 31 07:40:50 crc kubenswrapper[4826]: I0131 07:40:50.312937 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 31 07:40:50 crc kubenswrapper[4826]: I0131 07:40:50.325185 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 31 07:40:50 crc kubenswrapper[4826]: I0131 07:40:50.325884 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 31 07:40:50 crc kubenswrapper[4826]: I0131 07:40:50.380595 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 31 07:40:50 crc kubenswrapper[4826]: I0131 07:40:50.405322 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 31 07:40:50 crc kubenswrapper[4826]: I0131 07:40:50.414988 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 31 07:40:50 crc kubenswrapper[4826]: I0131 07:40:50.691784 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 31 07:40:50 crc kubenswrapper[4826]: I0131 07:40:50.885891 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 31 07:40:50 crc kubenswrapper[4826]: I0131 07:40:50.887238 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 31 07:40:50 crc kubenswrapper[4826]: I0131 07:40:50.924276 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 31 07:40:51 crc kubenswrapper[4826]: I0131 07:40:51.010478 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 31 07:40:51 crc kubenswrapper[4826]: I0131 07:40:51.030774 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 31 07:40:51 crc kubenswrapper[4826]: I0131 07:40:51.100205 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 31 07:40:51 crc kubenswrapper[4826]: I0131 07:40:51.106070 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 31 07:40:51 crc kubenswrapper[4826]: I0131 07:40:51.303049 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 31 07:40:51 crc kubenswrapper[4826]: I0131 07:40:51.927994 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 31 07:40:52 crc kubenswrapper[4826]: I0131 07:40:52.133102 4826 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 31 07:40:52 crc kubenswrapper[4826]: I0131 07:40:52.249111 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 31 07:40:52 crc kubenswrapper[4826]: I0131 07:40:52.557644 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 31 07:40:52 crc kubenswrapper[4826]: I0131 07:40:52.641680 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 31 07:40:52 crc kubenswrapper[4826]: I0131 07:40:52.852280 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 31 07:40:55 crc kubenswrapper[4826]: I0131 07:40:55.150323 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 31 07:40:55 crc kubenswrapper[4826]: I0131 07:40:55.150405 4826 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="6c255151f7c302aaa5ba21f8490f0ed2f28e5388c2a332890bf1796341b6ebe3" exitCode=137 Jan 31 07:40:55 crc kubenswrapper[4826]: I0131 07:40:55.150462 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2a9b8373340b63ea16c0aeb3588bb250565275775131bf5da1fbc208afd6bce" Jan 31 07:40:55 crc kubenswrapper[4826]: I0131 07:40:55.166552 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 31 07:40:55 crc kubenswrapper[4826]: I0131 07:40:55.166638 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 07:40:55 crc kubenswrapper[4826]: I0131 07:40:55.227678 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 07:40:55 crc kubenswrapper[4826]: I0131 07:40:55.227728 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 07:40:55 crc kubenswrapper[4826]: I0131 07:40:55.227919 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:40:55 crc kubenswrapper[4826]: I0131 07:40:55.228477 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 07:40:55 crc kubenswrapper[4826]: I0131 07:40:55.228610 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:40:55 crc kubenswrapper[4826]: I0131 07:40:55.228648 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 07:40:55 crc kubenswrapper[4826]: I0131 07:40:55.228701 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 07:40:55 crc kubenswrapper[4826]: I0131 07:40:55.228877 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:40:55 crc kubenswrapper[4826]: I0131 07:40:55.228902 4826 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 31 07:40:55 crc kubenswrapper[4826]: I0131 07:40:55.228940 4826 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 31 07:40:55 crc kubenswrapper[4826]: I0131 07:40:55.228814 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:40:55 crc kubenswrapper[4826]: I0131 07:40:55.247527 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:40:55 crc kubenswrapper[4826]: I0131 07:40:55.329588 4826 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 31 07:40:55 crc kubenswrapper[4826]: I0131 07:40:55.329653 4826 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 31 07:40:55 crc kubenswrapper[4826]: I0131 07:40:55.329674 4826 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 31 07:40:56 crc kubenswrapper[4826]: I0131 07:40:56.155645 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 07:40:56 crc kubenswrapper[4826]: I0131 07:40:56.819372 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 31 07:40:56 crc kubenswrapper[4826]: I0131 07:40:56.819660 4826 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 31 07:40:56 crc kubenswrapper[4826]: I0131 07:40:56.829473 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 31 07:40:56 crc kubenswrapper[4826]: I0131 07:40:56.829533 4826 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="038b5cae-e7c9-4c91-930e-d0c8d5d56b9d" Jan 31 07:40:56 crc kubenswrapper[4826]: I0131 07:40:56.832936 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 31 07:40:56 crc kubenswrapper[4826]: I0131 07:40:56.833001 4826 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="038b5cae-e7c9-4c91-930e-d0c8d5d56b9d" Jan 31 07:41:07 crc kubenswrapper[4826]: I0131 07:41:07.121689 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 31 07:41:08 crc kubenswrapper[4826]: I0131 07:41:08.605916 4826 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 31 07:41:08 crc kubenswrapper[4826]: I0131 07:41:08.980145 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 31 07:41:10 crc kubenswrapper[4826]: I0131 07:41:10.514431 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 31 07:41:11 crc kubenswrapper[4826]: I0131 07:41:11.194859 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 31 07:41:11 crc kubenswrapper[4826]: I0131 07:41:11.437795 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 31 07:41:14 crc kubenswrapper[4826]: I0131 07:41:14.890293 
4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 31 07:41:16 crc kubenswrapper[4826]: I0131 07:41:16.091737 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.612154 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-85d79b69d6-xw7j2"] Jan 31 07:41:17 crc kubenswrapper[4826]: E0131 07:41:17.612949 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8a24898-167c-483a-9d54-7412fb063199" containerName="oauth-openshift" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.612979 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8a24898-167c-483a-9d54-7412fb063199" containerName="oauth-openshift" Jan 31 07:41:17 crc kubenswrapper[4826]: E0131 07:41:17.612997 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c10690e0-2f15-4102-93fc-caffd46cd9cc" containerName="installer" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.613004 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="c10690e0-2f15-4102-93fc-caffd46cd9cc" containerName="installer" Jan 31 07:41:17 crc kubenswrapper[4826]: E0131 07:41:17.613014 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.613021 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.613129 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="c10690e0-2f15-4102-93fc-caffd46cd9cc" containerName="installer" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.613141 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.613151 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8a24898-167c-483a-9d54-7412fb063199" containerName="oauth-openshift" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.613562 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-85d79b69d6-xw7j2" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.615340 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.615532 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.615561 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.617229 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.617289 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.638603 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.641068 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.641103 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.641240 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.641267 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.643558 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.643884 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.657735 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.662869 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-85d79b69d6-xw7j2"] Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.663437 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.670739 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.740531 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8-v4-0-config-system-service-ca\") pod \"oauth-openshift-85d79b69d6-xw7j2\" (UID: \"89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8\") " 
pod="openshift-authentication/oauth-openshift-85d79b69d6-xw7j2" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.741420 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-85d79b69d6-xw7j2\" (UID: \"89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8\") " pod="openshift-authentication/oauth-openshift-85d79b69d6-xw7j2" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.741505 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-85d79b69d6-xw7j2\" (UID: \"89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8\") " pod="openshift-authentication/oauth-openshift-85d79b69d6-xw7j2" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.741581 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-85d79b69d6-xw7j2\" (UID: \"89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8\") " pod="openshift-authentication/oauth-openshift-85d79b69d6-xw7j2" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.741612 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8-v4-0-config-system-session\") pod \"oauth-openshift-85d79b69d6-xw7j2\" (UID: \"89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8\") " pod="openshift-authentication/oauth-openshift-85d79b69d6-xw7j2" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.741633 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8-v4-0-config-user-template-error\") pod \"oauth-openshift-85d79b69d6-xw7j2\" (UID: \"89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8\") " pod="openshift-authentication/oauth-openshift-85d79b69d6-xw7j2" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.741662 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8-audit-policies\") pod \"oauth-openshift-85d79b69d6-xw7j2\" (UID: \"89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8\") " pod="openshift-authentication/oauth-openshift-85d79b69d6-xw7j2" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.741685 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8-v4-0-config-user-template-login\") pod \"oauth-openshift-85d79b69d6-xw7j2\" (UID: \"89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8\") " pod="openshift-authentication/oauth-openshift-85d79b69d6-xw7j2" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.741759 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5v9z\" (UniqueName: 
\"kubernetes.io/projected/89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8-kube-api-access-m5v9z\") pod \"oauth-openshift-85d79b69d6-xw7j2\" (UID: \"89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8\") " pod="openshift-authentication/oauth-openshift-85d79b69d6-xw7j2" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.741795 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-85d79b69d6-xw7j2\" (UID: \"89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8\") " pod="openshift-authentication/oauth-openshift-85d79b69d6-xw7j2" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.741823 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8-v4-0-config-system-router-certs\") pod \"oauth-openshift-85d79b69d6-xw7j2\" (UID: \"89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8\") " pod="openshift-authentication/oauth-openshift-85d79b69d6-xw7j2" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.741875 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-85d79b69d6-xw7j2\" (UID: \"89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8\") " pod="openshift-authentication/oauth-openshift-85d79b69d6-xw7j2" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.741917 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8-audit-dir\") pod \"oauth-openshift-85d79b69d6-xw7j2\" (UID: \"89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8\") " pod="openshift-authentication/oauth-openshift-85d79b69d6-xw7j2" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.741950 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-85d79b69d6-xw7j2\" (UID: \"89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8\") " pod="openshift-authentication/oauth-openshift-85d79b69d6-xw7j2" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.818877 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.843598 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-85d79b69d6-xw7j2\" (UID: \"89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8\") " pod="openshift-authentication/oauth-openshift-85d79b69d6-xw7j2" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.843667 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8-v4-0-config-system-session\") pod \"oauth-openshift-85d79b69d6-xw7j2\" (UID: \"89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8\") " 
pod="openshift-authentication/oauth-openshift-85d79b69d6-xw7j2" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.843705 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8-v4-0-config-user-template-error\") pod \"oauth-openshift-85d79b69d6-xw7j2\" (UID: \"89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8\") " pod="openshift-authentication/oauth-openshift-85d79b69d6-xw7j2" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.843741 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8-audit-policies\") pod \"oauth-openshift-85d79b69d6-xw7j2\" (UID: \"89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8\") " pod="openshift-authentication/oauth-openshift-85d79b69d6-xw7j2" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.843773 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8-v4-0-config-user-template-login\") pod \"oauth-openshift-85d79b69d6-xw7j2\" (UID: \"89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8\") " pod="openshift-authentication/oauth-openshift-85d79b69d6-xw7j2" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.843799 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5v9z\" (UniqueName: \"kubernetes.io/projected/89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8-kube-api-access-m5v9z\") pod \"oauth-openshift-85d79b69d6-xw7j2\" (UID: \"89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8\") " pod="openshift-authentication/oauth-openshift-85d79b69d6-xw7j2" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.843834 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-85d79b69d6-xw7j2\" (UID: \"89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8\") " pod="openshift-authentication/oauth-openshift-85d79b69d6-xw7j2" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.843858 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8-v4-0-config-system-router-certs\") pod \"oauth-openshift-85d79b69d6-xw7j2\" (UID: \"89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8\") " pod="openshift-authentication/oauth-openshift-85d79b69d6-xw7j2" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.843893 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-85d79b69d6-xw7j2\" (UID: \"89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8\") " pod="openshift-authentication/oauth-openshift-85d79b69d6-xw7j2" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.843924 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8-audit-dir\") pod \"oauth-openshift-85d79b69d6-xw7j2\" (UID: \"89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8\") " pod="openshift-authentication/oauth-openshift-85d79b69d6-xw7j2" Jan 31 
07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.843952 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-85d79b69d6-xw7j2\" (UID: \"89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8\") " pod="openshift-authentication/oauth-openshift-85d79b69d6-xw7j2" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.844008 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8-v4-0-config-system-service-ca\") pod \"oauth-openshift-85d79b69d6-xw7j2\" (UID: \"89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8\") " pod="openshift-authentication/oauth-openshift-85d79b69d6-xw7j2" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.844035 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-85d79b69d6-xw7j2\" (UID: \"89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8\") " pod="openshift-authentication/oauth-openshift-85d79b69d6-xw7j2" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.844059 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-85d79b69d6-xw7j2\" (UID: \"89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8\") " pod="openshift-authentication/oauth-openshift-85d79b69d6-xw7j2" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.844755 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-85d79b69d6-xw7j2\" (UID: \"89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8\") " pod="openshift-authentication/oauth-openshift-85d79b69d6-xw7j2" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.844808 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8-audit-policies\") pod \"oauth-openshift-85d79b69d6-xw7j2\" (UID: \"89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8\") " pod="openshift-authentication/oauth-openshift-85d79b69d6-xw7j2" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.845713 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8-audit-dir\") pod \"oauth-openshift-85d79b69d6-xw7j2\" (UID: \"89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8\") " pod="openshift-authentication/oauth-openshift-85d79b69d6-xw7j2" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.846325 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8-v4-0-config-system-service-ca\") pod \"oauth-openshift-85d79b69d6-xw7j2\" (UID: \"89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8\") " pod="openshift-authentication/oauth-openshift-85d79b69d6-xw7j2" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.846857 4826 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-85d79b69d6-xw7j2\" (UID: \"89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8\") " pod="openshift-authentication/oauth-openshift-85d79b69d6-xw7j2" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.849839 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-85d79b69d6-xw7j2\" (UID: \"89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8\") " pod="openshift-authentication/oauth-openshift-85d79b69d6-xw7j2" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.850755 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-85d79b69d6-xw7j2\" (UID: \"89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8\") " pod="openshift-authentication/oauth-openshift-85d79b69d6-xw7j2" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.850807 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-85d79b69d6-xw7j2\" (UID: \"89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8\") " pod="openshift-authentication/oauth-openshift-85d79b69d6-xw7j2" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.851104 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8-v4-0-config-system-router-certs\") pod \"oauth-openshift-85d79b69d6-xw7j2\" (UID: \"89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8\") " pod="openshift-authentication/oauth-openshift-85d79b69d6-xw7j2" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.852427 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8-v4-0-config-user-template-error\") pod \"oauth-openshift-85d79b69d6-xw7j2\" (UID: \"89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8\") " pod="openshift-authentication/oauth-openshift-85d79b69d6-xw7j2" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.854786 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-85d79b69d6-xw7j2\" (UID: \"89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8\") " pod="openshift-authentication/oauth-openshift-85d79b69d6-xw7j2" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.855151 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8-v4-0-config-user-template-login\") pod \"oauth-openshift-85d79b69d6-xw7j2\" (UID: \"89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8\") " pod="openshift-authentication/oauth-openshift-85d79b69d6-xw7j2" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.857928 4826 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8-v4-0-config-system-session\") pod \"oauth-openshift-85d79b69d6-xw7j2\" (UID: \"89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8\") " pod="openshift-authentication/oauth-openshift-85d79b69d6-xw7j2" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.860334 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5v9z\" (UniqueName: \"kubernetes.io/projected/89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8-kube-api-access-m5v9z\") pod \"oauth-openshift-85d79b69d6-xw7j2\" (UID: \"89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8\") " pod="openshift-authentication/oauth-openshift-85d79b69d6-xw7j2" Jan 31 07:41:17 crc kubenswrapper[4826]: I0131 07:41:17.971614 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-85d79b69d6-xw7j2" Jan 31 07:41:18 crc kubenswrapper[4826]: I0131 07:41:18.409763 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-85d79b69d6-xw7j2"] Jan 31 07:41:18 crc kubenswrapper[4826]: I0131 07:41:18.845516 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 31 07:41:19 crc kubenswrapper[4826]: I0131 07:41:19.298016 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-85d79b69d6-xw7j2" event={"ID":"89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8","Type":"ContainerStarted","Data":"6174233a50110482e30749c3bef450de19b361585627cf6bafe3702455f3dce9"} Jan 31 07:41:19 crc kubenswrapper[4826]: I0131 07:41:19.298335 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-85d79b69d6-xw7j2" event={"ID":"89f9974c-fd6f-4f0d-a2c3-c5a89c30b8a8","Type":"ContainerStarted","Data":"43234b1f18657a76ff91532b3f56a468b60d08457d6a89c5331cd474e78ca9b0"} Jan 31 07:41:19 crc kubenswrapper[4826]: I0131 07:41:19.298350 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-85d79b69d6-xw7j2" Jan 31 07:41:19 crc kubenswrapper[4826]: I0131 07:41:19.306188 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-85d79b69d6-xw7j2" Jan 31 07:41:19 crc kubenswrapper[4826]: I0131 07:41:19.321303 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-85d79b69d6-xw7j2" podStartSLOduration=90.321276554 podStartE2EDuration="1m30.321276554s" podCreationTimestamp="2026-01-31 07:39:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:41:19.318209614 +0000 UTC m=+311.172095973" watchObservedRunningTime="2026-01-31 07:41:19.321276554 +0000 UTC m=+311.175162933" Jan 31 07:41:25 crc kubenswrapper[4826]: I0131 07:41:25.107719 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 31 07:41:25 crc kubenswrapper[4826]: I0131 07:41:25.788331 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 31 07:41:27 crc kubenswrapper[4826]: I0131 07:41:27.274567 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 31 
07:42:04 crc kubenswrapper[4826]: I0131 07:42:04.008037 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xvg7w"] Jan 31 07:42:04 crc kubenswrapper[4826]: I0131 07:42:04.009308 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-xvg7w" Jan 31 07:42:04 crc kubenswrapper[4826]: I0131 07:42:04.022738 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xvg7w"] Jan 31 07:42:04 crc kubenswrapper[4826]: I0131 07:42:04.159042 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/90f13f20-37a0-4255-ae4e-40d346f004c7-ca-trust-extracted\") pod \"image-registry-66df7c8f76-xvg7w\" (UID: \"90f13f20-37a0-4255-ae4e-40d346f004c7\") " pod="openshift-image-registry/image-registry-66df7c8f76-xvg7w" Jan 31 07:42:04 crc kubenswrapper[4826]: I0131 07:42:04.159101 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/90f13f20-37a0-4255-ae4e-40d346f004c7-bound-sa-token\") pod \"image-registry-66df7c8f76-xvg7w\" (UID: \"90f13f20-37a0-4255-ae4e-40d346f004c7\") " pod="openshift-image-registry/image-registry-66df7c8f76-xvg7w" Jan 31 07:42:04 crc kubenswrapper[4826]: I0131 07:42:04.159142 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8g7t\" (UniqueName: \"kubernetes.io/projected/90f13f20-37a0-4255-ae4e-40d346f004c7-kube-api-access-n8g7t\") pod \"image-registry-66df7c8f76-xvg7w\" (UID: \"90f13f20-37a0-4255-ae4e-40d346f004c7\") " pod="openshift-image-registry/image-registry-66df7c8f76-xvg7w" Jan 31 07:42:04 crc kubenswrapper[4826]: I0131 07:42:04.159171 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/90f13f20-37a0-4255-ae4e-40d346f004c7-registry-certificates\") pod \"image-registry-66df7c8f76-xvg7w\" (UID: \"90f13f20-37a0-4255-ae4e-40d346f004c7\") " pod="openshift-image-registry/image-registry-66df7c8f76-xvg7w" Jan 31 07:42:04 crc kubenswrapper[4826]: I0131 07:42:04.159211 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-xvg7w\" (UID: \"90f13f20-37a0-4255-ae4e-40d346f004c7\") " pod="openshift-image-registry/image-registry-66df7c8f76-xvg7w" Jan 31 07:42:04 crc kubenswrapper[4826]: I0131 07:42:04.159254 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/90f13f20-37a0-4255-ae4e-40d346f004c7-trusted-ca\") pod \"image-registry-66df7c8f76-xvg7w\" (UID: \"90f13f20-37a0-4255-ae4e-40d346f004c7\") " pod="openshift-image-registry/image-registry-66df7c8f76-xvg7w" Jan 31 07:42:04 crc kubenswrapper[4826]: I0131 07:42:04.159348 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/90f13f20-37a0-4255-ae4e-40d346f004c7-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xvg7w\" (UID: 
\"90f13f20-37a0-4255-ae4e-40d346f004c7\") " pod="openshift-image-registry/image-registry-66df7c8f76-xvg7w" Jan 31 07:42:04 crc kubenswrapper[4826]: I0131 07:42:04.159484 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/90f13f20-37a0-4255-ae4e-40d346f004c7-registry-tls\") pod \"image-registry-66df7c8f76-xvg7w\" (UID: \"90f13f20-37a0-4255-ae4e-40d346f004c7\") " pod="openshift-image-registry/image-registry-66df7c8f76-xvg7w" Jan 31 07:42:04 crc kubenswrapper[4826]: I0131 07:42:04.184349 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-xvg7w\" (UID: \"90f13f20-37a0-4255-ae4e-40d346f004c7\") " pod="openshift-image-registry/image-registry-66df7c8f76-xvg7w" Jan 31 07:42:04 crc kubenswrapper[4826]: I0131 07:42:04.260615 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/90f13f20-37a0-4255-ae4e-40d346f004c7-registry-tls\") pod \"image-registry-66df7c8f76-xvg7w\" (UID: \"90f13f20-37a0-4255-ae4e-40d346f004c7\") " pod="openshift-image-registry/image-registry-66df7c8f76-xvg7w" Jan 31 07:42:04 crc kubenswrapper[4826]: I0131 07:42:04.260701 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/90f13f20-37a0-4255-ae4e-40d346f004c7-ca-trust-extracted\") pod \"image-registry-66df7c8f76-xvg7w\" (UID: \"90f13f20-37a0-4255-ae4e-40d346f004c7\") " pod="openshift-image-registry/image-registry-66df7c8f76-xvg7w" Jan 31 07:42:04 crc kubenswrapper[4826]: I0131 07:42:04.260726 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/90f13f20-37a0-4255-ae4e-40d346f004c7-bound-sa-token\") pod \"image-registry-66df7c8f76-xvg7w\" (UID: \"90f13f20-37a0-4255-ae4e-40d346f004c7\") " pod="openshift-image-registry/image-registry-66df7c8f76-xvg7w" Jan 31 07:42:04 crc kubenswrapper[4826]: I0131 07:42:04.260751 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8g7t\" (UniqueName: \"kubernetes.io/projected/90f13f20-37a0-4255-ae4e-40d346f004c7-kube-api-access-n8g7t\") pod \"image-registry-66df7c8f76-xvg7w\" (UID: \"90f13f20-37a0-4255-ae4e-40d346f004c7\") " pod="openshift-image-registry/image-registry-66df7c8f76-xvg7w" Jan 31 07:42:04 crc kubenswrapper[4826]: I0131 07:42:04.260767 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/90f13f20-37a0-4255-ae4e-40d346f004c7-registry-certificates\") pod \"image-registry-66df7c8f76-xvg7w\" (UID: \"90f13f20-37a0-4255-ae4e-40d346f004c7\") " pod="openshift-image-registry/image-registry-66df7c8f76-xvg7w" Jan 31 07:42:04 crc kubenswrapper[4826]: I0131 07:42:04.260796 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/90f13f20-37a0-4255-ae4e-40d346f004c7-trusted-ca\") pod \"image-registry-66df7c8f76-xvg7w\" (UID: \"90f13f20-37a0-4255-ae4e-40d346f004c7\") " pod="openshift-image-registry/image-registry-66df7c8f76-xvg7w" Jan 31 07:42:04 crc kubenswrapper[4826]: I0131 07:42:04.260839 4826 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/90f13f20-37a0-4255-ae4e-40d346f004c7-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xvg7w\" (UID: \"90f13f20-37a0-4255-ae4e-40d346f004c7\") " pod="openshift-image-registry/image-registry-66df7c8f76-xvg7w" Jan 31 07:42:04 crc kubenswrapper[4826]: I0131 07:42:04.261397 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/90f13f20-37a0-4255-ae4e-40d346f004c7-ca-trust-extracted\") pod \"image-registry-66df7c8f76-xvg7w\" (UID: \"90f13f20-37a0-4255-ae4e-40d346f004c7\") " pod="openshift-image-registry/image-registry-66df7c8f76-xvg7w" Jan 31 07:42:04 crc kubenswrapper[4826]: I0131 07:42:04.262073 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/90f13f20-37a0-4255-ae4e-40d346f004c7-trusted-ca\") pod \"image-registry-66df7c8f76-xvg7w\" (UID: \"90f13f20-37a0-4255-ae4e-40d346f004c7\") " pod="openshift-image-registry/image-registry-66df7c8f76-xvg7w" Jan 31 07:42:04 crc kubenswrapper[4826]: I0131 07:42:04.263605 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/90f13f20-37a0-4255-ae4e-40d346f004c7-registry-certificates\") pod \"image-registry-66df7c8f76-xvg7w\" (UID: \"90f13f20-37a0-4255-ae4e-40d346f004c7\") " pod="openshift-image-registry/image-registry-66df7c8f76-xvg7w" Jan 31 07:42:04 crc kubenswrapper[4826]: I0131 07:42:04.266600 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/90f13f20-37a0-4255-ae4e-40d346f004c7-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xvg7w\" (UID: \"90f13f20-37a0-4255-ae4e-40d346f004c7\") " pod="openshift-image-registry/image-registry-66df7c8f76-xvg7w" Jan 31 07:42:04 crc kubenswrapper[4826]: I0131 07:42:04.267902 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/90f13f20-37a0-4255-ae4e-40d346f004c7-registry-tls\") pod \"image-registry-66df7c8f76-xvg7w\" (UID: \"90f13f20-37a0-4255-ae4e-40d346f004c7\") " pod="openshift-image-registry/image-registry-66df7c8f76-xvg7w" Jan 31 07:42:04 crc kubenswrapper[4826]: I0131 07:42:04.274940 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/90f13f20-37a0-4255-ae4e-40d346f004c7-bound-sa-token\") pod \"image-registry-66df7c8f76-xvg7w\" (UID: \"90f13f20-37a0-4255-ae4e-40d346f004c7\") " pod="openshift-image-registry/image-registry-66df7c8f76-xvg7w" Jan 31 07:42:04 crc kubenswrapper[4826]: I0131 07:42:04.279775 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8g7t\" (UniqueName: \"kubernetes.io/projected/90f13f20-37a0-4255-ae4e-40d346f004c7-kube-api-access-n8g7t\") pod \"image-registry-66df7c8f76-xvg7w\" (UID: \"90f13f20-37a0-4255-ae4e-40d346f004c7\") " pod="openshift-image-registry/image-registry-66df7c8f76-xvg7w" Jan 31 07:42:04 crc kubenswrapper[4826]: I0131 07:42:04.332962 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-xvg7w" Jan 31 07:42:04 crc kubenswrapper[4826]: I0131 07:42:04.777342 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xvg7w"] Jan 31 07:42:05 crc kubenswrapper[4826]: I0131 07:42:05.566130 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-xvg7w" event={"ID":"90f13f20-37a0-4255-ae4e-40d346f004c7","Type":"ContainerStarted","Data":"a6350402908cd2a5fc5ca348e571db7a626449fdd08f5bb7862c69cad4920f67"} Jan 31 07:42:05 crc kubenswrapper[4826]: I0131 07:42:05.566510 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-xvg7w" event={"ID":"90f13f20-37a0-4255-ae4e-40d346f004c7","Type":"ContainerStarted","Data":"3496a30c22fc530b89186824b33be33d60f18fdd5ac7f12076d044a9cc46a4d4"} Jan 31 07:42:05 crc kubenswrapper[4826]: I0131 07:42:05.566536 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-xvg7w" Jan 31 07:42:05 crc kubenswrapper[4826]: I0131 07:42:05.587164 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-xvg7w" podStartSLOduration=2.587139325 podStartE2EDuration="2.587139325s" podCreationTimestamp="2026-01-31 07:42:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:42:05.586897108 +0000 UTC m=+357.440783487" watchObservedRunningTime="2026-01-31 07:42:05.587139325 +0000 UTC m=+357.441025694" Jan 31 07:42:24 crc kubenswrapper[4826]: I0131 07:42:24.344046 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-xvg7w" Jan 31 07:42:24 crc kubenswrapper[4826]: I0131 07:42:24.430620 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kmkrm"] Jan 31 07:42:27 crc kubenswrapper[4826]: I0131 07:42:27.377499 4826 patch_prober.go:28] interesting pod/machine-config-daemon-8v6ng container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 07:42:27 crc kubenswrapper[4826]: I0131 07:42:27.377867 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 07:42:49 crc kubenswrapper[4826]: I0131 07:42:49.480194 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" podUID="d37f6dbf-56d4-46b2-8808-31999002461b" containerName="registry" containerID="cri-o://2f5a630b690fa1ab1ade9ed27723f6cbba0c7f9d63041f8a78aecfae7f63e8c4" gracePeriod=30 Jan 31 07:42:49 crc kubenswrapper[4826]: I0131 07:42:49.822172 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:42:49 crc kubenswrapper[4826]: I0131 07:42:49.866719 4826 generic.go:334] "Generic (PLEG): container finished" podID="d37f6dbf-56d4-46b2-8808-31999002461b" containerID="2f5a630b690fa1ab1ade9ed27723f6cbba0c7f9d63041f8a78aecfae7f63e8c4" exitCode=0 Jan 31 07:42:49 crc kubenswrapper[4826]: I0131 07:42:49.866812 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" event={"ID":"d37f6dbf-56d4-46b2-8808-31999002461b","Type":"ContainerDied","Data":"2f5a630b690fa1ab1ade9ed27723f6cbba0c7f9d63041f8a78aecfae7f63e8c4"} Jan 31 07:42:49 crc kubenswrapper[4826]: I0131 07:42:49.866851 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" event={"ID":"d37f6dbf-56d4-46b2-8808-31999002461b","Type":"ContainerDied","Data":"6aca2b5f0d3504ba25389c3bdca2688e0a258c88de2a7ed95f6cf830af0c1c08"} Jan 31 07:42:49 crc kubenswrapper[4826]: I0131 07:42:49.866874 4826 scope.go:117] "RemoveContainer" containerID="2f5a630b690fa1ab1ade9ed27723f6cbba0c7f9d63041f8a78aecfae7f63e8c4" Jan 31 07:42:49 crc kubenswrapper[4826]: I0131 07:42:49.867064 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-kmkrm" Jan 31 07:42:49 crc kubenswrapper[4826]: I0131 07:42:49.882093 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d37f6dbf-56d4-46b2-8808-31999002461b-trusted-ca\") pod \"d37f6dbf-56d4-46b2-8808-31999002461b\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " Jan 31 07:42:49 crc kubenswrapper[4826]: I0131 07:42:49.882198 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d37f6dbf-56d4-46b2-8808-31999002461b-installation-pull-secrets\") pod \"d37f6dbf-56d4-46b2-8808-31999002461b\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " Jan 31 07:42:49 crc kubenswrapper[4826]: I0131 07:42:49.882530 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"d37f6dbf-56d4-46b2-8808-31999002461b\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " Jan 31 07:42:49 crc kubenswrapper[4826]: I0131 07:42:49.882610 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d37f6dbf-56d4-46b2-8808-31999002461b-ca-trust-extracted\") pod \"d37f6dbf-56d4-46b2-8808-31999002461b\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " Jan 31 07:42:49 crc kubenswrapper[4826]: I0131 07:42:49.882643 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d37f6dbf-56d4-46b2-8808-31999002461b-registry-certificates\") pod \"d37f6dbf-56d4-46b2-8808-31999002461b\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " Jan 31 07:42:49 crc kubenswrapper[4826]: I0131 07:42:49.882724 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d37f6dbf-56d4-46b2-8808-31999002461b-registry-tls\") pod \"d37f6dbf-56d4-46b2-8808-31999002461b\" (UID: 
\"d37f6dbf-56d4-46b2-8808-31999002461b\") " Jan 31 07:42:49 crc kubenswrapper[4826]: I0131 07:42:49.882769 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d37f6dbf-56d4-46b2-8808-31999002461b-bound-sa-token\") pod \"d37f6dbf-56d4-46b2-8808-31999002461b\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " Jan 31 07:42:49 crc kubenswrapper[4826]: I0131 07:42:49.882806 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwlsr\" (UniqueName: \"kubernetes.io/projected/d37f6dbf-56d4-46b2-8808-31999002461b-kube-api-access-fwlsr\") pod \"d37f6dbf-56d4-46b2-8808-31999002461b\" (UID: \"d37f6dbf-56d4-46b2-8808-31999002461b\") " Jan 31 07:42:49 crc kubenswrapper[4826]: I0131 07:42:49.883648 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d37f6dbf-56d4-46b2-8808-31999002461b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "d37f6dbf-56d4-46b2-8808-31999002461b" (UID: "d37f6dbf-56d4-46b2-8808-31999002461b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:42:49 crc kubenswrapper[4826]: I0131 07:42:49.883828 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d37f6dbf-56d4-46b2-8808-31999002461b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "d37f6dbf-56d4-46b2-8808-31999002461b" (UID: "d37f6dbf-56d4-46b2-8808-31999002461b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:42:49 crc kubenswrapper[4826]: I0131 07:42:49.893012 4826 scope.go:117] "RemoveContainer" containerID="2f5a630b690fa1ab1ade9ed27723f6cbba0c7f9d63041f8a78aecfae7f63e8c4" Jan 31 07:42:49 crc kubenswrapper[4826]: E0131 07:42:49.893831 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f5a630b690fa1ab1ade9ed27723f6cbba0c7f9d63041f8a78aecfae7f63e8c4\": container with ID starting with 2f5a630b690fa1ab1ade9ed27723f6cbba0c7f9d63041f8a78aecfae7f63e8c4 not found: ID does not exist" containerID="2f5a630b690fa1ab1ade9ed27723f6cbba0c7f9d63041f8a78aecfae7f63e8c4" Jan 31 07:42:49 crc kubenswrapper[4826]: I0131 07:42:49.893889 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f5a630b690fa1ab1ade9ed27723f6cbba0c7f9d63041f8a78aecfae7f63e8c4"} err="failed to get container status \"2f5a630b690fa1ab1ade9ed27723f6cbba0c7f9d63041f8a78aecfae7f63e8c4\": rpc error: code = NotFound desc = could not find container \"2f5a630b690fa1ab1ade9ed27723f6cbba0c7f9d63041f8a78aecfae7f63e8c4\": container with ID starting with 2f5a630b690fa1ab1ade9ed27723f6cbba0c7f9d63041f8a78aecfae7f63e8c4 not found: ID does not exist" Jan 31 07:42:49 crc kubenswrapper[4826]: I0131 07:42:49.904126 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d37f6dbf-56d4-46b2-8808-31999002461b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "d37f6dbf-56d4-46b2-8808-31999002461b" (UID: "d37f6dbf-56d4-46b2-8808-31999002461b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:42:49 crc kubenswrapper[4826]: I0131 07:42:49.905207 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "d37f6dbf-56d4-46b2-8808-31999002461b" (UID: "d37f6dbf-56d4-46b2-8808-31999002461b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 31 07:42:49 crc kubenswrapper[4826]: I0131 07:42:49.905301 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d37f6dbf-56d4-46b2-8808-31999002461b-kube-api-access-fwlsr" (OuterVolumeSpecName: "kube-api-access-fwlsr") pod "d37f6dbf-56d4-46b2-8808-31999002461b" (UID: "d37f6dbf-56d4-46b2-8808-31999002461b"). InnerVolumeSpecName "kube-api-access-fwlsr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:42:49 crc kubenswrapper[4826]: I0131 07:42:49.905702 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d37f6dbf-56d4-46b2-8808-31999002461b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "d37f6dbf-56d4-46b2-8808-31999002461b" (UID: "d37f6dbf-56d4-46b2-8808-31999002461b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:42:49 crc kubenswrapper[4826]: I0131 07:42:49.906279 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d37f6dbf-56d4-46b2-8808-31999002461b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "d37f6dbf-56d4-46b2-8808-31999002461b" (UID: "d37f6dbf-56d4-46b2-8808-31999002461b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:42:49 crc kubenswrapper[4826]: I0131 07:42:49.908439 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d37f6dbf-56d4-46b2-8808-31999002461b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "d37f6dbf-56d4-46b2-8808-31999002461b" (UID: "d37f6dbf-56d4-46b2-8808-31999002461b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:42:49 crc kubenswrapper[4826]: I0131 07:42:49.984875 4826 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d37f6dbf-56d4-46b2-8808-31999002461b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 07:42:49 crc kubenswrapper[4826]: I0131 07:42:49.984936 4826 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d37f6dbf-56d4-46b2-8808-31999002461b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 31 07:42:49 crc kubenswrapper[4826]: I0131 07:42:49.984960 4826 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d37f6dbf-56d4-46b2-8808-31999002461b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 31 07:42:49 crc kubenswrapper[4826]: I0131 07:42:49.985002 4826 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d37f6dbf-56d4-46b2-8808-31999002461b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 31 07:42:49 crc kubenswrapper[4826]: I0131 07:42:49.985020 4826 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d37f6dbf-56d4-46b2-8808-31999002461b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 31 07:42:49 crc kubenswrapper[4826]: I0131 07:42:49.985037 4826 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d37f6dbf-56d4-46b2-8808-31999002461b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 07:42:49 crc kubenswrapper[4826]: I0131 07:42:49.985056 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwlsr\" (UniqueName: \"kubernetes.io/projected/d37f6dbf-56d4-46b2-8808-31999002461b-kube-api-access-fwlsr\") on node \"crc\" DevicePath \"\"" Jan 31 07:42:50 crc kubenswrapper[4826]: I0131 07:42:50.220387 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kmkrm"] Jan 31 07:42:50 crc kubenswrapper[4826]: I0131 07:42:50.225246 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kmkrm"] Jan 31 07:42:50 crc kubenswrapper[4826]: I0131 07:42:50.817172 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d37f6dbf-56d4-46b2-8808-31999002461b" path="/var/lib/kubelet/pods/d37f6dbf-56d4-46b2-8808-31999002461b/volumes" Jan 31 07:42:57 crc kubenswrapper[4826]: I0131 07:42:57.376895 4826 patch_prober.go:28] interesting pod/machine-config-daemon-8v6ng container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 07:42:57 crc kubenswrapper[4826]: I0131 07:42:57.377289 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 07:43:27 crc kubenswrapper[4826]: I0131 07:43:27.377490 4826 patch_prober.go:28] interesting pod/machine-config-daemon-8v6ng container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 07:43:27 crc kubenswrapper[4826]: I0131 07:43:27.378350 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 07:43:27 crc kubenswrapper[4826]: I0131 07:43:27.378428 4826 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" Jan 31 07:43:27 crc kubenswrapper[4826]: I0131 07:43:27.379443 4826 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f565241d95f4e9819f53c9108aab47ac5428c731551db742e200ccf23cd4ae76"} pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 07:43:27 crc kubenswrapper[4826]: I0131 07:43:27.379603 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" containerID="cri-o://f565241d95f4e9819f53c9108aab47ac5428c731551db742e200ccf23cd4ae76" gracePeriod=600 Jan 31 07:43:28 crc kubenswrapper[4826]: I0131 07:43:28.109393 4826 generic.go:334] "Generic (PLEG): container finished" podID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerID="f565241d95f4e9819f53c9108aab47ac5428c731551db742e200ccf23cd4ae76" exitCode=0 Jan 31 07:43:28 crc kubenswrapper[4826]: I0131 07:43:28.109472 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" event={"ID":"ed10f53b-565a-4d14-a1d8-feabc15f08ea","Type":"ContainerDied","Data":"f565241d95f4e9819f53c9108aab47ac5428c731551db742e200ccf23cd4ae76"} Jan 31 07:43:28 crc kubenswrapper[4826]: I0131 07:43:28.109865 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" event={"ID":"ed10f53b-565a-4d14-a1d8-feabc15f08ea","Type":"ContainerStarted","Data":"96981d99c8cc310fa2d207d4a923209b26ac15505f9c721483f2cdae8f82dca1"} Jan 31 07:43:28 crc kubenswrapper[4826]: I0131 07:43:28.109895 4826 scope.go:117] "RemoveContainer" containerID="d03515f366b860c28601b042a9a45c022c9a25aa15fab97b3e3698d3d7c1eb37" Jan 31 07:45:00 crc kubenswrapper[4826]: I0131 07:45:00.241136 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497425-x9m8b"] Jan 31 07:45:00 crc kubenswrapper[4826]: E0131 07:45:00.242127 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d37f6dbf-56d4-46b2-8808-31999002461b" containerName="registry" Jan 31 07:45:00 crc kubenswrapper[4826]: I0131 07:45:00.242148 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="d37f6dbf-56d4-46b2-8808-31999002461b" containerName="registry" Jan 31 07:45:00 crc kubenswrapper[4826]: I0131 07:45:00.242334 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="d37f6dbf-56d4-46b2-8808-31999002461b" containerName="registry" Jan 31 07:45:00 crc kubenswrapper[4826]: I0131 07:45:00.242908 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497425-x9m8b" Jan 31 07:45:00 crc kubenswrapper[4826]: I0131 07:45:00.246430 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 07:45:00 crc kubenswrapper[4826]: I0131 07:45:00.247066 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 07:45:00 crc kubenswrapper[4826]: I0131 07:45:00.252552 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497425-x9m8b"] Jan 31 07:45:00 crc kubenswrapper[4826]: I0131 07:45:00.335368 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/410c48f0-0679-4a1f-8fec-9afb56cf0d60-config-volume\") pod \"collect-profiles-29497425-x9m8b\" (UID: \"410c48f0-0679-4a1f-8fec-9afb56cf0d60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497425-x9m8b" Jan 31 07:45:00 crc kubenswrapper[4826]: I0131 07:45:00.335541 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/410c48f0-0679-4a1f-8fec-9afb56cf0d60-secret-volume\") pod \"collect-profiles-29497425-x9m8b\" (UID: \"410c48f0-0679-4a1f-8fec-9afb56cf0d60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497425-x9m8b" Jan 31 07:45:00 crc kubenswrapper[4826]: I0131 07:45:00.335611 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdhzj\" (UniqueName: \"kubernetes.io/projected/410c48f0-0679-4a1f-8fec-9afb56cf0d60-kube-api-access-tdhzj\") pod \"collect-profiles-29497425-x9m8b\" (UID: \"410c48f0-0679-4a1f-8fec-9afb56cf0d60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497425-x9m8b" Jan 31 07:45:00 crc kubenswrapper[4826]: I0131 07:45:00.437217 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/410c48f0-0679-4a1f-8fec-9afb56cf0d60-config-volume\") pod \"collect-profiles-29497425-x9m8b\" (UID: \"410c48f0-0679-4a1f-8fec-9afb56cf0d60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497425-x9m8b" Jan 31 07:45:00 crc kubenswrapper[4826]: I0131 07:45:00.437392 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/410c48f0-0679-4a1f-8fec-9afb56cf0d60-secret-volume\") pod \"collect-profiles-29497425-x9m8b\" (UID: \"410c48f0-0679-4a1f-8fec-9afb56cf0d60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497425-x9m8b" Jan 31 07:45:00 crc kubenswrapper[4826]: I0131 07:45:00.437463 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdhzj\" (UniqueName: \"kubernetes.io/projected/410c48f0-0679-4a1f-8fec-9afb56cf0d60-kube-api-access-tdhzj\") pod \"collect-profiles-29497425-x9m8b\" (UID: \"410c48f0-0679-4a1f-8fec-9afb56cf0d60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497425-x9m8b" Jan 31 07:45:00 crc kubenswrapper[4826]: I0131 07:45:00.438829 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/410c48f0-0679-4a1f-8fec-9afb56cf0d60-config-volume\") pod 
\"collect-profiles-29497425-x9m8b\" (UID: \"410c48f0-0679-4a1f-8fec-9afb56cf0d60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497425-x9m8b" Jan 31 07:45:00 crc kubenswrapper[4826]: I0131 07:45:00.446676 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/410c48f0-0679-4a1f-8fec-9afb56cf0d60-secret-volume\") pod \"collect-profiles-29497425-x9m8b\" (UID: \"410c48f0-0679-4a1f-8fec-9afb56cf0d60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497425-x9m8b" Jan 31 07:45:00 crc kubenswrapper[4826]: I0131 07:45:00.467915 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdhzj\" (UniqueName: \"kubernetes.io/projected/410c48f0-0679-4a1f-8fec-9afb56cf0d60-kube-api-access-tdhzj\") pod \"collect-profiles-29497425-x9m8b\" (UID: \"410c48f0-0679-4a1f-8fec-9afb56cf0d60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497425-x9m8b" Jan 31 07:45:00 crc kubenswrapper[4826]: I0131 07:45:00.576425 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497425-x9m8b" Jan 31 07:45:00 crc kubenswrapper[4826]: I0131 07:45:00.818365 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497425-x9m8b"] Jan 31 07:45:01 crc kubenswrapper[4826]: I0131 07:45:01.714192 4826 generic.go:334] "Generic (PLEG): container finished" podID="410c48f0-0679-4a1f-8fec-9afb56cf0d60" containerID="40f6c6f0bb44f4ae05946548679c4fc96f047e642652cc1ee0932efca3c56891" exitCode=0 Jan 31 07:45:01 crc kubenswrapper[4826]: I0131 07:45:01.714301 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497425-x9m8b" event={"ID":"410c48f0-0679-4a1f-8fec-9afb56cf0d60","Type":"ContainerDied","Data":"40f6c6f0bb44f4ae05946548679c4fc96f047e642652cc1ee0932efca3c56891"} Jan 31 07:45:01 crc kubenswrapper[4826]: I0131 07:45:01.714574 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497425-x9m8b" event={"ID":"410c48f0-0679-4a1f-8fec-9afb56cf0d60","Type":"ContainerStarted","Data":"1a0f035e71edc45176b1af4c18be381fe33165e67b7b80075641f058b78f52c4"} Jan 31 07:45:02 crc kubenswrapper[4826]: I0131 07:45:02.973421 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497425-x9m8b" Jan 31 07:45:03 crc kubenswrapper[4826]: I0131 07:45:03.074514 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/410c48f0-0679-4a1f-8fec-9afb56cf0d60-config-volume\") pod \"410c48f0-0679-4a1f-8fec-9afb56cf0d60\" (UID: \"410c48f0-0679-4a1f-8fec-9afb56cf0d60\") " Jan 31 07:45:03 crc kubenswrapper[4826]: I0131 07:45:03.074599 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/410c48f0-0679-4a1f-8fec-9afb56cf0d60-secret-volume\") pod \"410c48f0-0679-4a1f-8fec-9afb56cf0d60\" (UID: \"410c48f0-0679-4a1f-8fec-9afb56cf0d60\") " Jan 31 07:45:03 crc kubenswrapper[4826]: I0131 07:45:03.074658 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdhzj\" (UniqueName: \"kubernetes.io/projected/410c48f0-0679-4a1f-8fec-9afb56cf0d60-kube-api-access-tdhzj\") pod \"410c48f0-0679-4a1f-8fec-9afb56cf0d60\" (UID: \"410c48f0-0679-4a1f-8fec-9afb56cf0d60\") " Jan 31 07:45:03 crc kubenswrapper[4826]: I0131 07:45:03.075963 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/410c48f0-0679-4a1f-8fec-9afb56cf0d60-config-volume" (OuterVolumeSpecName: "config-volume") pod "410c48f0-0679-4a1f-8fec-9afb56cf0d60" (UID: "410c48f0-0679-4a1f-8fec-9afb56cf0d60"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:45:03 crc kubenswrapper[4826]: I0131 07:45:03.080918 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/410c48f0-0679-4a1f-8fec-9afb56cf0d60-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "410c48f0-0679-4a1f-8fec-9afb56cf0d60" (UID: "410c48f0-0679-4a1f-8fec-9afb56cf0d60"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:45:03 crc kubenswrapper[4826]: I0131 07:45:03.081741 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/410c48f0-0679-4a1f-8fec-9afb56cf0d60-kube-api-access-tdhzj" (OuterVolumeSpecName: "kube-api-access-tdhzj") pod "410c48f0-0679-4a1f-8fec-9afb56cf0d60" (UID: "410c48f0-0679-4a1f-8fec-9afb56cf0d60"). InnerVolumeSpecName "kube-api-access-tdhzj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:45:03 crc kubenswrapper[4826]: I0131 07:45:03.176609 4826 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/410c48f0-0679-4a1f-8fec-9afb56cf0d60-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:03 crc kubenswrapper[4826]: I0131 07:45:03.177033 4826 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/410c48f0-0679-4a1f-8fec-9afb56cf0d60-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:03 crc kubenswrapper[4826]: I0131 07:45:03.177070 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdhzj\" (UniqueName: \"kubernetes.io/projected/410c48f0-0679-4a1f-8fec-9afb56cf0d60-kube-api-access-tdhzj\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:03 crc kubenswrapper[4826]: I0131 07:45:03.729073 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497425-x9m8b" event={"ID":"410c48f0-0679-4a1f-8fec-9afb56cf0d60","Type":"ContainerDied","Data":"1a0f035e71edc45176b1af4c18be381fe33165e67b7b80075641f058b78f52c4"} Jan 31 07:45:03 crc kubenswrapper[4826]: I0131 07:45:03.729130 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a0f035e71edc45176b1af4c18be381fe33165e67b7b80075641f058b78f52c4" Jan 31 07:45:03 crc kubenswrapper[4826]: I0131 07:45:03.729146 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497425-x9m8b" Jan 31 07:45:27 crc kubenswrapper[4826]: I0131 07:45:27.377392 4826 patch_prober.go:28] interesting pod/machine-config-daemon-8v6ng container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 07:45:27 crc kubenswrapper[4826]: I0131 07:45:27.379262 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 07:45:38 crc kubenswrapper[4826]: I0131 07:45:38.508961 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-h6q4m"] Jan 31 07:45:38 crc kubenswrapper[4826]: E0131 07:45:38.510799 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="410c48f0-0679-4a1f-8fec-9afb56cf0d60" containerName="collect-profiles" Jan 31 07:45:38 crc kubenswrapper[4826]: I0131 07:45:38.517143 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="410c48f0-0679-4a1f-8fec-9afb56cf0d60" containerName="collect-profiles" Jan 31 07:45:38 crc kubenswrapper[4826]: I0131 07:45:38.517365 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="410c48f0-0679-4a1f-8fec-9afb56cf0d60" containerName="collect-profiles" Jan 31 07:45:38 crc kubenswrapper[4826]: I0131 07:45:38.517826 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-h6q4m" Jan 31 07:45:38 crc kubenswrapper[4826]: I0131 07:45:38.523101 4826 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-k2l8w" Jan 31 07:45:38 crc kubenswrapper[4826]: I0131 07:45:38.523820 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 31 07:45:38 crc kubenswrapper[4826]: I0131 07:45:38.523954 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 31 07:45:38 crc kubenswrapper[4826]: I0131 07:45:38.524743 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-h6q4m"] Jan 31 07:45:38 crc kubenswrapper[4826]: I0131 07:45:38.560846 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-jk7fq"] Jan 31 07:45:38 crc kubenswrapper[4826]: I0131 07:45:38.561711 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-m7jsx"] Jan 31 07:45:38 crc kubenswrapper[4826]: I0131 07:45:38.562202 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-m7jsx" Jan 31 07:45:38 crc kubenswrapper[4826]: I0131 07:45:38.562595 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-jk7fq" Jan 31 07:45:38 crc kubenswrapper[4826]: I0131 07:45:38.564853 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r57pc\" (UniqueName: \"kubernetes.io/projected/096ce775-64cb-4654-9853-b989068756fb-kube-api-access-r57pc\") pod \"cert-manager-cainjector-cf98fcc89-h6q4m\" (UID: \"096ce775-64cb-4654-9853-b989068756fb\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-h6q4m" Jan 31 07:45:38 crc kubenswrapper[4826]: I0131 07:45:38.564911 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jpx5\" (UniqueName: \"kubernetes.io/projected/32b18e30-bc25-4bf7-8297-7fb8af9262f1-kube-api-access-5jpx5\") pod \"cert-manager-858654f9db-jk7fq\" (UID: \"32b18e30-bc25-4bf7-8297-7fb8af9262f1\") " pod="cert-manager/cert-manager-858654f9db-jk7fq" Jan 31 07:45:38 crc kubenswrapper[4826]: I0131 07:45:38.564944 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xrlt\" (UniqueName: \"kubernetes.io/projected/bf4d0c6e-9e50-45df-bceb-12a9f8b1b908-kube-api-access-7xrlt\") pod \"cert-manager-webhook-687f57d79b-m7jsx\" (UID: \"bf4d0c6e-9e50-45df-bceb-12a9f8b1b908\") " pod="cert-manager/cert-manager-webhook-687f57d79b-m7jsx" Jan 31 07:45:38 crc kubenswrapper[4826]: I0131 07:45:38.565233 4826 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-nfxpd" Jan 31 07:45:38 crc kubenswrapper[4826]: I0131 07:45:38.565378 4826 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-cggq8" Jan 31 07:45:38 crc kubenswrapper[4826]: I0131 07:45:38.572882 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-jk7fq"] Jan 31 07:45:38 crc kubenswrapper[4826]: I0131 07:45:38.579489 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-m7jsx"] Jan 31 07:45:38 crc 
kubenswrapper[4826]: I0131 07:45:38.665806 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r57pc\" (UniqueName: \"kubernetes.io/projected/096ce775-64cb-4654-9853-b989068756fb-kube-api-access-r57pc\") pod \"cert-manager-cainjector-cf98fcc89-h6q4m\" (UID: \"096ce775-64cb-4654-9853-b989068756fb\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-h6q4m" Jan 31 07:45:38 crc kubenswrapper[4826]: I0131 07:45:38.665876 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jpx5\" (UniqueName: \"kubernetes.io/projected/32b18e30-bc25-4bf7-8297-7fb8af9262f1-kube-api-access-5jpx5\") pod \"cert-manager-858654f9db-jk7fq\" (UID: \"32b18e30-bc25-4bf7-8297-7fb8af9262f1\") " pod="cert-manager/cert-manager-858654f9db-jk7fq" Jan 31 07:45:38 crc kubenswrapper[4826]: I0131 07:45:38.665921 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xrlt\" (UniqueName: \"kubernetes.io/projected/bf4d0c6e-9e50-45df-bceb-12a9f8b1b908-kube-api-access-7xrlt\") pod \"cert-manager-webhook-687f57d79b-m7jsx\" (UID: \"bf4d0c6e-9e50-45df-bceb-12a9f8b1b908\") " pod="cert-manager/cert-manager-webhook-687f57d79b-m7jsx" Jan 31 07:45:38 crc kubenswrapper[4826]: I0131 07:45:38.693876 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xrlt\" (UniqueName: \"kubernetes.io/projected/bf4d0c6e-9e50-45df-bceb-12a9f8b1b908-kube-api-access-7xrlt\") pod \"cert-manager-webhook-687f57d79b-m7jsx\" (UID: \"bf4d0c6e-9e50-45df-bceb-12a9f8b1b908\") " pod="cert-manager/cert-manager-webhook-687f57d79b-m7jsx" Jan 31 07:45:38 crc kubenswrapper[4826]: I0131 07:45:38.693894 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r57pc\" (UniqueName: \"kubernetes.io/projected/096ce775-64cb-4654-9853-b989068756fb-kube-api-access-r57pc\") pod \"cert-manager-cainjector-cf98fcc89-h6q4m\" (UID: \"096ce775-64cb-4654-9853-b989068756fb\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-h6q4m" Jan 31 07:45:38 crc kubenswrapper[4826]: I0131 07:45:38.693946 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jpx5\" (UniqueName: \"kubernetes.io/projected/32b18e30-bc25-4bf7-8297-7fb8af9262f1-kube-api-access-5jpx5\") pod \"cert-manager-858654f9db-jk7fq\" (UID: \"32b18e30-bc25-4bf7-8297-7fb8af9262f1\") " pod="cert-manager/cert-manager-858654f9db-jk7fq" Jan 31 07:45:38 crc kubenswrapper[4826]: I0131 07:45:38.840854 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-h6q4m" Jan 31 07:45:38 crc kubenswrapper[4826]: I0131 07:45:38.882774 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-m7jsx" Jan 31 07:45:38 crc kubenswrapper[4826]: I0131 07:45:38.890069 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-jk7fq" Jan 31 07:45:39 crc kubenswrapper[4826]: I0131 07:45:39.089784 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-h6q4m"] Jan 31 07:45:39 crc kubenswrapper[4826]: W0131 07:45:39.099287 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod096ce775_64cb_4654_9853_b989068756fb.slice/crio-f84bfe99c7ef2ebc0aca162b336fc4b4f2d2d40f2314da95494309742944ada3 WatchSource:0}: Error finding container f84bfe99c7ef2ebc0aca162b336fc4b4f2d2d40f2314da95494309742944ada3: Status 404 returned error can't find the container with id f84bfe99c7ef2ebc0aca162b336fc4b4f2d2d40f2314da95494309742944ada3 Jan 31 07:45:39 crc kubenswrapper[4826]: I0131 07:45:39.103954 4826 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 07:45:39 crc kubenswrapper[4826]: I0131 07:45:39.128728 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-jk7fq"] Jan 31 07:45:39 crc kubenswrapper[4826]: W0131 07:45:39.148753 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32b18e30_bc25_4bf7_8297_7fb8af9262f1.slice/crio-3ebda69fd6e233380115777485fc07e2681bcfab957c64865ff0bba3f394ad6e WatchSource:0}: Error finding container 3ebda69fd6e233380115777485fc07e2681bcfab957c64865ff0bba3f394ad6e: Status 404 returned error can't find the container with id 3ebda69fd6e233380115777485fc07e2681bcfab957c64865ff0bba3f394ad6e Jan 31 07:45:39 crc kubenswrapper[4826]: I0131 07:45:39.155811 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-m7jsx"] Jan 31 07:45:39 crc kubenswrapper[4826]: W0131 07:45:39.169786 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf4d0c6e_9e50_45df_bceb_12a9f8b1b908.slice/crio-a4bb91008426bc90b856abd5e4df7f61e61d485ffba52caa74a7e498273f08e3 WatchSource:0}: Error finding container a4bb91008426bc90b856abd5e4df7f61e61d485ffba52caa74a7e498273f08e3: Status 404 returned error can't find the container with id a4bb91008426bc90b856abd5e4df7f61e61d485ffba52caa74a7e498273f08e3 Jan 31 07:45:39 crc kubenswrapper[4826]: I0131 07:45:39.967705 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-jk7fq" event={"ID":"32b18e30-bc25-4bf7-8297-7fb8af9262f1","Type":"ContainerStarted","Data":"3ebda69fd6e233380115777485fc07e2681bcfab957c64865ff0bba3f394ad6e"} Jan 31 07:45:39 crc kubenswrapper[4826]: I0131 07:45:39.968683 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-m7jsx" event={"ID":"bf4d0c6e-9e50-45df-bceb-12a9f8b1b908","Type":"ContainerStarted","Data":"a4bb91008426bc90b856abd5e4df7f61e61d485ffba52caa74a7e498273f08e3"} Jan 31 07:45:39 crc kubenswrapper[4826]: I0131 07:45:39.969551 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-h6q4m" event={"ID":"096ce775-64cb-4654-9853-b989068756fb","Type":"ContainerStarted","Data":"f84bfe99c7ef2ebc0aca162b336fc4b4f2d2d40f2314da95494309742944ada3"} Jan 31 07:45:42 crc kubenswrapper[4826]: I0131 07:45:42.986920 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-m7jsx" 
event={"ID":"bf4d0c6e-9e50-45df-bceb-12a9f8b1b908","Type":"ContainerStarted","Data":"c91072eff9f14a84061d50da67dc045c0dc9e3e3af1bc0ceab6ff1e57d41d5ed"} Jan 31 07:45:42 crc kubenswrapper[4826]: I0131 07:45:42.987214 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-m7jsx" Jan 31 07:45:43 crc kubenswrapper[4826]: I0131 07:45:43.003399 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-m7jsx" podStartSLOduration=1.773683098 podStartE2EDuration="5.003382229s" podCreationTimestamp="2026-01-31 07:45:38 +0000 UTC" firstStartedPulling="2026-01-31 07:45:39.17161727 +0000 UTC m=+571.025503629" lastFinishedPulling="2026-01-31 07:45:42.401316391 +0000 UTC m=+574.255202760" observedRunningTime="2026-01-31 07:45:43.003359019 +0000 UTC m=+574.857245388" watchObservedRunningTime="2026-01-31 07:45:43.003382229 +0000 UTC m=+574.857268588" Jan 31 07:45:45 crc kubenswrapper[4826]: I0131 07:45:45.001832 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-h6q4m" event={"ID":"096ce775-64cb-4654-9853-b989068756fb","Type":"ContainerStarted","Data":"8e63bfed0956ad362e22139cc58af70a09fd7de07f99bcc37c843c61e7201781"} Jan 31 07:45:45 crc kubenswrapper[4826]: I0131 07:45:45.003543 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-jk7fq" event={"ID":"32b18e30-bc25-4bf7-8297-7fb8af9262f1","Type":"ContainerStarted","Data":"322543b87c6f234c04c3d6a5ddccf1449343062fb41c64c485c8d8aa619d5c86"} Jan 31 07:45:45 crc kubenswrapper[4826]: I0131 07:45:45.022158 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-h6q4m" podStartSLOduration=2.058606318 podStartE2EDuration="7.022135515s" podCreationTimestamp="2026-01-31 07:45:38 +0000 UTC" firstStartedPulling="2026-01-31 07:45:39.103693335 +0000 UTC m=+570.957579694" lastFinishedPulling="2026-01-31 07:45:44.067222522 +0000 UTC m=+575.921108891" observedRunningTime="2026-01-31 07:45:45.017051313 +0000 UTC m=+576.870937682" watchObservedRunningTime="2026-01-31 07:45:45.022135515 +0000 UTC m=+576.876021894" Jan 31 07:45:45 crc kubenswrapper[4826]: I0131 07:45:45.043883 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-jk7fq" podStartSLOduration=2.122895571 podStartE2EDuration="7.043859161s" podCreationTimestamp="2026-01-31 07:45:38 +0000 UTC" firstStartedPulling="2026-01-31 07:45:39.153069682 +0000 UTC m=+571.006956031" lastFinishedPulling="2026-01-31 07:45:44.074033242 +0000 UTC m=+575.927919621" observedRunningTime="2026-01-31 07:45:45.040193819 +0000 UTC m=+576.894080228" watchObservedRunningTime="2026-01-31 07:45:45.043859161 +0000 UTC m=+576.897745530" Jan 31 07:45:47 crc kubenswrapper[4826]: I0131 07:45:47.934382 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qvwnb"] Jan 31 07:45:47 crc kubenswrapper[4826]: I0131 07:45:47.935223 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" podUID="5db04412-b62f-417c-91ac-776767d6102f" containerName="ovn-controller" containerID="cri-o://0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2" gracePeriod=30 Jan 31 07:45:47 crc kubenswrapper[4826]: I0131 07:45:47.935503 4826 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" podUID="5db04412-b62f-417c-91ac-776767d6102f" containerName="northd" containerID="cri-o://0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d" gracePeriod=30 Jan 31 07:45:47 crc kubenswrapper[4826]: I0131 07:45:47.935606 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" podUID="5db04412-b62f-417c-91ac-776767d6102f" containerName="ovn-acl-logging" containerID="cri-o://25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0" gracePeriod=30 Jan 31 07:45:47 crc kubenswrapper[4826]: I0131 07:45:47.935706 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" podUID="5db04412-b62f-417c-91ac-776767d6102f" containerName="kube-rbac-proxy-node" containerID="cri-o://bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b" gracePeriod=30 Jan 31 07:45:47 crc kubenswrapper[4826]: I0131 07:45:47.935864 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" podUID="5db04412-b62f-417c-91ac-776767d6102f" containerName="sbdb" containerID="cri-o://7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6" gracePeriod=30 Jan 31 07:45:47 crc kubenswrapper[4826]: I0131 07:45:47.935842 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" podUID="5db04412-b62f-417c-91ac-776767d6102f" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e" gracePeriod=30 Jan 31 07:45:47 crc kubenswrapper[4826]: I0131 07:45:47.936095 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" podUID="5db04412-b62f-417c-91ac-776767d6102f" containerName="nbdb" containerID="cri-o://14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5" gracePeriod=30 Jan 31 07:45:47 crc kubenswrapper[4826]: I0131 07:45:47.992546 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" podUID="5db04412-b62f-417c-91ac-776767d6102f" containerName="ovnkube-controller" containerID="cri-o://b577d5f45da60ff25d1b43d36415de2516a94565e631bca63635001220e0bdf3" gracePeriod=30 Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.264631 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qvwnb_5db04412-b62f-417c-91ac-776767d6102f/ovnkube-controller/3.log" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.267928 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qvwnb_5db04412-b62f-417c-91ac-776767d6102f/ovn-acl-logging/0.log" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.268519 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qvwnb_5db04412-b62f-417c-91ac-776767d6102f/ovn-controller/0.log" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.269080 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.334869 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-host-run-netns\") pod \"5db04412-b62f-417c-91ac-776767d6102f\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.334926 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2rbf5"] Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.334931 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-run-ovn\") pod \"5db04412-b62f-417c-91ac-776767d6102f\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.335164 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-systemd-units\") pod \"5db04412-b62f-417c-91ac-776767d6102f\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.335045 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "5db04412-b62f-417c-91ac-776767d6102f" (UID: "5db04412-b62f-417c-91ac-776767d6102f"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.335119 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "5db04412-b62f-417c-91ac-776767d6102f" (UID: "5db04412-b62f-417c-91ac-776767d6102f"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.335213 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5db04412-b62f-417c-91ac-776767d6102f-env-overrides\") pod \"5db04412-b62f-417c-91ac-776767d6102f\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.335250 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "5db04412-b62f-417c-91ac-776767d6102f" (UID: "5db04412-b62f-417c-91ac-776767d6102f"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.335275 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5db04412-b62f-417c-91ac-776767d6102f-ovn-node-metrics-cert\") pod \"5db04412-b62f-417c-91ac-776767d6102f\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " Jan 31 07:45:48 crc kubenswrapper[4826]: E0131 07:45:48.335320 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5db04412-b62f-417c-91ac-776767d6102f" containerName="ovnkube-controller" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.335352 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="5db04412-b62f-417c-91ac-776767d6102f" containerName="ovnkube-controller" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.335359 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-node-log" (OuterVolumeSpecName: "node-log") pod "5db04412-b62f-417c-91ac-776767d6102f" (UID: "5db04412-b62f-417c-91ac-776767d6102f"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:45:48 crc kubenswrapper[4826]: E0131 07:45:48.335365 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5db04412-b62f-417c-91ac-776767d6102f" containerName="ovn-acl-logging" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.335410 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="5db04412-b62f-417c-91ac-776767d6102f" containerName="ovn-acl-logging" Jan 31 07:45:48 crc kubenswrapper[4826]: E0131 07:45:48.335461 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5db04412-b62f-417c-91ac-776767d6102f" containerName="northd" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.335471 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="5db04412-b62f-417c-91ac-776767d6102f" containerName="northd" Jan 31 07:45:48 crc kubenswrapper[4826]: E0131 07:45:48.335493 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5db04412-b62f-417c-91ac-776767d6102f" containerName="ovnkube-controller" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.335504 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="5db04412-b62f-417c-91ac-776767d6102f" containerName="ovnkube-controller" Jan 31 07:45:48 crc kubenswrapper[4826]: E0131 07:45:48.335519 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5db04412-b62f-417c-91ac-776767d6102f" containerName="kube-rbac-proxy-node" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.335528 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="5db04412-b62f-417c-91ac-776767d6102f" containerName="kube-rbac-proxy-node" Jan 31 07:45:48 crc kubenswrapper[4826]: E0131 07:45:48.335538 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5db04412-b62f-417c-91ac-776767d6102f" containerName="sbdb" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.335547 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="5db04412-b62f-417c-91ac-776767d6102f" containerName="sbdb" Jan 31 07:45:48 crc kubenswrapper[4826]: E0131 07:45:48.335578 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5db04412-b62f-417c-91ac-776767d6102f" containerName="kube-rbac-proxy-ovn-metrics" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.335589 4826 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5db04412-b62f-417c-91ac-776767d6102f" containerName="kube-rbac-proxy-ovn-metrics" Jan 31 07:45:48 crc kubenswrapper[4826]: E0131 07:45:48.335613 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5db04412-b62f-417c-91ac-776767d6102f" containerName="ovnkube-controller" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.335623 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="5db04412-b62f-417c-91ac-776767d6102f" containerName="ovnkube-controller" Jan 31 07:45:48 crc kubenswrapper[4826]: E0131 07:45:48.335635 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5db04412-b62f-417c-91ac-776767d6102f" containerName="ovn-controller" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.335328 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-node-log\") pod \"5db04412-b62f-417c-91ac-776767d6102f\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.335699 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-run-systemd\") pod \"5db04412-b62f-417c-91ac-776767d6102f\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.335644 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="5db04412-b62f-417c-91ac-776767d6102f" containerName="ovn-controller" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.335770 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-run-openvswitch\") pod \"5db04412-b62f-417c-91ac-776767d6102f\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " Jan 31 07:45:48 crc kubenswrapper[4826]: E0131 07:45:48.335780 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5db04412-b62f-417c-91ac-776767d6102f" containerName="kubecfg-setup" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.335799 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="5db04412-b62f-417c-91ac-776767d6102f" containerName="kubecfg-setup" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.335806 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-var-lib-openvswitch\") pod \"5db04412-b62f-417c-91ac-776767d6102f\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.335847 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvlrl\" (UniqueName: \"kubernetes.io/projected/5db04412-b62f-417c-91ac-776767d6102f-kube-api-access-wvlrl\") pod \"5db04412-b62f-417c-91ac-776767d6102f\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " Jan 31 07:45:48 crc kubenswrapper[4826]: E0131 07:45:48.335864 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5db04412-b62f-417c-91ac-776767d6102f" containerName="nbdb" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.335880 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="5db04412-b62f-417c-91ac-776767d6102f" containerName="nbdb" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.335879 4826 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/5db04412-b62f-417c-91ac-776767d6102f-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "5db04412-b62f-417c-91ac-776767d6102f" (UID: "5db04412-b62f-417c-91ac-776767d6102f"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.335889 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5db04412-b62f-417c-91ac-776767d6102f-ovnkube-config\") pod \"5db04412-b62f-417c-91ac-776767d6102f\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.335953 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-host-cni-netd\") pod \"5db04412-b62f-417c-91ac-776767d6102f\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.336026 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-host-slash\") pod \"5db04412-b62f-417c-91ac-776767d6102f\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.336071 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5db04412-b62f-417c-91ac-776767d6102f-ovnkube-script-lib\") pod \"5db04412-b62f-417c-91ac-776767d6102f\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.336122 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="5db04412-b62f-417c-91ac-776767d6102f" containerName="ovnkube-controller" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.336133 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-host-cni-bin\") pod \"5db04412-b62f-417c-91ac-776767d6102f\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.336150 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="5db04412-b62f-417c-91ac-776767d6102f" containerName="ovn-controller" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.336167 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-log-socket\") pod \"5db04412-b62f-417c-91ac-776767d6102f\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.336206 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-host-kubelet\") pod \"5db04412-b62f-417c-91ac-776767d6102f\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.336211 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "5db04412-b62f-417c-91ac-776767d6102f" (UID: "5db04412-b62f-417c-91ac-776767d6102f"). 
InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.336267 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-etc-openvswitch\") pod \"5db04412-b62f-417c-91ac-776767d6102f\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.336308 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-host-run-ovn-kubernetes\") pod \"5db04412-b62f-417c-91ac-776767d6102f\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.336373 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"5db04412-b62f-417c-91ac-776767d6102f\" (UID: \"5db04412-b62f-417c-91ac-776767d6102f\") " Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.337018 4826 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-node-log\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.337049 4826 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.337073 4826 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.337132 4826 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.337150 4826 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.337169 4826 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5db04412-b62f-417c-91ac-776767d6102f-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.336171 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="5db04412-b62f-417c-91ac-776767d6102f" containerName="kube-rbac-proxy-ovn-metrics" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.337232 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="5db04412-b62f-417c-91ac-776767d6102f" containerName="ovnkube-controller" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.337263 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="5db04412-b62f-417c-91ac-776767d6102f" containerName="kube-rbac-proxy-node" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.336269 4826 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "5db04412-b62f-417c-91ac-776767d6102f" (UID: "5db04412-b62f-417c-91ac-776767d6102f"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.336438 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5db04412-b62f-417c-91ac-776767d6102f-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "5db04412-b62f-417c-91ac-776767d6102f" (UID: "5db04412-b62f-417c-91ac-776767d6102f"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.336485 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-log-socket" (OuterVolumeSpecName: "log-socket") pod "5db04412-b62f-417c-91ac-776767d6102f" (UID: "5db04412-b62f-417c-91ac-776767d6102f"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.337315 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "5db04412-b62f-417c-91ac-776767d6102f" (UID: "5db04412-b62f-417c-91ac-776767d6102f"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.336511 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "5db04412-b62f-417c-91ac-776767d6102f" (UID: "5db04412-b62f-417c-91ac-776767d6102f"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.336537 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-host-slash" (OuterVolumeSpecName: "host-slash") pod "5db04412-b62f-417c-91ac-776767d6102f" (UID: "5db04412-b62f-417c-91ac-776767d6102f"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.337153 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "5db04412-b62f-417c-91ac-776767d6102f" (UID: "5db04412-b62f-417c-91ac-776767d6102f"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.337194 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "5db04412-b62f-417c-91ac-776767d6102f" (UID: "5db04412-b62f-417c-91ac-776767d6102f"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.337231 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "5db04412-b62f-417c-91ac-776767d6102f" (UID: "5db04412-b62f-417c-91ac-776767d6102f"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.337267 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "5db04412-b62f-417c-91ac-776767d6102f" (UID: "5db04412-b62f-417c-91ac-776767d6102f"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.337258 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5db04412-b62f-417c-91ac-776767d6102f-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "5db04412-b62f-417c-91ac-776767d6102f" (UID: "5db04412-b62f-417c-91ac-776767d6102f"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.337279 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="5db04412-b62f-417c-91ac-776767d6102f" containerName="ovnkube-controller" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.337655 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="5db04412-b62f-417c-91ac-776767d6102f" containerName="northd" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.337673 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="5db04412-b62f-417c-91ac-776767d6102f" containerName="sbdb" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.337687 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="5db04412-b62f-417c-91ac-776767d6102f" containerName="ovnkube-controller" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.337704 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="5db04412-b62f-417c-91ac-776767d6102f" containerName="nbdb" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.337734 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="5db04412-b62f-417c-91ac-776767d6102f" containerName="ovn-acl-logging" Jan 31 07:45:48 crc kubenswrapper[4826]: E0131 07:45:48.338080 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5db04412-b62f-417c-91ac-776767d6102f" containerName="ovnkube-controller" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.338096 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="5db04412-b62f-417c-91ac-776767d6102f" containerName="ovnkube-controller" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.338256 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="5db04412-b62f-417c-91ac-776767d6102f" containerName="ovnkube-controller" Jan 31 07:45:48 crc kubenswrapper[4826]: E0131 07:45:48.338382 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5db04412-b62f-417c-91ac-776767d6102f" containerName="ovnkube-controller" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.338393 4826 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5db04412-b62f-417c-91ac-776767d6102f" containerName="ovnkube-controller" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.341376 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.346194 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5db04412-b62f-417c-91ac-776767d6102f-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "5db04412-b62f-417c-91ac-776767d6102f" (UID: "5db04412-b62f-417c-91ac-776767d6102f"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.346544 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5db04412-b62f-417c-91ac-776767d6102f-kube-api-access-wvlrl" (OuterVolumeSpecName: "kube-api-access-wvlrl") pod "5db04412-b62f-417c-91ac-776767d6102f" (UID: "5db04412-b62f-417c-91ac-776767d6102f"). InnerVolumeSpecName "kube-api-access-wvlrl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.359660 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "5db04412-b62f-417c-91ac-776767d6102f" (UID: "5db04412-b62f-417c-91ac-776767d6102f"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.438717 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8809e8b2-07e6-4928-a791-72fa6ce24550-systemd-units\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.438777 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8809e8b2-07e6-4928-a791-72fa6ce24550-host-slash\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.438800 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8809e8b2-07e6-4928-a791-72fa6ce24550-node-log\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.438837 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8809e8b2-07e6-4928-a791-72fa6ce24550-run-openvswitch\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.439182 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8809e8b2-07e6-4928-a791-72fa6ce24550-var-lib-openvswitch\") pod \"ovnkube-node-2rbf5\" (UID: 
\"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.439269 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v74t8\" (UniqueName: \"kubernetes.io/projected/8809e8b2-07e6-4928-a791-72fa6ce24550-kube-api-access-v74t8\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.439302 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8809e8b2-07e6-4928-a791-72fa6ce24550-ovnkube-config\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.439328 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8809e8b2-07e6-4928-a791-72fa6ce24550-host-cni-netd\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.439362 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8809e8b2-07e6-4928-a791-72fa6ce24550-host-run-ovn-kubernetes\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.439410 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8809e8b2-07e6-4928-a791-72fa6ce24550-run-systemd\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.439440 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8809e8b2-07e6-4928-a791-72fa6ce24550-host-run-netns\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.439487 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8809e8b2-07e6-4928-a791-72fa6ce24550-log-socket\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.439507 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8809e8b2-07e6-4928-a791-72fa6ce24550-env-overrides\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.439534 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/8809e8b2-07e6-4928-a791-72fa6ce24550-ovn-node-metrics-cert\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.439554 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8809e8b2-07e6-4928-a791-72fa6ce24550-host-cni-bin\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.439574 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8809e8b2-07e6-4928-a791-72fa6ce24550-etc-openvswitch\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.440458 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8809e8b2-07e6-4928-a791-72fa6ce24550-ovnkube-script-lib\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.440494 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8809e8b2-07e6-4928-a791-72fa6ce24550-host-kubelet\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.440519 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8809e8b2-07e6-4928-a791-72fa6ce24550-run-ovn\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.440548 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8809e8b2-07e6-4928-a791-72fa6ce24550-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.440615 4826 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5db04412-b62f-417c-91ac-776767d6102f-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.440631 4826 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.440644 4826 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.440657 4826 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvlrl\" (UniqueName: \"kubernetes.io/projected/5db04412-b62f-417c-91ac-776767d6102f-kube-api-access-wvlrl\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.440671 4826 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5db04412-b62f-417c-91ac-776767d6102f-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.440683 4826 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.440694 4826 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-host-slash\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.440705 4826 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5db04412-b62f-417c-91ac-776767d6102f-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.440717 4826 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.440728 4826 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-log-socket\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.440740 4826 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.440753 4826 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.440766 4826 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.440779 4826 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5db04412-b62f-417c-91ac-776767d6102f-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.542032 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v74t8\" (UniqueName: \"kubernetes.io/projected/8809e8b2-07e6-4928-a791-72fa6ce24550-kube-api-access-v74t8\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.542689 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/8809e8b2-07e6-4928-a791-72fa6ce24550-ovnkube-config\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.542752 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8809e8b2-07e6-4928-a791-72fa6ce24550-host-cni-netd\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.542807 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8809e8b2-07e6-4928-a791-72fa6ce24550-host-run-ovn-kubernetes\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.542869 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8809e8b2-07e6-4928-a791-72fa6ce24550-host-cni-netd\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.542875 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8809e8b2-07e6-4928-a791-72fa6ce24550-run-systemd\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.542943 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8809e8b2-07e6-4928-a791-72fa6ce24550-run-systemd\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.543000 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8809e8b2-07e6-4928-a791-72fa6ce24550-host-run-netns\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.542949 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8809e8b2-07e6-4928-a791-72fa6ce24550-host-run-netns\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.543046 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8809e8b2-07e6-4928-a791-72fa6ce24550-host-run-ovn-kubernetes\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.543084 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8809e8b2-07e6-4928-a791-72fa6ce24550-log-socket\") pod \"ovnkube-node-2rbf5\" (UID: 
\"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.543136 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8809e8b2-07e6-4928-a791-72fa6ce24550-env-overrides\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.543190 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8809e8b2-07e6-4928-a791-72fa6ce24550-ovn-node-metrics-cert\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.543231 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8809e8b2-07e6-4928-a791-72fa6ce24550-log-socket\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.543240 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8809e8b2-07e6-4928-a791-72fa6ce24550-host-cni-bin\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.543292 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8809e8b2-07e6-4928-a791-72fa6ce24550-etc-openvswitch\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.543378 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8809e8b2-07e6-4928-a791-72fa6ce24550-ovnkube-script-lib\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.543424 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8809e8b2-07e6-4928-a791-72fa6ce24550-host-kubelet\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.543476 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8809e8b2-07e6-4928-a791-72fa6ce24550-run-ovn\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.543527 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8809e8b2-07e6-4928-a791-72fa6ce24550-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.543578 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8809e8b2-07e6-4928-a791-72fa6ce24550-systemd-units\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.543625 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8809e8b2-07e6-4928-a791-72fa6ce24550-host-slash\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.543666 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8809e8b2-07e6-4928-a791-72fa6ce24550-node-log\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.543732 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8809e8b2-07e6-4928-a791-72fa6ce24550-run-openvswitch\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.543791 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8809e8b2-07e6-4928-a791-72fa6ce24550-var-lib-openvswitch\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.543864 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8809e8b2-07e6-4928-a791-72fa6ce24550-env-overrides\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.543926 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8809e8b2-07e6-4928-a791-72fa6ce24550-run-ovn\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.543923 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8809e8b2-07e6-4928-a791-72fa6ce24550-var-lib-openvswitch\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.544006 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8809e8b2-07e6-4928-a791-72fa6ce24550-ovnkube-config\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.544008 4826 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8809e8b2-07e6-4928-a791-72fa6ce24550-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.544062 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8809e8b2-07e6-4928-a791-72fa6ce24550-systemd-units\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.544132 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8809e8b2-07e6-4928-a791-72fa6ce24550-etc-openvswitch\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.544135 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8809e8b2-07e6-4928-a791-72fa6ce24550-host-cni-bin\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.544173 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8809e8b2-07e6-4928-a791-72fa6ce24550-node-log\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.544226 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8809e8b2-07e6-4928-a791-72fa6ce24550-host-slash\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.544227 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8809e8b2-07e6-4928-a791-72fa6ce24550-run-openvswitch\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.544273 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8809e8b2-07e6-4928-a791-72fa6ce24550-host-kubelet\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.544810 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8809e8b2-07e6-4928-a791-72fa6ce24550-ovnkube-script-lib\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.551069 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/8809e8b2-07e6-4928-a791-72fa6ce24550-ovn-node-metrics-cert\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.575258 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v74t8\" (UniqueName: \"kubernetes.io/projected/8809e8b2-07e6-4928-a791-72fa6ce24550-kube-api-access-v74t8\") pod \"ovnkube-node-2rbf5\" (UID: \"8809e8b2-07e6-4928-a791-72fa6ce24550\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.654837 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:48 crc kubenswrapper[4826]: I0131 07:45:48.897762 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-m7jsx" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.033466 4826 generic.go:334] "Generic (PLEG): container finished" podID="8809e8b2-07e6-4928-a791-72fa6ce24550" containerID="28cb7b9633550f71e1a457763e0f5b99b11c0d054b8060f84bbb3432aa7d46c9" exitCode=0 Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.033550 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" event={"ID":"8809e8b2-07e6-4928-a791-72fa6ce24550","Type":"ContainerDied","Data":"28cb7b9633550f71e1a457763e0f5b99b11c0d054b8060f84bbb3432aa7d46c9"} Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.033577 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" event={"ID":"8809e8b2-07e6-4928-a791-72fa6ce24550","Type":"ContainerStarted","Data":"d72aa57f703e0a8e6283e3e7e79d49a8d68fe94fb24c10b439f4515a5524d7f5"} Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.037051 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qvwnb_5db04412-b62f-417c-91ac-776767d6102f/ovnkube-controller/3.log" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.040537 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qvwnb_5db04412-b62f-417c-91ac-776767d6102f/ovn-acl-logging/0.log" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.041053 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qvwnb_5db04412-b62f-417c-91ac-776767d6102f/ovn-controller/0.log" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.041714 4826 generic.go:334] "Generic (PLEG): container finished" podID="5db04412-b62f-417c-91ac-776767d6102f" containerID="b577d5f45da60ff25d1b43d36415de2516a94565e631bca63635001220e0bdf3" exitCode=0 Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.041742 4826 generic.go:334] "Generic (PLEG): container finished" podID="5db04412-b62f-417c-91ac-776767d6102f" containerID="7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6" exitCode=0 Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.041763 4826 generic.go:334] "Generic (PLEG): container finished" podID="5db04412-b62f-417c-91ac-776767d6102f" containerID="14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5" exitCode=0 Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.041776 4826 generic.go:334] "Generic (PLEG): container finished" podID="5db04412-b62f-417c-91ac-776767d6102f" 
containerID="0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d" exitCode=0 Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.041793 4826 generic.go:334] "Generic (PLEG): container finished" podID="5db04412-b62f-417c-91ac-776767d6102f" containerID="7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e" exitCode=0 Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.041805 4826 generic.go:334] "Generic (PLEG): container finished" podID="5db04412-b62f-417c-91ac-776767d6102f" containerID="bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b" exitCode=0 Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.041819 4826 generic.go:334] "Generic (PLEG): container finished" podID="5db04412-b62f-417c-91ac-776767d6102f" containerID="25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0" exitCode=143 Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.041832 4826 generic.go:334] "Generic (PLEG): container finished" podID="5db04412-b62f-417c-91ac-776767d6102f" containerID="0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2" exitCode=143 Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.041918 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" event={"ID":"5db04412-b62f-417c-91ac-776767d6102f","Type":"ContainerDied","Data":"b577d5f45da60ff25d1b43d36415de2516a94565e631bca63635001220e0bdf3"} Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.041990 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.042026 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" event={"ID":"5db04412-b62f-417c-91ac-776767d6102f","Type":"ContainerDied","Data":"7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6"} Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.042062 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" event={"ID":"5db04412-b62f-417c-91ac-776767d6102f","Type":"ContainerDied","Data":"14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5"} Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.042092 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" event={"ID":"5db04412-b62f-417c-91ac-776767d6102f","Type":"ContainerDied","Data":"0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d"} Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.042116 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" event={"ID":"5db04412-b62f-417c-91ac-776767d6102f","Type":"ContainerDied","Data":"7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e"} Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.042144 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" event={"ID":"5db04412-b62f-417c-91ac-776767d6102f","Type":"ContainerDied","Data":"bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b"} Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.042168 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d893aaee284ab112e56f303b51894196d333364ced205c5f2b1d55f1c21b5f33"} Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.042192 4826 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6"} Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.042208 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5"} Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.042223 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d"} Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.042197 4826 scope.go:117] "RemoveContainer" containerID="b577d5f45da60ff25d1b43d36415de2516a94565e631bca63635001220e0bdf3" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.042237 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e"} Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.042364 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b"} Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.042389 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0"} Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.042406 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2"} Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.042428 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0"} Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.042467 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" event={"ID":"5db04412-b62f-417c-91ac-776767d6102f","Type":"ContainerDied","Data":"25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0"} Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.042498 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b577d5f45da60ff25d1b43d36415de2516a94565e631bca63635001220e0bdf3"} Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.042516 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d893aaee284ab112e56f303b51894196d333364ced205c5f2b1d55f1c21b5f33"} Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.042531 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6"} Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.042546 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5"} Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.042561 4826 pod_container_deletor.go:114] "Failed to issue the request to remove 
container" containerID={"Type":"cri-o","ID":"0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d"} Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.042575 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e"} Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.042589 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b"} Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.042603 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0"} Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.042617 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2"} Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.042631 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0"} Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.042652 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" event={"ID":"5db04412-b62f-417c-91ac-776767d6102f","Type":"ContainerDied","Data":"0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2"} Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.042683 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b577d5f45da60ff25d1b43d36415de2516a94565e631bca63635001220e0bdf3"} Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.042707 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d893aaee284ab112e56f303b51894196d333364ced205c5f2b1d55f1c21b5f33"} Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.042723 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6"} Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.042737 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5"} Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.042752 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d"} Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.042766 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e"} Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.042782 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b"} Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.042796 4826 pod_container_deletor.go:114] "Failed to issue the request to remove 
container" containerID={"Type":"cri-o","ID":"25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0"} Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.042811 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2"} Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.042825 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0"} Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.042846 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvwnb" event={"ID":"5db04412-b62f-417c-91ac-776767d6102f","Type":"ContainerDied","Data":"6ce3aa4663864561f147e7b740f16c73f1b27265c6ec1a88d44c097a92983841"} Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.042871 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b577d5f45da60ff25d1b43d36415de2516a94565e631bca63635001220e0bdf3"} Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.042891 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d893aaee284ab112e56f303b51894196d333364ced205c5f2b1d55f1c21b5f33"} Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.042905 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6"} Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.042920 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5"} Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.042934 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d"} Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.042948 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e"} Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.042961 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b"} Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.043008 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0"} Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.043023 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2"} Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.043038 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0"} Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.046062 4826 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-wtbb9_b672fd90-a70c-4f27-b711-e58f269efccd/kube-multus/2.log" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.046769 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wtbb9_b672fd90-a70c-4f27-b711-e58f269efccd/kube-multus/1.log" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.046854 4826 generic.go:334] "Generic (PLEG): container finished" podID="b672fd90-a70c-4f27-b711-e58f269efccd" containerID="382e0d20cd9d87cb099d8ef02da0a340a24566c6c856081d606e0ba9fa917d2c" exitCode=2 Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.046902 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wtbb9" event={"ID":"b672fd90-a70c-4f27-b711-e58f269efccd","Type":"ContainerDied","Data":"382e0d20cd9d87cb099d8ef02da0a340a24566c6c856081d606e0ba9fa917d2c"} Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.046945 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0f290a2cfcec4d14d2b2c21f7c56558675b8b7171bf9d1895d89f632cb35b815"} Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.047656 4826 scope.go:117] "RemoveContainer" containerID="382e0d20cd9d87cb099d8ef02da0a340a24566c6c856081d606e0ba9fa917d2c" Jan 31 07:45:49 crc kubenswrapper[4826]: E0131 07:45:49.048143 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-wtbb9_openshift-multus(b672fd90-a70c-4f27-b711-e58f269efccd)\"" pod="openshift-multus/multus-wtbb9" podUID="b672fd90-a70c-4f27-b711-e58f269efccd" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.074278 4826 scope.go:117] "RemoveContainer" containerID="d893aaee284ab112e56f303b51894196d333364ced205c5f2b1d55f1c21b5f33" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.106480 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qvwnb"] Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.110615 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qvwnb"] Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.145660 4826 scope.go:117] "RemoveContainer" containerID="7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.181871 4826 scope.go:117] "RemoveContainer" containerID="14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.209276 4826 scope.go:117] "RemoveContainer" containerID="0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.223147 4826 scope.go:117] "RemoveContainer" containerID="7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.244116 4826 scope.go:117] "RemoveContainer" containerID="bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.271945 4826 scope.go:117] "RemoveContainer" containerID="25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.300582 4826 scope.go:117] "RemoveContainer" containerID="0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.320628 4826 
scope.go:117] "RemoveContainer" containerID="01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.340014 4826 scope.go:117] "RemoveContainer" containerID="b577d5f45da60ff25d1b43d36415de2516a94565e631bca63635001220e0bdf3" Jan 31 07:45:49 crc kubenswrapper[4826]: E0131 07:45:49.340412 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b577d5f45da60ff25d1b43d36415de2516a94565e631bca63635001220e0bdf3\": container with ID starting with b577d5f45da60ff25d1b43d36415de2516a94565e631bca63635001220e0bdf3 not found: ID does not exist" containerID="b577d5f45da60ff25d1b43d36415de2516a94565e631bca63635001220e0bdf3" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.340463 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b577d5f45da60ff25d1b43d36415de2516a94565e631bca63635001220e0bdf3"} err="failed to get container status \"b577d5f45da60ff25d1b43d36415de2516a94565e631bca63635001220e0bdf3\": rpc error: code = NotFound desc = could not find container \"b577d5f45da60ff25d1b43d36415de2516a94565e631bca63635001220e0bdf3\": container with ID starting with b577d5f45da60ff25d1b43d36415de2516a94565e631bca63635001220e0bdf3 not found: ID does not exist" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.340496 4826 scope.go:117] "RemoveContainer" containerID="d893aaee284ab112e56f303b51894196d333364ced205c5f2b1d55f1c21b5f33" Jan 31 07:45:49 crc kubenswrapper[4826]: E0131 07:45:49.340729 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d893aaee284ab112e56f303b51894196d333364ced205c5f2b1d55f1c21b5f33\": container with ID starting with d893aaee284ab112e56f303b51894196d333364ced205c5f2b1d55f1c21b5f33 not found: ID does not exist" containerID="d893aaee284ab112e56f303b51894196d333364ced205c5f2b1d55f1c21b5f33" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.340767 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d893aaee284ab112e56f303b51894196d333364ced205c5f2b1d55f1c21b5f33"} err="failed to get container status \"d893aaee284ab112e56f303b51894196d333364ced205c5f2b1d55f1c21b5f33\": rpc error: code = NotFound desc = could not find container \"d893aaee284ab112e56f303b51894196d333364ced205c5f2b1d55f1c21b5f33\": container with ID starting with d893aaee284ab112e56f303b51894196d333364ced205c5f2b1d55f1c21b5f33 not found: ID does not exist" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.340789 4826 scope.go:117] "RemoveContainer" containerID="7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6" Jan 31 07:45:49 crc kubenswrapper[4826]: E0131 07:45:49.341054 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6\": container with ID starting with 7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6 not found: ID does not exist" containerID="7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.341088 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6"} err="failed to get container status 
\"7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6\": rpc error: code = NotFound desc = could not find container \"7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6\": container with ID starting with 7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6 not found: ID does not exist" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.341110 4826 scope.go:117] "RemoveContainer" containerID="14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5" Jan 31 07:45:49 crc kubenswrapper[4826]: E0131 07:45:49.341330 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5\": container with ID starting with 14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5 not found: ID does not exist" containerID="14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.341363 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5"} err="failed to get container status \"14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5\": rpc error: code = NotFound desc = could not find container \"14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5\": container with ID starting with 14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5 not found: ID does not exist" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.341385 4826 scope.go:117] "RemoveContainer" containerID="0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d" Jan 31 07:45:49 crc kubenswrapper[4826]: E0131 07:45:49.341594 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d\": container with ID starting with 0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d not found: ID does not exist" containerID="0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.341625 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d"} err="failed to get container status \"0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d\": rpc error: code = NotFound desc = could not find container \"0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d\": container with ID starting with 0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d not found: ID does not exist" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.341647 4826 scope.go:117] "RemoveContainer" containerID="7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e" Jan 31 07:45:49 crc kubenswrapper[4826]: E0131 07:45:49.341852 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e\": container with ID starting with 7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e not found: ID does not exist" containerID="7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.341885 4826 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e"} err="failed to get container status \"7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e\": rpc error: code = NotFound desc = could not find container \"7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e\": container with ID starting with 7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e not found: ID does not exist" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.341910 4826 scope.go:117] "RemoveContainer" containerID="bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b" Jan 31 07:45:49 crc kubenswrapper[4826]: E0131 07:45:49.342459 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b\": container with ID starting with bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b not found: ID does not exist" containerID="bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.342523 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b"} err="failed to get container status \"bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b\": rpc error: code = NotFound desc = could not find container \"bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b\": container with ID starting with bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b not found: ID does not exist" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.342554 4826 scope.go:117] "RemoveContainer" containerID="25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0" Jan 31 07:45:49 crc kubenswrapper[4826]: E0131 07:45:49.342847 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0\": container with ID starting with 25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0 not found: ID does not exist" containerID="25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.342886 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0"} err="failed to get container status \"25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0\": rpc error: code = NotFound desc = could not find container \"25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0\": container with ID starting with 25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0 not found: ID does not exist" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.342912 4826 scope.go:117] "RemoveContainer" containerID="0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2" Jan 31 07:45:49 crc kubenswrapper[4826]: E0131 07:45:49.343214 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2\": container with ID starting with 0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2 not found: ID does not exist" 
containerID="0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.343255 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2"} err="failed to get container status \"0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2\": rpc error: code = NotFound desc = could not find container \"0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2\": container with ID starting with 0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2 not found: ID does not exist" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.343280 4826 scope.go:117] "RemoveContainer" containerID="01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0" Jan 31 07:45:49 crc kubenswrapper[4826]: E0131 07:45:49.343528 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\": container with ID starting with 01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0 not found: ID does not exist" containerID="01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.343558 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0"} err="failed to get container status \"01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\": rpc error: code = NotFound desc = could not find container \"01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\": container with ID starting with 01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0 not found: ID does not exist" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.343577 4826 scope.go:117] "RemoveContainer" containerID="b577d5f45da60ff25d1b43d36415de2516a94565e631bca63635001220e0bdf3" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.344109 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b577d5f45da60ff25d1b43d36415de2516a94565e631bca63635001220e0bdf3"} err="failed to get container status \"b577d5f45da60ff25d1b43d36415de2516a94565e631bca63635001220e0bdf3\": rpc error: code = NotFound desc = could not find container \"b577d5f45da60ff25d1b43d36415de2516a94565e631bca63635001220e0bdf3\": container with ID starting with b577d5f45da60ff25d1b43d36415de2516a94565e631bca63635001220e0bdf3 not found: ID does not exist" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.344139 4826 scope.go:117] "RemoveContainer" containerID="d893aaee284ab112e56f303b51894196d333364ced205c5f2b1d55f1c21b5f33" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.344483 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d893aaee284ab112e56f303b51894196d333364ced205c5f2b1d55f1c21b5f33"} err="failed to get container status \"d893aaee284ab112e56f303b51894196d333364ced205c5f2b1d55f1c21b5f33\": rpc error: code = NotFound desc = could not find container \"d893aaee284ab112e56f303b51894196d333364ced205c5f2b1d55f1c21b5f33\": container with ID starting with d893aaee284ab112e56f303b51894196d333364ced205c5f2b1d55f1c21b5f33 not found: ID does not exist" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.344506 4826 scope.go:117] "RemoveContainer" 
containerID="7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.344859 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6"} err="failed to get container status \"7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6\": rpc error: code = NotFound desc = could not find container \"7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6\": container with ID starting with 7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6 not found: ID does not exist" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.344896 4826 scope.go:117] "RemoveContainer" containerID="14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.345391 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5"} err="failed to get container status \"14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5\": rpc error: code = NotFound desc = could not find container \"14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5\": container with ID starting with 14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5 not found: ID does not exist" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.345418 4826 scope.go:117] "RemoveContainer" containerID="0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.345865 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d"} err="failed to get container status \"0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d\": rpc error: code = NotFound desc = could not find container \"0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d\": container with ID starting with 0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d not found: ID does not exist" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.345911 4826 scope.go:117] "RemoveContainer" containerID="7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.346248 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e"} err="failed to get container status \"7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e\": rpc error: code = NotFound desc = could not find container \"7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e\": container with ID starting with 7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e not found: ID does not exist" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.346268 4826 scope.go:117] "RemoveContainer" containerID="bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.346606 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b"} err="failed to get container status \"bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b\": rpc error: code = NotFound desc = could not find 
container \"bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b\": container with ID starting with bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b not found: ID does not exist" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.346622 4826 scope.go:117] "RemoveContainer" containerID="25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.346878 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0"} err="failed to get container status \"25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0\": rpc error: code = NotFound desc = could not find container \"25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0\": container with ID starting with 25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0 not found: ID does not exist" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.346908 4826 scope.go:117] "RemoveContainer" containerID="0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.347160 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2"} err="failed to get container status \"0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2\": rpc error: code = NotFound desc = could not find container \"0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2\": container with ID starting with 0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2 not found: ID does not exist" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.347184 4826 scope.go:117] "RemoveContainer" containerID="01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.347385 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0"} err="failed to get container status \"01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\": rpc error: code = NotFound desc = could not find container \"01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\": container with ID starting with 01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0 not found: ID does not exist" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.347411 4826 scope.go:117] "RemoveContainer" containerID="b577d5f45da60ff25d1b43d36415de2516a94565e631bca63635001220e0bdf3" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.347725 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b577d5f45da60ff25d1b43d36415de2516a94565e631bca63635001220e0bdf3"} err="failed to get container status \"b577d5f45da60ff25d1b43d36415de2516a94565e631bca63635001220e0bdf3\": rpc error: code = NotFound desc = could not find container \"b577d5f45da60ff25d1b43d36415de2516a94565e631bca63635001220e0bdf3\": container with ID starting with b577d5f45da60ff25d1b43d36415de2516a94565e631bca63635001220e0bdf3 not found: ID does not exist" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.347748 4826 scope.go:117] "RemoveContainer" containerID="d893aaee284ab112e56f303b51894196d333364ced205c5f2b1d55f1c21b5f33" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.347985 4826 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d893aaee284ab112e56f303b51894196d333364ced205c5f2b1d55f1c21b5f33"} err="failed to get container status \"d893aaee284ab112e56f303b51894196d333364ced205c5f2b1d55f1c21b5f33\": rpc error: code = NotFound desc = could not find container \"d893aaee284ab112e56f303b51894196d333364ced205c5f2b1d55f1c21b5f33\": container with ID starting with d893aaee284ab112e56f303b51894196d333364ced205c5f2b1d55f1c21b5f33 not found: ID does not exist" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.348013 4826 scope.go:117] "RemoveContainer" containerID="7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.348340 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6"} err="failed to get container status \"7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6\": rpc error: code = NotFound desc = could not find container \"7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6\": container with ID starting with 7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6 not found: ID does not exist" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.348362 4826 scope.go:117] "RemoveContainer" containerID="14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.348608 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5"} err="failed to get container status \"14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5\": rpc error: code = NotFound desc = could not find container \"14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5\": container with ID starting with 14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5 not found: ID does not exist" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.348628 4826 scope.go:117] "RemoveContainer" containerID="0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.348828 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d"} err="failed to get container status \"0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d\": rpc error: code = NotFound desc = could not find container \"0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d\": container with ID starting with 0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d not found: ID does not exist" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.348856 4826 scope.go:117] "RemoveContainer" containerID="7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.349211 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e"} err="failed to get container status \"7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e\": rpc error: code = NotFound desc = could not find container \"7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e\": container with ID starting with 
7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e not found: ID does not exist" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.349236 4826 scope.go:117] "RemoveContainer" containerID="bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.349597 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b"} err="failed to get container status \"bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b\": rpc error: code = NotFound desc = could not find container \"bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b\": container with ID starting with bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b not found: ID does not exist" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.349632 4826 scope.go:117] "RemoveContainer" containerID="25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.350231 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0"} err="failed to get container status \"25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0\": rpc error: code = NotFound desc = could not find container \"25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0\": container with ID starting with 25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0 not found: ID does not exist" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.350266 4826 scope.go:117] "RemoveContainer" containerID="0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.350531 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2"} err="failed to get container status \"0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2\": rpc error: code = NotFound desc = could not find container \"0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2\": container with ID starting with 0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2 not found: ID does not exist" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.350564 4826 scope.go:117] "RemoveContainer" containerID="01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.350778 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0"} err="failed to get container status \"01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\": rpc error: code = NotFound desc = could not find container \"01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\": container with ID starting with 01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0 not found: ID does not exist" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.350805 4826 scope.go:117] "RemoveContainer" containerID="b577d5f45da60ff25d1b43d36415de2516a94565e631bca63635001220e0bdf3" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.351022 4826 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b577d5f45da60ff25d1b43d36415de2516a94565e631bca63635001220e0bdf3"} err="failed to get container status \"b577d5f45da60ff25d1b43d36415de2516a94565e631bca63635001220e0bdf3\": rpc error: code = NotFound desc = could not find container \"b577d5f45da60ff25d1b43d36415de2516a94565e631bca63635001220e0bdf3\": container with ID starting with b577d5f45da60ff25d1b43d36415de2516a94565e631bca63635001220e0bdf3 not found: ID does not exist" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.351051 4826 scope.go:117] "RemoveContainer" containerID="d893aaee284ab112e56f303b51894196d333364ced205c5f2b1d55f1c21b5f33" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.351256 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d893aaee284ab112e56f303b51894196d333364ced205c5f2b1d55f1c21b5f33"} err="failed to get container status \"d893aaee284ab112e56f303b51894196d333364ced205c5f2b1d55f1c21b5f33\": rpc error: code = NotFound desc = could not find container \"d893aaee284ab112e56f303b51894196d333364ced205c5f2b1d55f1c21b5f33\": container with ID starting with d893aaee284ab112e56f303b51894196d333364ced205c5f2b1d55f1c21b5f33 not found: ID does not exist" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.351280 4826 scope.go:117] "RemoveContainer" containerID="7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.351466 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6"} err="failed to get container status \"7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6\": rpc error: code = NotFound desc = could not find container \"7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6\": container with ID starting with 7a7932c07eeab630a858246757632055efeebc48e24b48540e6eef0dd7f674a6 not found: ID does not exist" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.351493 4826 scope.go:117] "RemoveContainer" containerID="14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.351663 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5"} err="failed to get container status \"14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5\": rpc error: code = NotFound desc = could not find container \"14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5\": container with ID starting with 14e42bd94f4a5bad156816b6f37ba34eb4100c6aa4df284d39f6985295c8ffe5 not found: ID does not exist" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.351689 4826 scope.go:117] "RemoveContainer" containerID="0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.351952 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d"} err="failed to get container status \"0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d\": rpc error: code = NotFound desc = could not find container \"0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d\": container with ID starting with 0659f4702678bb19ab2aa88e41088cf17773436cc217765abfccdd12ebc9201d not found: ID does not exist" Jan 
31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.351997 4826 scope.go:117] "RemoveContainer" containerID="7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.352271 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e"} err="failed to get container status \"7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e\": rpc error: code = NotFound desc = could not find container \"7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e\": container with ID starting with 7c680edd2b66f1a50639fc788125ecacca714c54d3f4373f00b93d10b7b2ba6e not found: ID does not exist" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.352306 4826 scope.go:117] "RemoveContainer" containerID="bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.352848 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b"} err="failed to get container status \"bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b\": rpc error: code = NotFound desc = could not find container \"bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b\": container with ID starting with bd44dc2531b6a5384ee602578b7f10547c827819f22882ccecc9f082d1ef9b5b not found: ID does not exist" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.352881 4826 scope.go:117] "RemoveContainer" containerID="25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.353242 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0"} err="failed to get container status \"25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0\": rpc error: code = NotFound desc = could not find container \"25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0\": container with ID starting with 25913a2b4294cada25a75257e6f92450da1e972fbe39cc74afe71978ddd8c3e0 not found: ID does not exist" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.353263 4826 scope.go:117] "RemoveContainer" containerID="0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.353464 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2"} err="failed to get container status \"0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2\": rpc error: code = NotFound desc = could not find container \"0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2\": container with ID starting with 0621856164ac3648058623b47e8b062c910012882aff244024b0bc5629d5e0f2 not found: ID does not exist" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.353489 4826 scope.go:117] "RemoveContainer" containerID="01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.353673 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0"} err="failed to get container status 
\"01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\": rpc error: code = NotFound desc = could not find container \"01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0\": container with ID starting with 01ac52454718149875a4d4103931280bc829ad81652fcc61d6763136684af2a0 not found: ID does not exist" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.353717 4826 scope.go:117] "RemoveContainer" containerID="b577d5f45da60ff25d1b43d36415de2516a94565e631bca63635001220e0bdf3" Jan 31 07:45:49 crc kubenswrapper[4826]: I0131 07:45:49.353895 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b577d5f45da60ff25d1b43d36415de2516a94565e631bca63635001220e0bdf3"} err="failed to get container status \"b577d5f45da60ff25d1b43d36415de2516a94565e631bca63635001220e0bdf3\": rpc error: code = NotFound desc = could not find container \"b577d5f45da60ff25d1b43d36415de2516a94565e631bca63635001220e0bdf3\": container with ID starting with b577d5f45da60ff25d1b43d36415de2516a94565e631bca63635001220e0bdf3 not found: ID does not exist" Jan 31 07:45:50 crc kubenswrapper[4826]: I0131 07:45:50.066228 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" event={"ID":"8809e8b2-07e6-4928-a791-72fa6ce24550","Type":"ContainerStarted","Data":"431e08c70a27a1ea744f141220136dcf753d74d04ee2b68a3f2a8b257ab65013"} Jan 31 07:45:50 crc kubenswrapper[4826]: I0131 07:45:50.066318 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" event={"ID":"8809e8b2-07e6-4928-a791-72fa6ce24550","Type":"ContainerStarted","Data":"fba2a72c2f70efbfe46e59be4f8e495709aa644544807e7280adabe48b2bcd37"} Jan 31 07:45:50 crc kubenswrapper[4826]: I0131 07:45:50.066343 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" event={"ID":"8809e8b2-07e6-4928-a791-72fa6ce24550","Type":"ContainerStarted","Data":"2d110fa81bb9eb6782765f7c29143e63ebd42659f3eaa990ea53592228e10f7b"} Jan 31 07:45:50 crc kubenswrapper[4826]: I0131 07:45:50.066362 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" event={"ID":"8809e8b2-07e6-4928-a791-72fa6ce24550","Type":"ContainerStarted","Data":"b588c010a7c5a770626307520e29d9b2f6806820b5f9f6809a73c72597318618"} Jan 31 07:45:50 crc kubenswrapper[4826]: I0131 07:45:50.066379 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" event={"ID":"8809e8b2-07e6-4928-a791-72fa6ce24550","Type":"ContainerStarted","Data":"67eb6b472e8586db0ab89edb95c163dff69c0f38b1b4647cd104fa6c38ed9e8c"} Jan 31 07:45:50 crc kubenswrapper[4826]: I0131 07:45:50.066494 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" event={"ID":"8809e8b2-07e6-4928-a791-72fa6ce24550","Type":"ContainerStarted","Data":"565fc4e14157c4bda1ebecd55e13ac39cd58a2fb972a1ad404f7ab9fcb5c3aa6"} Jan 31 07:45:50 crc kubenswrapper[4826]: I0131 07:45:50.818681 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5db04412-b62f-417c-91ac-776767d6102f" path="/var/lib/kubelet/pods/5db04412-b62f-417c-91ac-776767d6102f/volumes" Jan 31 07:45:53 crc kubenswrapper[4826]: I0131 07:45:53.092999 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" 
event={"ID":"8809e8b2-07e6-4928-a791-72fa6ce24550","Type":"ContainerStarted","Data":"1de10c113c7ebc7641327a041aed3fe51201f8248e0b2ff57d8a88e74bd27679"} Jan 31 07:45:55 crc kubenswrapper[4826]: I0131 07:45:55.108305 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" event={"ID":"8809e8b2-07e6-4928-a791-72fa6ce24550","Type":"ContainerStarted","Data":"9741fd1da8f1c59e7d847d03b5f2fe21f4cb4651772c0369c797a48b48ba2b42"} Jan 31 07:45:55 crc kubenswrapper[4826]: I0131 07:45:55.108610 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:55 crc kubenswrapper[4826]: I0131 07:45:55.108625 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:55 crc kubenswrapper[4826]: I0131 07:45:55.108759 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:55 crc kubenswrapper[4826]: I0131 07:45:55.140207 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:55 crc kubenswrapper[4826]: I0131 07:45:55.141915 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:45:55 crc kubenswrapper[4826]: I0131 07:45:55.153424 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" podStartSLOduration=7.153389465 podStartE2EDuration="7.153389465s" podCreationTimestamp="2026-01-31 07:45:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:45:55.145065617 +0000 UTC m=+586.998951996" watchObservedRunningTime="2026-01-31 07:45:55.153389465 +0000 UTC m=+587.007275844" Jan 31 07:45:57 crc kubenswrapper[4826]: I0131 07:45:57.377084 4826 patch_prober.go:28] interesting pod/machine-config-daemon-8v6ng container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 07:45:57 crc kubenswrapper[4826]: I0131 07:45:57.377482 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 07:45:59 crc kubenswrapper[4826]: I0131 07:45:59.809336 4826 scope.go:117] "RemoveContainer" containerID="382e0d20cd9d87cb099d8ef02da0a340a24566c6c856081d606e0ba9fa917d2c" Jan 31 07:45:59 crc kubenswrapper[4826]: E0131 07:45:59.809809 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-wtbb9_openshift-multus(b672fd90-a70c-4f27-b711-e58f269efccd)\"" pod="openshift-multus/multus-wtbb9" podUID="b672fd90-a70c-4f27-b711-e58f269efccd" Jan 31 07:46:09 crc kubenswrapper[4826]: I0131 07:46:09.114097 4826 scope.go:117] "RemoveContainer" containerID="6c255151f7c302aaa5ba21f8490f0ed2f28e5388c2a332890bf1796341b6ebe3" Jan 31 07:46:09 crc kubenswrapper[4826]: I0131 07:46:09.141441 4826 
scope.go:117] "RemoveContainer" containerID="0f290a2cfcec4d14d2b2c21f7c56558675b8b7171bf9d1895d89f632cb35b815" Jan 31 07:46:09 crc kubenswrapper[4826]: I0131 07:46:09.192309 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wtbb9_b672fd90-a70c-4f27-b711-e58f269efccd/kube-multus/2.log" Jan 31 07:46:14 crc kubenswrapper[4826]: I0131 07:46:14.808691 4826 scope.go:117] "RemoveContainer" containerID="382e0d20cd9d87cb099d8ef02da0a340a24566c6c856081d606e0ba9fa917d2c" Jan 31 07:46:15 crc kubenswrapper[4826]: I0131 07:46:15.227761 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wtbb9_b672fd90-a70c-4f27-b711-e58f269efccd/kube-multus/2.log" Jan 31 07:46:15 crc kubenswrapper[4826]: I0131 07:46:15.227847 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wtbb9" event={"ID":"b672fd90-a70c-4f27-b711-e58f269efccd","Type":"ContainerStarted","Data":"7874847edff9c5fb0aaaa1f2716d81b0ec0a59566cc3a9928964d178343d8338"} Jan 31 07:46:18 crc kubenswrapper[4826]: I0131 07:46:18.679380 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2rbf5" Jan 31 07:46:27 crc kubenswrapper[4826]: I0131 07:46:27.376828 4826 patch_prober.go:28] interesting pod/machine-config-daemon-8v6ng container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 07:46:27 crc kubenswrapper[4826]: I0131 07:46:27.377277 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 07:46:27 crc kubenswrapper[4826]: I0131 07:46:27.377316 4826 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" Jan 31 07:46:27 crc kubenswrapper[4826]: I0131 07:46:27.377814 4826 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"96981d99c8cc310fa2d207d4a923209b26ac15505f9c721483f2cdae8f82dca1"} pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 07:46:27 crc kubenswrapper[4826]: I0131 07:46:27.377866 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" containerID="cri-o://96981d99c8cc310fa2d207d4a923209b26ac15505f9c721483f2cdae8f82dca1" gracePeriod=600 Jan 31 07:46:28 crc kubenswrapper[4826]: I0131 07:46:28.311460 4826 generic.go:334] "Generic (PLEG): container finished" podID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerID="96981d99c8cc310fa2d207d4a923209b26ac15505f9c721483f2cdae8f82dca1" exitCode=0 Jan 31 07:46:28 crc kubenswrapper[4826]: I0131 07:46:28.312311 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" 
event={"ID":"ed10f53b-565a-4d14-a1d8-feabc15f08ea","Type":"ContainerDied","Data":"96981d99c8cc310fa2d207d4a923209b26ac15505f9c721483f2cdae8f82dca1"} Jan 31 07:46:28 crc kubenswrapper[4826]: I0131 07:46:28.312383 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" event={"ID":"ed10f53b-565a-4d14-a1d8-feabc15f08ea","Type":"ContainerStarted","Data":"9f90fb78fab9497ee3e1bd264894acb4bbed634bf52113ea4ba5640cbade7719"} Jan 31 07:46:28 crc kubenswrapper[4826]: I0131 07:46:28.312414 4826 scope.go:117] "RemoveContainer" containerID="f565241d95f4e9819f53c9108aab47ac5428c731551db742e200ccf23cd4ae76" Jan 31 07:46:32 crc kubenswrapper[4826]: I0131 07:46:32.766251 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lxr86"] Jan 31 07:46:32 crc kubenswrapper[4826]: I0131 07:46:32.768455 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lxr86" Jan 31 07:46:32 crc kubenswrapper[4826]: I0131 07:46:32.771004 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 31 07:46:32 crc kubenswrapper[4826]: I0131 07:46:32.827000 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/417dc312-217b-4eaf-9b1f-a4145c73f920-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lxr86\" (UID: \"417dc312-217b-4eaf-9b1f-a4145c73f920\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lxr86" Jan 31 07:46:32 crc kubenswrapper[4826]: I0131 07:46:32.827074 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/417dc312-217b-4eaf-9b1f-a4145c73f920-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lxr86\" (UID: \"417dc312-217b-4eaf-9b1f-a4145c73f920\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lxr86" Jan 31 07:46:32 crc kubenswrapper[4826]: I0131 07:46:32.827102 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7sjw\" (UniqueName: \"kubernetes.io/projected/417dc312-217b-4eaf-9b1f-a4145c73f920-kube-api-access-f7sjw\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lxr86\" (UID: \"417dc312-217b-4eaf-9b1f-a4145c73f920\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lxr86" Jan 31 07:46:32 crc kubenswrapper[4826]: I0131 07:46:32.834255 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lxr86"] Jan 31 07:46:32 crc kubenswrapper[4826]: I0131 07:46:32.928331 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7sjw\" (UniqueName: \"kubernetes.io/projected/417dc312-217b-4eaf-9b1f-a4145c73f920-kube-api-access-f7sjw\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lxr86\" (UID: \"417dc312-217b-4eaf-9b1f-a4145c73f920\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lxr86" Jan 31 07:46:32 crc kubenswrapper[4826]: I0131 07:46:32.928406 4826 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/417dc312-217b-4eaf-9b1f-a4145c73f920-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lxr86\" (UID: \"417dc312-217b-4eaf-9b1f-a4145c73f920\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lxr86" Jan 31 07:46:32 crc kubenswrapper[4826]: I0131 07:46:32.928645 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/417dc312-217b-4eaf-9b1f-a4145c73f920-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lxr86\" (UID: \"417dc312-217b-4eaf-9b1f-a4145c73f920\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lxr86" Jan 31 07:46:32 crc kubenswrapper[4826]: I0131 07:46:32.929847 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/417dc312-217b-4eaf-9b1f-a4145c73f920-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lxr86\" (UID: \"417dc312-217b-4eaf-9b1f-a4145c73f920\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lxr86" Jan 31 07:46:32 crc kubenswrapper[4826]: I0131 07:46:32.929927 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/417dc312-217b-4eaf-9b1f-a4145c73f920-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lxr86\" (UID: \"417dc312-217b-4eaf-9b1f-a4145c73f920\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lxr86" Jan 31 07:46:32 crc kubenswrapper[4826]: I0131 07:46:32.947008 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7sjw\" (UniqueName: \"kubernetes.io/projected/417dc312-217b-4eaf-9b1f-a4145c73f920-kube-api-access-f7sjw\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lxr86\" (UID: \"417dc312-217b-4eaf-9b1f-a4145c73f920\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lxr86" Jan 31 07:46:33 crc kubenswrapper[4826]: I0131 07:46:33.139439 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lxr86" Jan 31 07:46:33 crc kubenswrapper[4826]: I0131 07:46:33.381853 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lxr86"] Jan 31 07:46:34 crc kubenswrapper[4826]: I0131 07:46:34.353533 4826 generic.go:334] "Generic (PLEG): container finished" podID="417dc312-217b-4eaf-9b1f-a4145c73f920" containerID="4da0a360fdbafd6b0f2d205f6324889232d0f09d71fecf99cc3b2f1e4e8c46cd" exitCode=0 Jan 31 07:46:34 crc kubenswrapper[4826]: I0131 07:46:34.353629 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lxr86" event={"ID":"417dc312-217b-4eaf-9b1f-a4145c73f920","Type":"ContainerDied","Data":"4da0a360fdbafd6b0f2d205f6324889232d0f09d71fecf99cc3b2f1e4e8c46cd"} Jan 31 07:46:34 crc kubenswrapper[4826]: I0131 07:46:34.353907 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lxr86" event={"ID":"417dc312-217b-4eaf-9b1f-a4145c73f920","Type":"ContainerStarted","Data":"482826be5d3149c8820afac9392481e397a60bb52796be3aeb96e4b9f5b8926c"} Jan 31 07:46:36 crc kubenswrapper[4826]: I0131 07:46:36.365803 4826 generic.go:334] "Generic (PLEG): container finished" podID="417dc312-217b-4eaf-9b1f-a4145c73f920" containerID="8b282e0b918d9bbf646d2f68ab1bd43f319af246e73d6f6df15301952d1c8044" exitCode=0 Jan 31 07:46:36 crc kubenswrapper[4826]: I0131 07:46:36.365867 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lxr86" event={"ID":"417dc312-217b-4eaf-9b1f-a4145c73f920","Type":"ContainerDied","Data":"8b282e0b918d9bbf646d2f68ab1bd43f319af246e73d6f6df15301952d1c8044"} Jan 31 07:46:37 crc kubenswrapper[4826]: I0131 07:46:37.375013 4826 generic.go:334] "Generic (PLEG): container finished" podID="417dc312-217b-4eaf-9b1f-a4145c73f920" containerID="adcfe731d40c9cb1d91e68edca1630243ade5c5266efdbbad720db38a2f2c0f9" exitCode=0 Jan 31 07:46:37 crc kubenswrapper[4826]: I0131 07:46:37.375067 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lxr86" event={"ID":"417dc312-217b-4eaf-9b1f-a4145c73f920","Type":"ContainerDied","Data":"adcfe731d40c9cb1d91e68edca1630243ade5c5266efdbbad720db38a2f2c0f9"} Jan 31 07:46:38 crc kubenswrapper[4826]: I0131 07:46:38.692848 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lxr86" Jan 31 07:46:38 crc kubenswrapper[4826]: I0131 07:46:38.745688 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7sjw\" (UniqueName: \"kubernetes.io/projected/417dc312-217b-4eaf-9b1f-a4145c73f920-kube-api-access-f7sjw\") pod \"417dc312-217b-4eaf-9b1f-a4145c73f920\" (UID: \"417dc312-217b-4eaf-9b1f-a4145c73f920\") " Jan 31 07:46:38 crc kubenswrapper[4826]: I0131 07:46:38.745820 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/417dc312-217b-4eaf-9b1f-a4145c73f920-util\") pod \"417dc312-217b-4eaf-9b1f-a4145c73f920\" (UID: \"417dc312-217b-4eaf-9b1f-a4145c73f920\") " Jan 31 07:46:38 crc kubenswrapper[4826]: I0131 07:46:38.746006 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/417dc312-217b-4eaf-9b1f-a4145c73f920-bundle\") pod \"417dc312-217b-4eaf-9b1f-a4145c73f920\" (UID: \"417dc312-217b-4eaf-9b1f-a4145c73f920\") " Jan 31 07:46:38 crc kubenswrapper[4826]: I0131 07:46:38.746790 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/417dc312-217b-4eaf-9b1f-a4145c73f920-bundle" (OuterVolumeSpecName: "bundle") pod "417dc312-217b-4eaf-9b1f-a4145c73f920" (UID: "417dc312-217b-4eaf-9b1f-a4145c73f920"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:46:38 crc kubenswrapper[4826]: I0131 07:46:38.757432 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/417dc312-217b-4eaf-9b1f-a4145c73f920-kube-api-access-f7sjw" (OuterVolumeSpecName: "kube-api-access-f7sjw") pod "417dc312-217b-4eaf-9b1f-a4145c73f920" (UID: "417dc312-217b-4eaf-9b1f-a4145c73f920"). InnerVolumeSpecName "kube-api-access-f7sjw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:46:38 crc kubenswrapper[4826]: I0131 07:46:38.847703 4826 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/417dc312-217b-4eaf-9b1f-a4145c73f920-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:46:38 crc kubenswrapper[4826]: I0131 07:46:38.847840 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7sjw\" (UniqueName: \"kubernetes.io/projected/417dc312-217b-4eaf-9b1f-a4145c73f920-kube-api-access-f7sjw\") on node \"crc\" DevicePath \"\"" Jan 31 07:46:39 crc kubenswrapper[4826]: I0131 07:46:39.392461 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lxr86" event={"ID":"417dc312-217b-4eaf-9b1f-a4145c73f920","Type":"ContainerDied","Data":"482826be5d3149c8820afac9392481e397a60bb52796be3aeb96e4b9f5b8926c"} Jan 31 07:46:39 crc kubenswrapper[4826]: I0131 07:46:39.392527 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="482826be5d3149c8820afac9392481e397a60bb52796be3aeb96e4b9f5b8926c" Jan 31 07:46:39 crc kubenswrapper[4826]: I0131 07:46:39.392566 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lxr86" Jan 31 07:46:39 crc kubenswrapper[4826]: I0131 07:46:39.408089 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/417dc312-217b-4eaf-9b1f-a4145c73f920-util" (OuterVolumeSpecName: "util") pod "417dc312-217b-4eaf-9b1f-a4145c73f920" (UID: "417dc312-217b-4eaf-9b1f-a4145c73f920"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:46:39 crc kubenswrapper[4826]: I0131 07:46:39.454721 4826 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/417dc312-217b-4eaf-9b1f-a4145c73f920-util\") on node \"crc\" DevicePath \"\"" Jan 31 07:46:44 crc kubenswrapper[4826]: I0131 07:46:44.366518 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-sqg84"] Jan 31 07:46:44 crc kubenswrapper[4826]: E0131 07:46:44.367494 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="417dc312-217b-4eaf-9b1f-a4145c73f920" containerName="pull" Jan 31 07:46:44 crc kubenswrapper[4826]: I0131 07:46:44.367515 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="417dc312-217b-4eaf-9b1f-a4145c73f920" containerName="pull" Jan 31 07:46:44 crc kubenswrapper[4826]: E0131 07:46:44.367531 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="417dc312-217b-4eaf-9b1f-a4145c73f920" containerName="util" Jan 31 07:46:44 crc kubenswrapper[4826]: I0131 07:46:44.367541 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="417dc312-217b-4eaf-9b1f-a4145c73f920" containerName="util" Jan 31 07:46:44 crc kubenswrapper[4826]: E0131 07:46:44.367562 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="417dc312-217b-4eaf-9b1f-a4145c73f920" containerName="extract" Jan 31 07:46:44 crc kubenswrapper[4826]: I0131 07:46:44.367573 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="417dc312-217b-4eaf-9b1f-a4145c73f920" containerName="extract" Jan 31 07:46:44 crc kubenswrapper[4826]: I0131 07:46:44.367720 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="417dc312-217b-4eaf-9b1f-a4145c73f920" containerName="extract" Jan 31 07:46:44 crc kubenswrapper[4826]: I0131 07:46:44.368377 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-sqg84" Jan 31 07:46:44 crc kubenswrapper[4826]: I0131 07:46:44.370814 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-jvz8r" Jan 31 07:46:44 crc kubenswrapper[4826]: I0131 07:46:44.370831 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 31 07:46:44 crc kubenswrapper[4826]: I0131 07:46:44.370886 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 31 07:46:44 crc kubenswrapper[4826]: I0131 07:46:44.374679 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-sqg84"] Jan 31 07:46:44 crc kubenswrapper[4826]: I0131 07:46:44.522252 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsf4g\" (UniqueName: \"kubernetes.io/projected/9ee973e5-15c8-45b0-80b2-66e250ef5275-kube-api-access-vsf4g\") pod \"nmstate-operator-646758c888-sqg84\" (UID: \"9ee973e5-15c8-45b0-80b2-66e250ef5275\") " pod="openshift-nmstate/nmstate-operator-646758c888-sqg84" Jan 31 07:46:44 crc kubenswrapper[4826]: I0131 07:46:44.623550 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsf4g\" (UniqueName: \"kubernetes.io/projected/9ee973e5-15c8-45b0-80b2-66e250ef5275-kube-api-access-vsf4g\") pod \"nmstate-operator-646758c888-sqg84\" (UID: \"9ee973e5-15c8-45b0-80b2-66e250ef5275\") " pod="openshift-nmstate/nmstate-operator-646758c888-sqg84" Jan 31 07:46:44 crc kubenswrapper[4826]: I0131 07:46:44.639884 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsf4g\" (UniqueName: \"kubernetes.io/projected/9ee973e5-15c8-45b0-80b2-66e250ef5275-kube-api-access-vsf4g\") pod \"nmstate-operator-646758c888-sqg84\" (UID: \"9ee973e5-15c8-45b0-80b2-66e250ef5275\") " pod="openshift-nmstate/nmstate-operator-646758c888-sqg84" Jan 31 07:46:44 crc kubenswrapper[4826]: I0131 07:46:44.681773 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-sqg84" Jan 31 07:46:44 crc kubenswrapper[4826]: I0131 07:46:44.887157 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-sqg84"] Jan 31 07:46:44 crc kubenswrapper[4826]: W0131 07:46:44.891712 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ee973e5_15c8_45b0_80b2_66e250ef5275.slice/crio-0505b0f670f3984378432296c2162dc667076a84e9ca63de0873d3eb8e74b968 WatchSource:0}: Error finding container 0505b0f670f3984378432296c2162dc667076a84e9ca63de0873d3eb8e74b968: Status 404 returned error can't find the container with id 0505b0f670f3984378432296c2162dc667076a84e9ca63de0873d3eb8e74b968 Jan 31 07:46:45 crc kubenswrapper[4826]: I0131 07:46:45.431125 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-sqg84" event={"ID":"9ee973e5-15c8-45b0-80b2-66e250ef5275","Type":"ContainerStarted","Data":"0505b0f670f3984378432296c2162dc667076a84e9ca63de0873d3eb8e74b968"} Jan 31 07:46:47 crc kubenswrapper[4826]: I0131 07:46:47.446789 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-sqg84" event={"ID":"9ee973e5-15c8-45b0-80b2-66e250ef5275","Type":"ContainerStarted","Data":"5118d0ed23d68660256a15b486e94c60193dfa25340b604d39fc1bf2b0495457"} Jan 31 07:46:47 crc kubenswrapper[4826]: I0131 07:46:47.478031 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-sqg84" podStartSLOduration=1.7394333990000002 podStartE2EDuration="3.478010258s" podCreationTimestamp="2026-01-31 07:46:44 +0000 UTC" firstStartedPulling="2026-01-31 07:46:44.893729023 +0000 UTC m=+636.747615382" lastFinishedPulling="2026-01-31 07:46:46.632305872 +0000 UTC m=+638.486192241" observedRunningTime="2026-01-31 07:46:47.473698444 +0000 UTC m=+639.327584833" watchObservedRunningTime="2026-01-31 07:46:47.478010258 +0000 UTC m=+639.331896647" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.031803 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-djs56"] Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.033714 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-djs56" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.037441 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-ml6t4" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.052843 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-b5djj"] Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.054319 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-b5djj" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.066541 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.090842 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-djs56"] Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.098529 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-2cq5c"] Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.099594 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-2cq5c" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.115186 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-b5djj"] Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.132462 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/a87f432f-3723-4583-9e46-88b0fd950be3-nmstate-lock\") pod \"nmstate-handler-2cq5c\" (UID: \"a87f432f-3723-4583-9e46-88b0fd950be3\") " pod="openshift-nmstate/nmstate-handler-2cq5c" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.132553 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/a87f432f-3723-4583-9e46-88b0fd950be3-ovs-socket\") pod \"nmstate-handler-2cq5c\" (UID: \"a87f432f-3723-4583-9e46-88b0fd950be3\") " pod="openshift-nmstate/nmstate-handler-2cq5c" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.132581 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/4512b6ab-53d0-435f-bdfc-5f28ba454fd6-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-b5djj\" (UID: \"4512b6ab-53d0-435f-bdfc-5f28ba454fd6\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-b5djj" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.132606 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z247\" (UniqueName: \"kubernetes.io/projected/4512b6ab-53d0-435f-bdfc-5f28ba454fd6-kube-api-access-9z247\") pod \"nmstate-webhook-8474b5b9d8-b5djj\" (UID: \"4512b6ab-53d0-435f-bdfc-5f28ba454fd6\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-b5djj" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.132639 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/a87f432f-3723-4583-9e46-88b0fd950be3-dbus-socket\") pod \"nmstate-handler-2cq5c\" (UID: \"a87f432f-3723-4583-9e46-88b0fd950be3\") " pod="openshift-nmstate/nmstate-handler-2cq5c" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.132718 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr2tp\" (UniqueName: \"kubernetes.io/projected/a87f432f-3723-4583-9e46-88b0fd950be3-kube-api-access-pr2tp\") pod \"nmstate-handler-2cq5c\" (UID: \"a87f432f-3723-4583-9e46-88b0fd950be3\") " pod="openshift-nmstate/nmstate-handler-2cq5c" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.132810 4826 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sskr\" (UniqueName: \"kubernetes.io/projected/7920d8f0-e7dd-4f3b-aa42-5301bf4ffa3f-kube-api-access-5sskr\") pod \"nmstate-metrics-54757c584b-djs56\" (UID: \"7920d8f0-e7dd-4f3b-aa42-5301bf4ffa3f\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-djs56" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.157673 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-778w9"] Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.158334 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-778w9" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.161946 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-clp99" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.162270 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.163956 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.170599 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-778w9"] Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.233864 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfptj\" (UniqueName: \"kubernetes.io/projected/ccde3ca0-fa50-4a94-a1d3-5e9017e8cdf1-kube-api-access-jfptj\") pod \"nmstate-console-plugin-7754f76f8b-778w9\" (UID: \"ccde3ca0-fa50-4a94-a1d3-5e9017e8cdf1\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-778w9" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.233938 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/a87f432f-3723-4583-9e46-88b0fd950be3-ovs-socket\") pod \"nmstate-handler-2cq5c\" (UID: \"a87f432f-3723-4583-9e46-88b0fd950be3\") " pod="openshift-nmstate/nmstate-handler-2cq5c" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.234018 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/a87f432f-3723-4583-9e46-88b0fd950be3-ovs-socket\") pod \"nmstate-handler-2cq5c\" (UID: \"a87f432f-3723-4583-9e46-88b0fd950be3\") " pod="openshift-nmstate/nmstate-handler-2cq5c" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.234035 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/4512b6ab-53d0-435f-bdfc-5f28ba454fd6-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-b5djj\" (UID: \"4512b6ab-53d0-435f-bdfc-5f28ba454fd6\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-b5djj" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.234099 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z247\" (UniqueName: \"kubernetes.io/projected/4512b6ab-53d0-435f-bdfc-5f28ba454fd6-kube-api-access-9z247\") pod \"nmstate-webhook-8474b5b9d8-b5djj\" (UID: \"4512b6ab-53d0-435f-bdfc-5f28ba454fd6\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-b5djj" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.234142 4826 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/a87f432f-3723-4583-9e46-88b0fd950be3-dbus-socket\") pod \"nmstate-handler-2cq5c\" (UID: \"a87f432f-3723-4583-9e46-88b0fd950be3\") " pod="openshift-nmstate/nmstate-handler-2cq5c" Jan 31 07:46:53 crc kubenswrapper[4826]: E0131 07:46:53.234160 4826 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.234176 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr2tp\" (UniqueName: \"kubernetes.io/projected/a87f432f-3723-4583-9e46-88b0fd950be3-kube-api-access-pr2tp\") pod \"nmstate-handler-2cq5c\" (UID: \"a87f432f-3723-4583-9e46-88b0fd950be3\") " pod="openshift-nmstate/nmstate-handler-2cq5c" Jan 31 07:46:53 crc kubenswrapper[4826]: E0131 07:46:53.234221 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4512b6ab-53d0-435f-bdfc-5f28ba454fd6-tls-key-pair podName:4512b6ab-53d0-435f-bdfc-5f28ba454fd6 nodeName:}" failed. No retries permitted until 2026-01-31 07:46:53.734197348 +0000 UTC m=+645.588083717 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/4512b6ab-53d0-435f-bdfc-5f28ba454fd6-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-b5djj" (UID: "4512b6ab-53d0-435f-bdfc-5f28ba454fd6") : secret "openshift-nmstate-webhook" not found Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.234246 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/ccde3ca0-fa50-4a94-a1d3-5e9017e8cdf1-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-778w9\" (UID: \"ccde3ca0-fa50-4a94-a1d3-5e9017e8cdf1\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-778w9" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.234289 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sskr\" (UniqueName: \"kubernetes.io/projected/7920d8f0-e7dd-4f3b-aa42-5301bf4ffa3f-kube-api-access-5sskr\") pod \"nmstate-metrics-54757c584b-djs56\" (UID: \"7920d8f0-e7dd-4f3b-aa42-5301bf4ffa3f\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-djs56" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.234327 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ccde3ca0-fa50-4a94-a1d3-5e9017e8cdf1-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-778w9\" (UID: \"ccde3ca0-fa50-4a94-a1d3-5e9017e8cdf1\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-778w9" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.234372 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/a87f432f-3723-4583-9e46-88b0fd950be3-nmstate-lock\") pod \"nmstate-handler-2cq5c\" (UID: \"a87f432f-3723-4583-9e46-88b0fd950be3\") " pod="openshift-nmstate/nmstate-handler-2cq5c" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.234464 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/a87f432f-3723-4583-9e46-88b0fd950be3-nmstate-lock\") pod \"nmstate-handler-2cq5c\" (UID: \"a87f432f-3723-4583-9e46-88b0fd950be3\") " 
pod="openshift-nmstate/nmstate-handler-2cq5c" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.234507 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/a87f432f-3723-4583-9e46-88b0fd950be3-dbus-socket\") pod \"nmstate-handler-2cq5c\" (UID: \"a87f432f-3723-4583-9e46-88b0fd950be3\") " pod="openshift-nmstate/nmstate-handler-2cq5c" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.253042 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr2tp\" (UniqueName: \"kubernetes.io/projected/a87f432f-3723-4583-9e46-88b0fd950be3-kube-api-access-pr2tp\") pod \"nmstate-handler-2cq5c\" (UID: \"a87f432f-3723-4583-9e46-88b0fd950be3\") " pod="openshift-nmstate/nmstate-handler-2cq5c" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.254138 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sskr\" (UniqueName: \"kubernetes.io/projected/7920d8f0-e7dd-4f3b-aa42-5301bf4ffa3f-kube-api-access-5sskr\") pod \"nmstate-metrics-54757c584b-djs56\" (UID: \"7920d8f0-e7dd-4f3b-aa42-5301bf4ffa3f\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-djs56" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.254699 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z247\" (UniqueName: \"kubernetes.io/projected/4512b6ab-53d0-435f-bdfc-5f28ba454fd6-kube-api-access-9z247\") pod \"nmstate-webhook-8474b5b9d8-b5djj\" (UID: \"4512b6ab-53d0-435f-bdfc-5f28ba454fd6\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-b5djj" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.335295 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ccde3ca0-fa50-4a94-a1d3-5e9017e8cdf1-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-778w9\" (UID: \"ccde3ca0-fa50-4a94-a1d3-5e9017e8cdf1\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-778w9" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.335362 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfptj\" (UniqueName: \"kubernetes.io/projected/ccde3ca0-fa50-4a94-a1d3-5e9017e8cdf1-kube-api-access-jfptj\") pod \"nmstate-console-plugin-7754f76f8b-778w9\" (UID: \"ccde3ca0-fa50-4a94-a1d3-5e9017e8cdf1\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-778w9" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.335418 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/ccde3ca0-fa50-4a94-a1d3-5e9017e8cdf1-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-778w9\" (UID: \"ccde3ca0-fa50-4a94-a1d3-5e9017e8cdf1\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-778w9" Jan 31 07:46:53 crc kubenswrapper[4826]: E0131 07:46:53.335467 4826 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 31 07:46:53 crc kubenswrapper[4826]: E0131 07:46:53.335538 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ccde3ca0-fa50-4a94-a1d3-5e9017e8cdf1-plugin-serving-cert podName:ccde3ca0-fa50-4a94-a1d3-5e9017e8cdf1 nodeName:}" failed. No retries permitted until 2026-01-31 07:46:53.835505467 +0000 UTC m=+645.689391826 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/ccde3ca0-fa50-4a94-a1d3-5e9017e8cdf1-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-778w9" (UID: "ccde3ca0-fa50-4a94-a1d3-5e9017e8cdf1") : secret "plugin-serving-cert" not found Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.336247 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/ccde3ca0-fa50-4a94-a1d3-5e9017e8cdf1-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-778w9\" (UID: \"ccde3ca0-fa50-4a94-a1d3-5e9017e8cdf1\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-778w9" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.349848 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-799976d686-l6lw6"] Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.350767 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-799976d686-l6lw6" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.357071 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-djs56" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.362962 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-799976d686-l6lw6"] Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.365344 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfptj\" (UniqueName: \"kubernetes.io/projected/ccde3ca0-fa50-4a94-a1d3-5e9017e8cdf1-kube-api-access-jfptj\") pod \"nmstate-console-plugin-7754f76f8b-778w9\" (UID: \"ccde3ca0-fa50-4a94-a1d3-5e9017e8cdf1\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-778w9" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.413512 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-2cq5c" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.491009 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-2cq5c" event={"ID":"a87f432f-3723-4583-9e46-88b0fd950be3","Type":"ContainerStarted","Data":"6206741acc0403392c5862b53363491c968bee7dc70b9e0c8785c5340422d246"} Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.537085 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fd677738-8b0d-4de3-a10e-754a24bf4f9d-oauth-serving-cert\") pod \"console-799976d686-l6lw6\" (UID: \"fd677738-8b0d-4de3-a10e-754a24bf4f9d\") " pod="openshift-console/console-799976d686-l6lw6" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.537141 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fd677738-8b0d-4de3-a10e-754a24bf4f9d-console-serving-cert\") pod \"console-799976d686-l6lw6\" (UID: \"fd677738-8b0d-4de3-a10e-754a24bf4f9d\") " pod="openshift-console/console-799976d686-l6lw6" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.537163 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fd677738-8b0d-4de3-a10e-754a24bf4f9d-service-ca\") pod \"console-799976d686-l6lw6\" (UID: \"fd677738-8b0d-4de3-a10e-754a24bf4f9d\") " pod="openshift-console/console-799976d686-l6lw6" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.537193 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fd677738-8b0d-4de3-a10e-754a24bf4f9d-console-config\") pod \"console-799976d686-l6lw6\" (UID: \"fd677738-8b0d-4de3-a10e-754a24bf4f9d\") " pod="openshift-console/console-799976d686-l6lw6" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.537244 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd677738-8b0d-4de3-a10e-754a24bf4f9d-trusted-ca-bundle\") pod \"console-799976d686-l6lw6\" (UID: \"fd677738-8b0d-4de3-a10e-754a24bf4f9d\") " pod="openshift-console/console-799976d686-l6lw6" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.537265 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fd677738-8b0d-4de3-a10e-754a24bf4f9d-console-oauth-config\") pod \"console-799976d686-l6lw6\" (UID: \"fd677738-8b0d-4de3-a10e-754a24bf4f9d\") " pod="openshift-console/console-799976d686-l6lw6" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.537333 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k2z9\" (UniqueName: \"kubernetes.io/projected/fd677738-8b0d-4de3-a10e-754a24bf4f9d-kube-api-access-2k2z9\") pod \"console-799976d686-l6lw6\" (UID: \"fd677738-8b0d-4de3-a10e-754a24bf4f9d\") " pod="openshift-console/console-799976d686-l6lw6" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.583048 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-djs56"] Jan 31 07:46:53 crc kubenswrapper[4826]: W0131 07:46:53.587596 4826 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7920d8f0_e7dd_4f3b_aa42_5301bf4ffa3f.slice/crio-56bd7836f8e0b553a3c63a9265c61be4a711eab11f1cc1f2f13116b5e0b3451d WatchSource:0}: Error finding container 56bd7836f8e0b553a3c63a9265c61be4a711eab11f1cc1f2f13116b5e0b3451d: Status 404 returned error can't find the container with id 56bd7836f8e0b553a3c63a9265c61be4a711eab11f1cc1f2f13116b5e0b3451d Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.638069 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2k2z9\" (UniqueName: \"kubernetes.io/projected/fd677738-8b0d-4de3-a10e-754a24bf4f9d-kube-api-access-2k2z9\") pod \"console-799976d686-l6lw6\" (UID: \"fd677738-8b0d-4de3-a10e-754a24bf4f9d\") " pod="openshift-console/console-799976d686-l6lw6" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.638123 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fd677738-8b0d-4de3-a10e-754a24bf4f9d-oauth-serving-cert\") pod \"console-799976d686-l6lw6\" (UID: \"fd677738-8b0d-4de3-a10e-754a24bf4f9d\") " pod="openshift-console/console-799976d686-l6lw6" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.638156 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fd677738-8b0d-4de3-a10e-754a24bf4f9d-console-serving-cert\") pod \"console-799976d686-l6lw6\" (UID: \"fd677738-8b0d-4de3-a10e-754a24bf4f9d\") " pod="openshift-console/console-799976d686-l6lw6" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.638175 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fd677738-8b0d-4de3-a10e-754a24bf4f9d-service-ca\") pod \"console-799976d686-l6lw6\" (UID: \"fd677738-8b0d-4de3-a10e-754a24bf4f9d\") " pod="openshift-console/console-799976d686-l6lw6" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.638202 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fd677738-8b0d-4de3-a10e-754a24bf4f9d-console-config\") pod \"console-799976d686-l6lw6\" (UID: \"fd677738-8b0d-4de3-a10e-754a24bf4f9d\") " pod="openshift-console/console-799976d686-l6lw6" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.638220 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd677738-8b0d-4de3-a10e-754a24bf4f9d-trusted-ca-bundle\") pod \"console-799976d686-l6lw6\" (UID: \"fd677738-8b0d-4de3-a10e-754a24bf4f9d\") " pod="openshift-console/console-799976d686-l6lw6" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.638238 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fd677738-8b0d-4de3-a10e-754a24bf4f9d-console-oauth-config\") pod \"console-799976d686-l6lw6\" (UID: \"fd677738-8b0d-4de3-a10e-754a24bf4f9d\") " pod="openshift-console/console-799976d686-l6lw6" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.639334 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fd677738-8b0d-4de3-a10e-754a24bf4f9d-console-config\") pod \"console-799976d686-l6lw6\" (UID: \"fd677738-8b0d-4de3-a10e-754a24bf4f9d\") " 
pod="openshift-console/console-799976d686-l6lw6" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.639408 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd677738-8b0d-4de3-a10e-754a24bf4f9d-trusted-ca-bundle\") pod \"console-799976d686-l6lw6\" (UID: \"fd677738-8b0d-4de3-a10e-754a24bf4f9d\") " pod="openshift-console/console-799976d686-l6lw6" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.639653 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fd677738-8b0d-4de3-a10e-754a24bf4f9d-service-ca\") pod \"console-799976d686-l6lw6\" (UID: \"fd677738-8b0d-4de3-a10e-754a24bf4f9d\") " pod="openshift-console/console-799976d686-l6lw6" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.640423 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fd677738-8b0d-4de3-a10e-754a24bf4f9d-oauth-serving-cert\") pod \"console-799976d686-l6lw6\" (UID: \"fd677738-8b0d-4de3-a10e-754a24bf4f9d\") " pod="openshift-console/console-799976d686-l6lw6" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.642804 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fd677738-8b0d-4de3-a10e-754a24bf4f9d-console-oauth-config\") pod \"console-799976d686-l6lw6\" (UID: \"fd677738-8b0d-4de3-a10e-754a24bf4f9d\") " pod="openshift-console/console-799976d686-l6lw6" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.643393 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fd677738-8b0d-4de3-a10e-754a24bf4f9d-console-serving-cert\") pod \"console-799976d686-l6lw6\" (UID: \"fd677738-8b0d-4de3-a10e-754a24bf4f9d\") " pod="openshift-console/console-799976d686-l6lw6" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.658638 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2k2z9\" (UniqueName: \"kubernetes.io/projected/fd677738-8b0d-4de3-a10e-754a24bf4f9d-kube-api-access-2k2z9\") pod \"console-799976d686-l6lw6\" (UID: \"fd677738-8b0d-4de3-a10e-754a24bf4f9d\") " pod="openshift-console/console-799976d686-l6lw6" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.696556 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-799976d686-l6lw6" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.738847 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/4512b6ab-53d0-435f-bdfc-5f28ba454fd6-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-b5djj\" (UID: \"4512b6ab-53d0-435f-bdfc-5f28ba454fd6\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-b5djj" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.743708 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/4512b6ab-53d0-435f-bdfc-5f28ba454fd6-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-b5djj\" (UID: \"4512b6ab-53d0-435f-bdfc-5f28ba454fd6\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-b5djj" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.841154 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ccde3ca0-fa50-4a94-a1d3-5e9017e8cdf1-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-778w9\" (UID: \"ccde3ca0-fa50-4a94-a1d3-5e9017e8cdf1\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-778w9" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.846230 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ccde3ca0-fa50-4a94-a1d3-5e9017e8cdf1-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-778w9\" (UID: \"ccde3ca0-fa50-4a94-a1d3-5e9017e8cdf1\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-778w9" Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.954291 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-799976d686-l6lw6"] Jan 31 07:46:53 crc kubenswrapper[4826]: W0131 07:46:53.960540 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd677738_8b0d_4de3_a10e_754a24bf4f9d.slice/crio-ca236781b05b7106e2e5db51bed66e2ecf068eb24cb05387243d9c09aca66eeb WatchSource:0}: Error finding container ca236781b05b7106e2e5db51bed66e2ecf068eb24cb05387243d9c09aca66eeb: Status 404 returned error can't find the container with id ca236781b05b7106e2e5db51bed66e2ecf068eb24cb05387243d9c09aca66eeb Jan 31 07:46:53 crc kubenswrapper[4826]: I0131 07:46:53.975900 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-b5djj" Jan 31 07:46:54 crc kubenswrapper[4826]: I0131 07:46:54.071465 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-778w9" Jan 31 07:46:54 crc kubenswrapper[4826]: I0131 07:46:54.180882 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-b5djj"] Jan 31 07:46:54 crc kubenswrapper[4826]: W0131 07:46:54.190390 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4512b6ab_53d0_435f_bdfc_5f28ba454fd6.slice/crio-d44f37e7809c1071592358b1aea1eaefb05992d4b1e5e990c93dc54d862a1cfe WatchSource:0}: Error finding container d44f37e7809c1071592358b1aea1eaefb05992d4b1e5e990c93dc54d862a1cfe: Status 404 returned error can't find the container with id d44f37e7809c1071592358b1aea1eaefb05992d4b1e5e990c93dc54d862a1cfe Jan 31 07:46:54 crc kubenswrapper[4826]: I0131 07:46:54.318832 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-778w9"] Jan 31 07:46:54 crc kubenswrapper[4826]: W0131 07:46:54.322756 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podccde3ca0_fa50_4a94_a1d3_5e9017e8cdf1.slice/crio-845d66e80510e8b6063de67dec8f15a3427b083dc012d518c3e4cb4c3340a1a8 WatchSource:0}: Error finding container 845d66e80510e8b6063de67dec8f15a3427b083dc012d518c3e4cb4c3340a1a8: Status 404 returned error can't find the container with id 845d66e80510e8b6063de67dec8f15a3427b083dc012d518c3e4cb4c3340a1a8 Jan 31 07:46:54 crc kubenswrapper[4826]: I0131 07:46:54.496364 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-778w9" event={"ID":"ccde3ca0-fa50-4a94-a1d3-5e9017e8cdf1","Type":"ContainerStarted","Data":"845d66e80510e8b6063de67dec8f15a3427b083dc012d518c3e4cb4c3340a1a8"} Jan 31 07:46:54 crc kubenswrapper[4826]: I0131 07:46:54.497763 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-799976d686-l6lw6" event={"ID":"fd677738-8b0d-4de3-a10e-754a24bf4f9d","Type":"ContainerStarted","Data":"486789bb79b6aeb6f0c1ecf0b291721a2a3a1bb2837037d61ecbba15dcf23aee"} Jan 31 07:46:54 crc kubenswrapper[4826]: I0131 07:46:54.497809 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-799976d686-l6lw6" event={"ID":"fd677738-8b0d-4de3-a10e-754a24bf4f9d","Type":"ContainerStarted","Data":"ca236781b05b7106e2e5db51bed66e2ecf068eb24cb05387243d9c09aca66eeb"} Jan 31 07:46:54 crc kubenswrapper[4826]: I0131 07:46:54.498870 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-djs56" event={"ID":"7920d8f0-e7dd-4f3b-aa42-5301bf4ffa3f","Type":"ContainerStarted","Data":"56bd7836f8e0b553a3c63a9265c61be4a711eab11f1cc1f2f13116b5e0b3451d"} Jan 31 07:46:54 crc kubenswrapper[4826]: I0131 07:46:54.500066 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-b5djj" event={"ID":"4512b6ab-53d0-435f-bdfc-5f28ba454fd6","Type":"ContainerStarted","Data":"d44f37e7809c1071592358b1aea1eaefb05992d4b1e5e990c93dc54d862a1cfe"} Jan 31 07:46:54 crc kubenswrapper[4826]: I0131 07:46:54.516845 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-799976d686-l6lw6" podStartSLOduration=1.516823927 podStartE2EDuration="1.516823927s" podCreationTimestamp="2026-01-31 07:46:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-01-31 07:46:54.510729503 +0000 UTC m=+646.364615882" watchObservedRunningTime="2026-01-31 07:46:54.516823927 +0000 UTC m=+646.370710286" Jan 31 07:46:57 crc kubenswrapper[4826]: I0131 07:46:57.526409 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-djs56" event={"ID":"7920d8f0-e7dd-4f3b-aa42-5301bf4ffa3f","Type":"ContainerStarted","Data":"5c0b67c23a36e0410aadfde60c2f31f923bd8617d972660b684eaea7806d1c19"} Jan 31 07:46:57 crc kubenswrapper[4826]: I0131 07:46:57.529132 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-b5djj" event={"ID":"4512b6ab-53d0-435f-bdfc-5f28ba454fd6","Type":"ContainerStarted","Data":"b3886d04a7e9bbf10904fdd2f8bff3cd8d6cd50650bf309e9a8dafa4894db278"} Jan 31 07:46:57 crc kubenswrapper[4826]: I0131 07:46:57.529224 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-b5djj" Jan 31 07:46:57 crc kubenswrapper[4826]: I0131 07:46:57.530773 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-778w9" event={"ID":"ccde3ca0-fa50-4a94-a1d3-5e9017e8cdf1","Type":"ContainerStarted","Data":"006d4077744d3265473e53a1c6f27409599d6e1dee1d90065354aa385ec124b0"} Jan 31 07:46:57 crc kubenswrapper[4826]: I0131 07:46:57.533167 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-2cq5c" event={"ID":"a87f432f-3723-4583-9e46-88b0fd950be3","Type":"ContainerStarted","Data":"dae893e68037c20407a6508330e42e123e921a471aa2d1563823ba0cde576b2e"} Jan 31 07:46:57 crc kubenswrapper[4826]: I0131 07:46:57.533285 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-2cq5c" Jan 31 07:46:57 crc kubenswrapper[4826]: I0131 07:46:57.559631 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-b5djj" podStartSLOduration=2.316561757 podStartE2EDuration="4.559611676s" podCreationTimestamp="2026-01-31 07:46:53 +0000 UTC" firstStartedPulling="2026-01-31 07:46:54.193126653 +0000 UTC m=+646.047013012" lastFinishedPulling="2026-01-31 07:46:56.436176562 +0000 UTC m=+648.290062931" observedRunningTime="2026-01-31 07:46:57.554802638 +0000 UTC m=+649.408689077" watchObservedRunningTime="2026-01-31 07:46:57.559611676 +0000 UTC m=+649.413498035" Jan 31 07:46:57 crc kubenswrapper[4826]: I0131 07:46:57.574247 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-2cq5c" podStartSLOduration=1.570517154 podStartE2EDuration="4.574227564s" podCreationTimestamp="2026-01-31 07:46:53 +0000 UTC" firstStartedPulling="2026-01-31 07:46:53.43271773 +0000 UTC m=+645.286604079" lastFinishedPulling="2026-01-31 07:46:56.4364281 +0000 UTC m=+648.290314489" observedRunningTime="2026-01-31 07:46:57.571309491 +0000 UTC m=+649.425195910" watchObservedRunningTime="2026-01-31 07:46:57.574227564 +0000 UTC m=+649.428113923" Jan 31 07:46:57 crc kubenswrapper[4826]: I0131 07:46:57.590704 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-778w9" podStartSLOduration=2.478936333 podStartE2EDuration="4.590687205s" podCreationTimestamp="2026-01-31 07:46:53 +0000 UTC" firstStartedPulling="2026-01-31 07:46:54.324612156 +0000 UTC m=+646.178498515" lastFinishedPulling="2026-01-31 07:46:56.436363018 +0000 
UTC m=+648.290249387" observedRunningTime="2026-01-31 07:46:57.584354394 +0000 UTC m=+649.438240753" watchObservedRunningTime="2026-01-31 07:46:57.590687205 +0000 UTC m=+649.444573564" Jan 31 07:46:59 crc kubenswrapper[4826]: I0131 07:46:59.551250 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-djs56" event={"ID":"7920d8f0-e7dd-4f3b-aa42-5301bf4ffa3f","Type":"ContainerStarted","Data":"d24c9ab2750b354912897e33bb8a254d2607907e82519f4b0f679c7de7dfc394"} Jan 31 07:47:03 crc kubenswrapper[4826]: I0131 07:47:03.452753 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-2cq5c" Jan 31 07:47:03 crc kubenswrapper[4826]: I0131 07:47:03.485098 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-djs56" podStartSLOduration=5.60433604 podStartE2EDuration="10.485061502s" podCreationTimestamp="2026-01-31 07:46:53 +0000 UTC" firstStartedPulling="2026-01-31 07:46:53.589491687 +0000 UTC m=+645.443378066" lastFinishedPulling="2026-01-31 07:46:58.470217169 +0000 UTC m=+650.324103528" observedRunningTime="2026-01-31 07:46:59.567542406 +0000 UTC m=+651.421428845" watchObservedRunningTime="2026-01-31 07:47:03.485061502 +0000 UTC m=+655.338947901" Jan 31 07:47:03 crc kubenswrapper[4826]: I0131 07:47:03.696934 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-799976d686-l6lw6" Jan 31 07:47:03 crc kubenswrapper[4826]: I0131 07:47:03.697022 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-799976d686-l6lw6" Jan 31 07:47:03 crc kubenswrapper[4826]: I0131 07:47:03.705167 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-799976d686-l6lw6" Jan 31 07:47:04 crc kubenswrapper[4826]: I0131 07:47:04.590324 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-799976d686-l6lw6" Jan 31 07:47:04 crc kubenswrapper[4826]: I0131 07:47:04.635936 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-hkw8j"] Jan 31 07:47:13 crc kubenswrapper[4826]: I0131 07:47:13.987541 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-b5djj" Jan 31 07:47:27 crc kubenswrapper[4826]: I0131 07:47:27.926651 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gc5z"] Jan 31 07:47:27 crc kubenswrapper[4826]: I0131 07:47:27.928316 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gc5z" Jan 31 07:47:27 crc kubenswrapper[4826]: I0131 07:47:27.931155 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 31 07:47:27 crc kubenswrapper[4826]: I0131 07:47:27.946115 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gc5z"] Jan 31 07:47:28 crc kubenswrapper[4826]: I0131 07:47:28.030636 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb48v\" (UniqueName: \"kubernetes.io/projected/59e8fc46-08a4-470a-be23-f53dbd0831d0-kube-api-access-zb48v\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gc5z\" (UID: \"59e8fc46-08a4-470a-be23-f53dbd0831d0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gc5z" Jan 31 07:47:28 crc kubenswrapper[4826]: I0131 07:47:28.030706 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/59e8fc46-08a4-470a-be23-f53dbd0831d0-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gc5z\" (UID: \"59e8fc46-08a4-470a-be23-f53dbd0831d0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gc5z" Jan 31 07:47:28 crc kubenswrapper[4826]: I0131 07:47:28.030746 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/59e8fc46-08a4-470a-be23-f53dbd0831d0-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gc5z\" (UID: \"59e8fc46-08a4-470a-be23-f53dbd0831d0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gc5z" Jan 31 07:47:28 crc kubenswrapper[4826]: I0131 07:47:28.132138 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zb48v\" (UniqueName: \"kubernetes.io/projected/59e8fc46-08a4-470a-be23-f53dbd0831d0-kube-api-access-zb48v\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gc5z\" (UID: \"59e8fc46-08a4-470a-be23-f53dbd0831d0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gc5z" Jan 31 07:47:28 crc kubenswrapper[4826]: I0131 07:47:28.132210 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/59e8fc46-08a4-470a-be23-f53dbd0831d0-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gc5z\" (UID: \"59e8fc46-08a4-470a-be23-f53dbd0831d0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gc5z" Jan 31 07:47:28 crc kubenswrapper[4826]: I0131 07:47:28.132241 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/59e8fc46-08a4-470a-be23-f53dbd0831d0-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gc5z\" (UID: \"59e8fc46-08a4-470a-be23-f53dbd0831d0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gc5z" Jan 31 07:47:28 crc kubenswrapper[4826]: I0131 07:47:28.132688 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/59e8fc46-08a4-470a-be23-f53dbd0831d0-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gc5z\" (UID: \"59e8fc46-08a4-470a-be23-f53dbd0831d0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gc5z" Jan 31 07:47:28 crc kubenswrapper[4826]: I0131 07:47:28.133025 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/59e8fc46-08a4-470a-be23-f53dbd0831d0-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gc5z\" (UID: \"59e8fc46-08a4-470a-be23-f53dbd0831d0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gc5z" Jan 31 07:47:28 crc kubenswrapper[4826]: I0131 07:47:28.166087 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zb48v\" (UniqueName: \"kubernetes.io/projected/59e8fc46-08a4-470a-be23-f53dbd0831d0-kube-api-access-zb48v\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gc5z\" (UID: \"59e8fc46-08a4-470a-be23-f53dbd0831d0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gc5z" Jan 31 07:47:28 crc kubenswrapper[4826]: I0131 07:47:28.254284 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gc5z" Jan 31 07:47:28 crc kubenswrapper[4826]: I0131 07:47:28.473913 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gc5z"] Jan 31 07:47:28 crc kubenswrapper[4826]: I0131 07:47:28.760697 4826 generic.go:334] "Generic (PLEG): container finished" podID="59e8fc46-08a4-470a-be23-f53dbd0831d0" containerID="44b9ffc3710f10d538358953e8a622f09a627d797261f8fb9d3c680646f85526" exitCode=0 Jan 31 07:47:28 crc kubenswrapper[4826]: I0131 07:47:28.760747 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gc5z" event={"ID":"59e8fc46-08a4-470a-be23-f53dbd0831d0","Type":"ContainerDied","Data":"44b9ffc3710f10d538358953e8a622f09a627d797261f8fb9d3c680646f85526"} Jan 31 07:47:28 crc kubenswrapper[4826]: I0131 07:47:28.760777 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gc5z" event={"ID":"59e8fc46-08a4-470a-be23-f53dbd0831d0","Type":"ContainerStarted","Data":"7d9771746fd73b690fcf5cbe70b743dcef72ba31923069616ae5149d29881e81"} Jan 31 07:47:29 crc kubenswrapper[4826]: I0131 07:47:29.699683 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-hkw8j" podUID="1482e43a-84a4-42ed-a605-37cc519dd5ef" containerName="console" containerID="cri-o://82588ffd5423005c74768fcd38c4e8356f7e845ad18d4cdcb22f693bddaad45a" gracePeriod=15 Jan 31 07:47:30 crc kubenswrapper[4826]: I0131 07:47:30.138047 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-hkw8j_1482e43a-84a4-42ed-a605-37cc519dd5ef/console/0.log" Jan 31 07:47:30 crc kubenswrapper[4826]: I0131 07:47:30.138102 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-hkw8j" Jan 31 07:47:30 crc kubenswrapper[4826]: I0131 07:47:30.266186 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1482e43a-84a4-42ed-a605-37cc519dd5ef-oauth-serving-cert\") pod \"1482e43a-84a4-42ed-a605-37cc519dd5ef\" (UID: \"1482e43a-84a4-42ed-a605-37cc519dd5ef\") " Jan 31 07:47:30 crc kubenswrapper[4826]: I0131 07:47:30.266624 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1482e43a-84a4-42ed-a605-37cc519dd5ef-console-config\") pod \"1482e43a-84a4-42ed-a605-37cc519dd5ef\" (UID: \"1482e43a-84a4-42ed-a605-37cc519dd5ef\") " Jan 31 07:47:30 crc kubenswrapper[4826]: I0131 07:47:30.266672 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1482e43a-84a4-42ed-a605-37cc519dd5ef-trusted-ca-bundle\") pod \"1482e43a-84a4-42ed-a605-37cc519dd5ef\" (UID: \"1482e43a-84a4-42ed-a605-37cc519dd5ef\") " Jan 31 07:47:30 crc kubenswrapper[4826]: I0131 07:47:30.266695 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1482e43a-84a4-42ed-a605-37cc519dd5ef-console-serving-cert\") pod \"1482e43a-84a4-42ed-a605-37cc519dd5ef\" (UID: \"1482e43a-84a4-42ed-a605-37cc519dd5ef\") " Jan 31 07:47:30 crc kubenswrapper[4826]: I0131 07:47:30.266723 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28lqd\" (UniqueName: \"kubernetes.io/projected/1482e43a-84a4-42ed-a605-37cc519dd5ef-kube-api-access-28lqd\") pod \"1482e43a-84a4-42ed-a605-37cc519dd5ef\" (UID: \"1482e43a-84a4-42ed-a605-37cc519dd5ef\") " Jan 31 07:47:30 crc kubenswrapper[4826]: I0131 07:47:30.266750 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1482e43a-84a4-42ed-a605-37cc519dd5ef-service-ca\") pod \"1482e43a-84a4-42ed-a605-37cc519dd5ef\" (UID: \"1482e43a-84a4-42ed-a605-37cc519dd5ef\") " Jan 31 07:47:30 crc kubenswrapper[4826]: I0131 07:47:30.266798 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1482e43a-84a4-42ed-a605-37cc519dd5ef-console-oauth-config\") pod \"1482e43a-84a4-42ed-a605-37cc519dd5ef\" (UID: \"1482e43a-84a4-42ed-a605-37cc519dd5ef\") " Jan 31 07:47:30 crc kubenswrapper[4826]: I0131 07:47:30.267986 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1482e43a-84a4-42ed-a605-37cc519dd5ef-console-config" (OuterVolumeSpecName: "console-config") pod "1482e43a-84a4-42ed-a605-37cc519dd5ef" (UID: "1482e43a-84a4-42ed-a605-37cc519dd5ef"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:47:30 crc kubenswrapper[4826]: I0131 07:47:30.268002 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1482e43a-84a4-42ed-a605-37cc519dd5ef-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1482e43a-84a4-42ed-a605-37cc519dd5ef" (UID: "1482e43a-84a4-42ed-a605-37cc519dd5ef"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:47:30 crc kubenswrapper[4826]: I0131 07:47:30.268008 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1482e43a-84a4-42ed-a605-37cc519dd5ef-service-ca" (OuterVolumeSpecName: "service-ca") pod "1482e43a-84a4-42ed-a605-37cc519dd5ef" (UID: "1482e43a-84a4-42ed-a605-37cc519dd5ef"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:47:30 crc kubenswrapper[4826]: I0131 07:47:30.269684 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1482e43a-84a4-42ed-a605-37cc519dd5ef-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "1482e43a-84a4-42ed-a605-37cc519dd5ef" (UID: "1482e43a-84a4-42ed-a605-37cc519dd5ef"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:47:30 crc kubenswrapper[4826]: I0131 07:47:30.273282 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1482e43a-84a4-42ed-a605-37cc519dd5ef-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "1482e43a-84a4-42ed-a605-37cc519dd5ef" (UID: "1482e43a-84a4-42ed-a605-37cc519dd5ef"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:47:30 crc kubenswrapper[4826]: I0131 07:47:30.273718 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1482e43a-84a4-42ed-a605-37cc519dd5ef-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "1482e43a-84a4-42ed-a605-37cc519dd5ef" (UID: "1482e43a-84a4-42ed-a605-37cc519dd5ef"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:47:30 crc kubenswrapper[4826]: I0131 07:47:30.274271 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1482e43a-84a4-42ed-a605-37cc519dd5ef-kube-api-access-28lqd" (OuterVolumeSpecName: "kube-api-access-28lqd") pod "1482e43a-84a4-42ed-a605-37cc519dd5ef" (UID: "1482e43a-84a4-42ed-a605-37cc519dd5ef"). InnerVolumeSpecName "kube-api-access-28lqd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:47:30 crc kubenswrapper[4826]: I0131 07:47:30.368419 4826 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1482e43a-84a4-42ed-a605-37cc519dd5ef-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:47:30 crc kubenswrapper[4826]: I0131 07:47:30.368478 4826 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1482e43a-84a4-42ed-a605-37cc519dd5ef-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:47:30 crc kubenswrapper[4826]: I0131 07:47:30.368505 4826 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1482e43a-84a4-42ed-a605-37cc519dd5ef-console-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:47:30 crc kubenswrapper[4826]: I0131 07:47:30.368529 4826 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1482e43a-84a4-42ed-a605-37cc519dd5ef-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:47:30 crc kubenswrapper[4826]: I0131 07:47:30.368551 4826 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1482e43a-84a4-42ed-a605-37cc519dd5ef-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 07:47:30 crc kubenswrapper[4826]: I0131 07:47:30.368573 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28lqd\" (UniqueName: \"kubernetes.io/projected/1482e43a-84a4-42ed-a605-37cc519dd5ef-kube-api-access-28lqd\") on node \"crc\" DevicePath \"\"" Jan 31 07:47:30 crc kubenswrapper[4826]: I0131 07:47:30.368629 4826 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1482e43a-84a4-42ed-a605-37cc519dd5ef-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 07:47:30 crc kubenswrapper[4826]: I0131 07:47:30.775998 4826 generic.go:334] "Generic (PLEG): container finished" podID="59e8fc46-08a4-470a-be23-f53dbd0831d0" containerID="b9d76476663ddb0e1180114b1e0d9a664da7d1c6c8d4a5e5f7a8de5d34444c25" exitCode=0 Jan 31 07:47:30 crc kubenswrapper[4826]: I0131 07:47:30.776086 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gc5z" event={"ID":"59e8fc46-08a4-470a-be23-f53dbd0831d0","Type":"ContainerDied","Data":"b9d76476663ddb0e1180114b1e0d9a664da7d1c6c8d4a5e5f7a8de5d34444c25"} Jan 31 07:47:30 crc kubenswrapper[4826]: I0131 07:47:30.778377 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-hkw8j_1482e43a-84a4-42ed-a605-37cc519dd5ef/console/0.log" Jan 31 07:47:30 crc kubenswrapper[4826]: I0131 07:47:30.778433 4826 generic.go:334] "Generic (PLEG): container finished" podID="1482e43a-84a4-42ed-a605-37cc519dd5ef" containerID="82588ffd5423005c74768fcd38c4e8356f7e845ad18d4cdcb22f693bddaad45a" exitCode=2 Jan 31 07:47:30 crc kubenswrapper[4826]: I0131 07:47:30.778468 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-hkw8j" event={"ID":"1482e43a-84a4-42ed-a605-37cc519dd5ef","Type":"ContainerDied","Data":"82588ffd5423005c74768fcd38c4e8356f7e845ad18d4cdcb22f693bddaad45a"} Jan 31 07:47:30 crc kubenswrapper[4826]: I0131 07:47:30.778497 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-hkw8j" 
event={"ID":"1482e43a-84a4-42ed-a605-37cc519dd5ef","Type":"ContainerDied","Data":"19d271245fb7791eb591bd8dde736cb425a8da78707a875c94ae05c1f78766aa"} Jan 31 07:47:30 crc kubenswrapper[4826]: I0131 07:47:30.778512 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-hkw8j" Jan 31 07:47:30 crc kubenswrapper[4826]: I0131 07:47:30.778517 4826 scope.go:117] "RemoveContainer" containerID="82588ffd5423005c74768fcd38c4e8356f7e845ad18d4cdcb22f693bddaad45a" Jan 31 07:47:30 crc kubenswrapper[4826]: I0131 07:47:30.802957 4826 scope.go:117] "RemoveContainer" containerID="82588ffd5423005c74768fcd38c4e8356f7e845ad18d4cdcb22f693bddaad45a" Jan 31 07:47:30 crc kubenswrapper[4826]: E0131 07:47:30.807069 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82588ffd5423005c74768fcd38c4e8356f7e845ad18d4cdcb22f693bddaad45a\": container with ID starting with 82588ffd5423005c74768fcd38c4e8356f7e845ad18d4cdcb22f693bddaad45a not found: ID does not exist" containerID="82588ffd5423005c74768fcd38c4e8356f7e845ad18d4cdcb22f693bddaad45a" Jan 31 07:47:30 crc kubenswrapper[4826]: I0131 07:47:30.807115 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82588ffd5423005c74768fcd38c4e8356f7e845ad18d4cdcb22f693bddaad45a"} err="failed to get container status \"82588ffd5423005c74768fcd38c4e8356f7e845ad18d4cdcb22f693bddaad45a\": rpc error: code = NotFound desc = could not find container \"82588ffd5423005c74768fcd38c4e8356f7e845ad18d4cdcb22f693bddaad45a\": container with ID starting with 82588ffd5423005c74768fcd38c4e8356f7e845ad18d4cdcb22f693bddaad45a not found: ID does not exist" Jan 31 07:47:30 crc kubenswrapper[4826]: I0131 07:47:30.808569 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-hkw8j"] Jan 31 07:47:30 crc kubenswrapper[4826]: I0131 07:47:30.821568 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-hkw8j"] Jan 31 07:47:31 crc kubenswrapper[4826]: I0131 07:47:31.787443 4826 generic.go:334] "Generic (PLEG): container finished" podID="59e8fc46-08a4-470a-be23-f53dbd0831d0" containerID="0fe6ef9e41de4baeb6c3bf3ab5c3e28d1563e2a24e09083281d6c875f8d3b062" exitCode=0 Jan 31 07:47:31 crc kubenswrapper[4826]: I0131 07:47:31.787549 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gc5z" event={"ID":"59e8fc46-08a4-470a-be23-f53dbd0831d0","Type":"ContainerDied","Data":"0fe6ef9e41de4baeb6c3bf3ab5c3e28d1563e2a24e09083281d6c875f8d3b062"} Jan 31 07:47:32 crc kubenswrapper[4826]: I0131 07:47:32.822850 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1482e43a-84a4-42ed-a605-37cc519dd5ef" path="/var/lib/kubelet/pods/1482e43a-84a4-42ed-a605-37cc519dd5ef/volumes" Jan 31 07:47:33 crc kubenswrapper[4826]: I0131 07:47:33.096889 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gc5z" Jan 31 07:47:33 crc kubenswrapper[4826]: I0131 07:47:33.205411 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/59e8fc46-08a4-470a-be23-f53dbd0831d0-util\") pod \"59e8fc46-08a4-470a-be23-f53dbd0831d0\" (UID: \"59e8fc46-08a4-470a-be23-f53dbd0831d0\") " Jan 31 07:47:33 crc kubenswrapper[4826]: I0131 07:47:33.205556 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zb48v\" (UniqueName: \"kubernetes.io/projected/59e8fc46-08a4-470a-be23-f53dbd0831d0-kube-api-access-zb48v\") pod \"59e8fc46-08a4-470a-be23-f53dbd0831d0\" (UID: \"59e8fc46-08a4-470a-be23-f53dbd0831d0\") " Jan 31 07:47:33 crc kubenswrapper[4826]: I0131 07:47:33.205779 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/59e8fc46-08a4-470a-be23-f53dbd0831d0-bundle\") pod \"59e8fc46-08a4-470a-be23-f53dbd0831d0\" (UID: \"59e8fc46-08a4-470a-be23-f53dbd0831d0\") " Jan 31 07:47:33 crc kubenswrapper[4826]: I0131 07:47:33.206883 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59e8fc46-08a4-470a-be23-f53dbd0831d0-bundle" (OuterVolumeSpecName: "bundle") pod "59e8fc46-08a4-470a-be23-f53dbd0831d0" (UID: "59e8fc46-08a4-470a-be23-f53dbd0831d0"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:47:33 crc kubenswrapper[4826]: I0131 07:47:33.212892 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59e8fc46-08a4-470a-be23-f53dbd0831d0-kube-api-access-zb48v" (OuterVolumeSpecName: "kube-api-access-zb48v") pod "59e8fc46-08a4-470a-be23-f53dbd0831d0" (UID: "59e8fc46-08a4-470a-be23-f53dbd0831d0"). InnerVolumeSpecName "kube-api-access-zb48v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:47:33 crc kubenswrapper[4826]: I0131 07:47:33.220448 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59e8fc46-08a4-470a-be23-f53dbd0831d0-util" (OuterVolumeSpecName: "util") pod "59e8fc46-08a4-470a-be23-f53dbd0831d0" (UID: "59e8fc46-08a4-470a-be23-f53dbd0831d0"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:47:33 crc kubenswrapper[4826]: I0131 07:47:33.307398 4826 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/59e8fc46-08a4-470a-be23-f53dbd0831d0-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:47:33 crc kubenswrapper[4826]: I0131 07:47:33.307449 4826 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/59e8fc46-08a4-470a-be23-f53dbd0831d0-util\") on node \"crc\" DevicePath \"\"" Jan 31 07:47:33 crc kubenswrapper[4826]: I0131 07:47:33.307460 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zb48v\" (UniqueName: \"kubernetes.io/projected/59e8fc46-08a4-470a-be23-f53dbd0831d0-kube-api-access-zb48v\") on node \"crc\" DevicePath \"\"" Jan 31 07:47:33 crc kubenswrapper[4826]: I0131 07:47:33.815124 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gc5z" event={"ID":"59e8fc46-08a4-470a-be23-f53dbd0831d0","Type":"ContainerDied","Data":"7d9771746fd73b690fcf5cbe70b743dcef72ba31923069616ae5149d29881e81"} Jan 31 07:47:33 crc kubenswrapper[4826]: I0131 07:47:33.815671 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d9771746fd73b690fcf5cbe70b743dcef72ba31923069616ae5149d29881e81" Jan 31 07:47:33 crc kubenswrapper[4826]: I0131 07:47:33.815206 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gc5z" Jan 31 07:47:42 crc kubenswrapper[4826]: I0131 07:47:42.597590 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-5bdc44d88d-c4tc5"] Jan 31 07:47:42 crc kubenswrapper[4826]: E0131 07:47:42.598360 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1482e43a-84a4-42ed-a605-37cc519dd5ef" containerName="console" Jan 31 07:47:42 crc kubenswrapper[4826]: I0131 07:47:42.598374 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="1482e43a-84a4-42ed-a605-37cc519dd5ef" containerName="console" Jan 31 07:47:42 crc kubenswrapper[4826]: E0131 07:47:42.598392 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59e8fc46-08a4-470a-be23-f53dbd0831d0" containerName="extract" Jan 31 07:47:42 crc kubenswrapper[4826]: I0131 07:47:42.598398 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="59e8fc46-08a4-470a-be23-f53dbd0831d0" containerName="extract" Jan 31 07:47:42 crc kubenswrapper[4826]: E0131 07:47:42.598405 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59e8fc46-08a4-470a-be23-f53dbd0831d0" containerName="pull" Jan 31 07:47:42 crc kubenswrapper[4826]: I0131 07:47:42.598410 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="59e8fc46-08a4-470a-be23-f53dbd0831d0" containerName="pull" Jan 31 07:47:42 crc kubenswrapper[4826]: E0131 07:47:42.598421 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59e8fc46-08a4-470a-be23-f53dbd0831d0" containerName="util" Jan 31 07:47:42 crc kubenswrapper[4826]: I0131 07:47:42.598426 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="59e8fc46-08a4-470a-be23-f53dbd0831d0" containerName="util" Jan 31 07:47:42 crc kubenswrapper[4826]: I0131 07:47:42.598519 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="1482e43a-84a4-42ed-a605-37cc519dd5ef" containerName="console" Jan 
31 07:47:42 crc kubenswrapper[4826]: I0131 07:47:42.598532 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="59e8fc46-08a4-470a-be23-f53dbd0831d0" containerName="extract" Jan 31 07:47:42 crc kubenswrapper[4826]: I0131 07:47:42.599518 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5bdc44d88d-c4tc5" Jan 31 07:47:42 crc kubenswrapper[4826]: I0131 07:47:42.611152 4826 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-d7n97" Jan 31 07:47:42 crc kubenswrapper[4826]: I0131 07:47:42.611376 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 31 07:47:42 crc kubenswrapper[4826]: I0131 07:47:42.611514 4826 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 31 07:47:42 crc kubenswrapper[4826]: I0131 07:47:42.611671 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 31 07:47:42 crc kubenswrapper[4826]: I0131 07:47:42.612037 4826 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 31 07:47:42 crc kubenswrapper[4826]: I0131 07:47:42.623600 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5bdc44d88d-c4tc5"] Jan 31 07:47:42 crc kubenswrapper[4826]: I0131 07:47:42.623869 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/db79dc26-9fad-4b84-83bb-215331a5483a-webhook-cert\") pod \"metallb-operator-controller-manager-5bdc44d88d-c4tc5\" (UID: \"db79dc26-9fad-4b84-83bb-215331a5483a\") " pod="metallb-system/metallb-operator-controller-manager-5bdc44d88d-c4tc5" Jan 31 07:47:42 crc kubenswrapper[4826]: I0131 07:47:42.623913 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljmdp\" (UniqueName: \"kubernetes.io/projected/db79dc26-9fad-4b84-83bb-215331a5483a-kube-api-access-ljmdp\") pod \"metallb-operator-controller-manager-5bdc44d88d-c4tc5\" (UID: \"db79dc26-9fad-4b84-83bb-215331a5483a\") " pod="metallb-system/metallb-operator-controller-manager-5bdc44d88d-c4tc5" Jan 31 07:47:42 crc kubenswrapper[4826]: I0131 07:47:42.623933 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/db79dc26-9fad-4b84-83bb-215331a5483a-apiservice-cert\") pod \"metallb-operator-controller-manager-5bdc44d88d-c4tc5\" (UID: \"db79dc26-9fad-4b84-83bb-215331a5483a\") " pod="metallb-system/metallb-operator-controller-manager-5bdc44d88d-c4tc5" Jan 31 07:47:42 crc kubenswrapper[4826]: I0131 07:47:42.725082 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljmdp\" (UniqueName: \"kubernetes.io/projected/db79dc26-9fad-4b84-83bb-215331a5483a-kube-api-access-ljmdp\") pod \"metallb-operator-controller-manager-5bdc44d88d-c4tc5\" (UID: \"db79dc26-9fad-4b84-83bb-215331a5483a\") " pod="metallb-system/metallb-operator-controller-manager-5bdc44d88d-c4tc5" Jan 31 07:47:42 crc kubenswrapper[4826]: I0131 07:47:42.725301 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/db79dc26-9fad-4b84-83bb-215331a5483a-apiservice-cert\") pod \"metallb-operator-controller-manager-5bdc44d88d-c4tc5\" (UID: \"db79dc26-9fad-4b84-83bb-215331a5483a\") " pod="metallb-system/metallb-operator-controller-manager-5bdc44d88d-c4tc5" Jan 31 07:47:42 crc kubenswrapper[4826]: I0131 07:47:42.725503 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/db79dc26-9fad-4b84-83bb-215331a5483a-webhook-cert\") pod \"metallb-operator-controller-manager-5bdc44d88d-c4tc5\" (UID: \"db79dc26-9fad-4b84-83bb-215331a5483a\") " pod="metallb-system/metallb-operator-controller-manager-5bdc44d88d-c4tc5" Jan 31 07:47:42 crc kubenswrapper[4826]: I0131 07:47:42.730483 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/db79dc26-9fad-4b84-83bb-215331a5483a-webhook-cert\") pod \"metallb-operator-controller-manager-5bdc44d88d-c4tc5\" (UID: \"db79dc26-9fad-4b84-83bb-215331a5483a\") " pod="metallb-system/metallb-operator-controller-manager-5bdc44d88d-c4tc5" Jan 31 07:47:42 crc kubenswrapper[4826]: I0131 07:47:42.743681 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/db79dc26-9fad-4b84-83bb-215331a5483a-apiservice-cert\") pod \"metallb-operator-controller-manager-5bdc44d88d-c4tc5\" (UID: \"db79dc26-9fad-4b84-83bb-215331a5483a\") " pod="metallb-system/metallb-operator-controller-manager-5bdc44d88d-c4tc5" Jan 31 07:47:42 crc kubenswrapper[4826]: I0131 07:47:42.756390 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljmdp\" (UniqueName: \"kubernetes.io/projected/db79dc26-9fad-4b84-83bb-215331a5483a-kube-api-access-ljmdp\") pod \"metallb-operator-controller-manager-5bdc44d88d-c4tc5\" (UID: \"db79dc26-9fad-4b84-83bb-215331a5483a\") " pod="metallb-system/metallb-operator-controller-manager-5bdc44d88d-c4tc5" Jan 31 07:47:42 crc kubenswrapper[4826]: I0131 07:47:42.871397 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-797dbbd75c-cvsqs"] Jan 31 07:47:42 crc kubenswrapper[4826]: I0131 07:47:42.872228 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-797dbbd75c-cvsqs" Jan 31 07:47:42 crc kubenswrapper[4826]: I0131 07:47:42.874088 4826 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-v25d8" Jan 31 07:47:42 crc kubenswrapper[4826]: I0131 07:47:42.874229 4826 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 31 07:47:42 crc kubenswrapper[4826]: I0131 07:47:42.874480 4826 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 31 07:47:42 crc kubenswrapper[4826]: I0131 07:47:42.896923 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-797dbbd75c-cvsqs"] Jan 31 07:47:42 crc kubenswrapper[4826]: I0131 07:47:42.925041 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5bdc44d88d-c4tc5" Jan 31 07:47:43 crc kubenswrapper[4826]: I0131 07:47:43.029381 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/17cb81be-ee8b-4a61-86ed-569e1a552d3e-webhook-cert\") pod \"metallb-operator-webhook-server-797dbbd75c-cvsqs\" (UID: \"17cb81be-ee8b-4a61-86ed-569e1a552d3e\") " pod="metallb-system/metallb-operator-webhook-server-797dbbd75c-cvsqs" Jan 31 07:47:43 crc kubenswrapper[4826]: I0131 07:47:43.029669 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/17cb81be-ee8b-4a61-86ed-569e1a552d3e-apiservice-cert\") pod \"metallb-operator-webhook-server-797dbbd75c-cvsqs\" (UID: \"17cb81be-ee8b-4a61-86ed-569e1a552d3e\") " pod="metallb-system/metallb-operator-webhook-server-797dbbd75c-cvsqs" Jan 31 07:47:43 crc kubenswrapper[4826]: I0131 07:47:43.029692 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjjl2\" (UniqueName: \"kubernetes.io/projected/17cb81be-ee8b-4a61-86ed-569e1a552d3e-kube-api-access-qjjl2\") pod \"metallb-operator-webhook-server-797dbbd75c-cvsqs\" (UID: \"17cb81be-ee8b-4a61-86ed-569e1a552d3e\") " pod="metallb-system/metallb-operator-webhook-server-797dbbd75c-cvsqs" Jan 31 07:47:43 crc kubenswrapper[4826]: I0131 07:47:43.130433 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/17cb81be-ee8b-4a61-86ed-569e1a552d3e-apiservice-cert\") pod \"metallb-operator-webhook-server-797dbbd75c-cvsqs\" (UID: \"17cb81be-ee8b-4a61-86ed-569e1a552d3e\") " pod="metallb-system/metallb-operator-webhook-server-797dbbd75c-cvsqs" Jan 31 07:47:43 crc kubenswrapper[4826]: I0131 07:47:43.130469 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjjl2\" (UniqueName: \"kubernetes.io/projected/17cb81be-ee8b-4a61-86ed-569e1a552d3e-kube-api-access-qjjl2\") pod \"metallb-operator-webhook-server-797dbbd75c-cvsqs\" (UID: \"17cb81be-ee8b-4a61-86ed-569e1a552d3e\") " pod="metallb-system/metallb-operator-webhook-server-797dbbd75c-cvsqs" Jan 31 07:47:43 crc kubenswrapper[4826]: I0131 07:47:43.130546 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/17cb81be-ee8b-4a61-86ed-569e1a552d3e-webhook-cert\") pod \"metallb-operator-webhook-server-797dbbd75c-cvsqs\" (UID: \"17cb81be-ee8b-4a61-86ed-569e1a552d3e\") " pod="metallb-system/metallb-operator-webhook-server-797dbbd75c-cvsqs" Jan 31 07:47:43 crc kubenswrapper[4826]: I0131 07:47:43.143707 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/17cb81be-ee8b-4a61-86ed-569e1a552d3e-webhook-cert\") pod \"metallb-operator-webhook-server-797dbbd75c-cvsqs\" (UID: \"17cb81be-ee8b-4a61-86ed-569e1a552d3e\") " pod="metallb-system/metallb-operator-webhook-server-797dbbd75c-cvsqs" Jan 31 07:47:43 crc kubenswrapper[4826]: I0131 07:47:43.143714 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/17cb81be-ee8b-4a61-86ed-569e1a552d3e-apiservice-cert\") pod \"metallb-operator-webhook-server-797dbbd75c-cvsqs\" (UID: \"17cb81be-ee8b-4a61-86ed-569e1a552d3e\") " 
pod="metallb-system/metallb-operator-webhook-server-797dbbd75c-cvsqs" Jan 31 07:47:43 crc kubenswrapper[4826]: I0131 07:47:43.146740 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjjl2\" (UniqueName: \"kubernetes.io/projected/17cb81be-ee8b-4a61-86ed-569e1a552d3e-kube-api-access-qjjl2\") pod \"metallb-operator-webhook-server-797dbbd75c-cvsqs\" (UID: \"17cb81be-ee8b-4a61-86ed-569e1a552d3e\") " pod="metallb-system/metallb-operator-webhook-server-797dbbd75c-cvsqs" Jan 31 07:47:43 crc kubenswrapper[4826]: I0131 07:47:43.187294 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-797dbbd75c-cvsqs" Jan 31 07:47:43 crc kubenswrapper[4826]: I0131 07:47:43.344293 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5bdc44d88d-c4tc5"] Jan 31 07:47:43 crc kubenswrapper[4826]: W0131 07:47:43.350265 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb79dc26_9fad_4b84_83bb_215331a5483a.slice/crio-9591e06447ffbc408b185b101f6f1585050735b03dde2b622848296da30c791a WatchSource:0}: Error finding container 9591e06447ffbc408b185b101f6f1585050735b03dde2b622848296da30c791a: Status 404 returned error can't find the container with id 9591e06447ffbc408b185b101f6f1585050735b03dde2b622848296da30c791a Jan 31 07:47:43 crc kubenswrapper[4826]: I0131 07:47:43.388526 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-797dbbd75c-cvsqs"] Jan 31 07:47:43 crc kubenswrapper[4826]: W0131 07:47:43.392677 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17cb81be_ee8b_4a61_86ed_569e1a552d3e.slice/crio-ec715b84ac67190dc2ba2defab0d4e1bd7af8cf63b112bff6e0a2c768b6cbc4b WatchSource:0}: Error finding container ec715b84ac67190dc2ba2defab0d4e1bd7af8cf63b112bff6e0a2c768b6cbc4b: Status 404 returned error can't find the container with id ec715b84ac67190dc2ba2defab0d4e1bd7af8cf63b112bff6e0a2c768b6cbc4b Jan 31 07:47:43 crc kubenswrapper[4826]: I0131 07:47:43.867653 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-797dbbd75c-cvsqs" event={"ID":"17cb81be-ee8b-4a61-86ed-569e1a552d3e","Type":"ContainerStarted","Data":"ec715b84ac67190dc2ba2defab0d4e1bd7af8cf63b112bff6e0a2c768b6cbc4b"} Jan 31 07:47:43 crc kubenswrapper[4826]: I0131 07:47:43.868835 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5bdc44d88d-c4tc5" event={"ID":"db79dc26-9fad-4b84-83bb-215331a5483a","Type":"ContainerStarted","Data":"9591e06447ffbc408b185b101f6f1585050735b03dde2b622848296da30c791a"} Jan 31 07:47:47 crc kubenswrapper[4826]: I0131 07:47:47.897622 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5bdc44d88d-c4tc5" event={"ID":"db79dc26-9fad-4b84-83bb-215331a5483a","Type":"ContainerStarted","Data":"1af1bd193c56f9effd9071cef3b9fb7ebe3e0efe666814d0e2641ea158fc5f74"} Jan 31 07:47:47 crc kubenswrapper[4826]: I0131 07:47:47.897992 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5bdc44d88d-c4tc5" Jan 31 07:47:47 crc kubenswrapper[4826]: I0131 07:47:47.899368 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/metallb-operator-webhook-server-797dbbd75c-cvsqs" event={"ID":"17cb81be-ee8b-4a61-86ed-569e1a552d3e","Type":"ContainerStarted","Data":"e56aba79e75d961d07c9cfa13d9c2fcb44f5066e76d2fe4715e463298d5d7b55"} Jan 31 07:47:47 crc kubenswrapper[4826]: I0131 07:47:47.899571 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-797dbbd75c-cvsqs" Jan 31 07:47:47 crc kubenswrapper[4826]: I0131 07:47:47.916633 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-5bdc44d88d-c4tc5" podStartSLOduration=1.6995124339999998 podStartE2EDuration="5.916611652s" podCreationTimestamp="2026-01-31 07:47:42 +0000 UTC" firstStartedPulling="2026-01-31 07:47:43.3524052 +0000 UTC m=+695.206291559" lastFinishedPulling="2026-01-31 07:47:47.569504418 +0000 UTC m=+699.423390777" observedRunningTime="2026-01-31 07:47:47.914350158 +0000 UTC m=+699.768236547" watchObservedRunningTime="2026-01-31 07:47:47.916611652 +0000 UTC m=+699.770498021" Jan 31 07:47:47 crc kubenswrapper[4826]: I0131 07:47:47.937279 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-797dbbd75c-cvsqs" podStartSLOduration=1.746786507 podStartE2EDuration="5.937258983s" podCreationTimestamp="2026-01-31 07:47:42 +0000 UTC" firstStartedPulling="2026-01-31 07:47:43.396060339 +0000 UTC m=+695.249946698" lastFinishedPulling="2026-01-31 07:47:47.586532815 +0000 UTC m=+699.440419174" observedRunningTime="2026-01-31 07:47:47.935358639 +0000 UTC m=+699.789245008" watchObservedRunningTime="2026-01-31 07:47:47.937258983 +0000 UTC m=+699.791145342" Jan 31 07:48:03 crc kubenswrapper[4826]: I0131 07:48:03.194682 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-797dbbd75c-cvsqs" Jan 31 07:48:22 crc kubenswrapper[4826]: I0131 07:48:22.928461 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5bdc44d88d-c4tc5" Jan 31 07:48:23 crc kubenswrapper[4826]: I0131 07:48:23.718958 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-6wcdg"] Jan 31 07:48:23 crc kubenswrapper[4826]: I0131 07:48:23.732546 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-6wcdg" Jan 31 07:48:23 crc kubenswrapper[4826]: I0131 07:48:23.736721 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 31 07:48:23 crc kubenswrapper[4826]: I0131 07:48:23.736735 4826 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-bvkbr" Jan 31 07:48:23 crc kubenswrapper[4826]: I0131 07:48:23.736640 4826 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 31 07:48:23 crc kubenswrapper[4826]: I0131 07:48:23.743518 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-blwgn"] Jan 31 07:48:23 crc kubenswrapper[4826]: I0131 07:48:23.749791 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-blwgn" Jan 31 07:48:23 crc kubenswrapper[4826]: I0131 07:48:23.751168 4826 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 31 07:48:23 crc kubenswrapper[4826]: I0131 07:48:23.756737 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-blwgn"] Jan 31 07:48:23 crc kubenswrapper[4826]: I0131 07:48:23.820481 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d84d4444-e704-4e8e-beb5-38ad127d66d8-metrics\") pod \"frr-k8s-6wcdg\" (UID: \"d84d4444-e704-4e8e-beb5-38ad127d66d8\") " pod="metallb-system/frr-k8s-6wcdg" Jan 31 07:48:23 crc kubenswrapper[4826]: I0131 07:48:23.820536 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrtkc\" (UniqueName: \"kubernetes.io/projected/6e2beca9-fa5b-4978-b763-1b9a9283e8fc-kube-api-access-nrtkc\") pod \"frr-k8s-webhook-server-7df86c4f6c-blwgn\" (UID: \"6e2beca9-fa5b-4978-b763-1b9a9283e8fc\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-blwgn" Jan 31 07:48:23 crc kubenswrapper[4826]: I0131 07:48:23.820582 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d84d4444-e704-4e8e-beb5-38ad127d66d8-frr-conf\") pod \"frr-k8s-6wcdg\" (UID: \"d84d4444-e704-4e8e-beb5-38ad127d66d8\") " pod="metallb-system/frr-k8s-6wcdg" Jan 31 07:48:23 crc kubenswrapper[4826]: I0131 07:48:23.820609 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d84d4444-e704-4e8e-beb5-38ad127d66d8-reloader\") pod \"frr-k8s-6wcdg\" (UID: \"d84d4444-e704-4e8e-beb5-38ad127d66d8\") " pod="metallb-system/frr-k8s-6wcdg" Jan 31 07:48:23 crc kubenswrapper[4826]: I0131 07:48:23.820665 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d84d4444-e704-4e8e-beb5-38ad127d66d8-metrics-certs\") pod \"frr-k8s-6wcdg\" (UID: \"d84d4444-e704-4e8e-beb5-38ad127d66d8\") " pod="metallb-system/frr-k8s-6wcdg" Jan 31 07:48:23 crc kubenswrapper[4826]: I0131 07:48:23.820697 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d84d4444-e704-4e8e-beb5-38ad127d66d8-frr-sockets\") pod \"frr-k8s-6wcdg\" (UID: \"d84d4444-e704-4e8e-beb5-38ad127d66d8\") " pod="metallb-system/frr-k8s-6wcdg" Jan 31 07:48:23 crc kubenswrapper[4826]: I0131 07:48:23.820718 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8twpd\" (UniqueName: \"kubernetes.io/projected/d84d4444-e704-4e8e-beb5-38ad127d66d8-kube-api-access-8twpd\") pod \"frr-k8s-6wcdg\" (UID: \"d84d4444-e704-4e8e-beb5-38ad127d66d8\") " pod="metallb-system/frr-k8s-6wcdg" Jan 31 07:48:23 crc kubenswrapper[4826]: I0131 07:48:23.820742 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d84d4444-e704-4e8e-beb5-38ad127d66d8-frr-startup\") pod \"frr-k8s-6wcdg\" (UID: \"d84d4444-e704-4e8e-beb5-38ad127d66d8\") " pod="metallb-system/frr-k8s-6wcdg" Jan 31 07:48:23 crc 
kubenswrapper[4826]: I0131 07:48:23.820841 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6e2beca9-fa5b-4978-b763-1b9a9283e8fc-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-blwgn\" (UID: \"6e2beca9-fa5b-4978-b763-1b9a9283e8fc\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-blwgn" Jan 31 07:48:23 crc kubenswrapper[4826]: I0131 07:48:23.831001 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-lzpb4"] Jan 31 07:48:23 crc kubenswrapper[4826]: I0131 07:48:23.831896 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-lzpb4" Jan 31 07:48:23 crc kubenswrapper[4826]: I0131 07:48:23.834488 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 31 07:48:23 crc kubenswrapper[4826]: I0131 07:48:23.834563 4826 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-pdwnp" Jan 31 07:48:23 crc kubenswrapper[4826]: I0131 07:48:23.835357 4826 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 31 07:48:23 crc kubenswrapper[4826]: I0131 07:48:23.835601 4826 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 31 07:48:23 crc kubenswrapper[4826]: I0131 07:48:23.855517 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-5fjcs"] Jan 31 07:48:23 crc kubenswrapper[4826]: I0131 07:48:23.856445 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-5fjcs" Jan 31 07:48:23 crc kubenswrapper[4826]: I0131 07:48:23.858172 4826 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 31 07:48:23 crc kubenswrapper[4826]: I0131 07:48:23.876746 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-5fjcs"] Jan 31 07:48:23 crc kubenswrapper[4826]: I0131 07:48:23.922035 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c9099222-adcc-4af7-892f-7c28ea834fda-metallb-excludel2\") pod \"speaker-lzpb4\" (UID: \"c9099222-adcc-4af7-892f-7c28ea834fda\") " pod="metallb-system/speaker-lzpb4" Jan 31 07:48:23 crc kubenswrapper[4826]: I0131 07:48:23.922096 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xtxm\" (UniqueName: \"kubernetes.io/projected/c9099222-adcc-4af7-892f-7c28ea834fda-kube-api-access-4xtxm\") pod \"speaker-lzpb4\" (UID: \"c9099222-adcc-4af7-892f-7c28ea834fda\") " pod="metallb-system/speaker-lzpb4" Jan 31 07:48:23 crc kubenswrapper[4826]: I0131 07:48:23.922196 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d84d4444-e704-4e8e-beb5-38ad127d66d8-metrics\") pod \"frr-k8s-6wcdg\" (UID: \"d84d4444-e704-4e8e-beb5-38ad127d66d8\") " pod="metallb-system/frr-k8s-6wcdg" Jan 31 07:48:23 crc kubenswrapper[4826]: I0131 07:48:23.922243 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrtkc\" (UniqueName: \"kubernetes.io/projected/6e2beca9-fa5b-4978-b763-1b9a9283e8fc-kube-api-access-nrtkc\") pod \"frr-k8s-webhook-server-7df86c4f6c-blwgn\" (UID: 
\"6e2beca9-fa5b-4978-b763-1b9a9283e8fc\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-blwgn" Jan 31 07:48:23 crc kubenswrapper[4826]: I0131 07:48:23.922280 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b772a768-496d-4cab-9480-e2f5966c417b-metrics-certs\") pod \"controller-6968d8fdc4-5fjcs\" (UID: \"b772a768-496d-4cab-9480-e2f5966c417b\") " pod="metallb-system/controller-6968d8fdc4-5fjcs" Jan 31 07:48:23 crc kubenswrapper[4826]: I0131 07:48:23.922326 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b772a768-496d-4cab-9480-e2f5966c417b-cert\") pod \"controller-6968d8fdc4-5fjcs\" (UID: \"b772a768-496d-4cab-9480-e2f5966c417b\") " pod="metallb-system/controller-6968d8fdc4-5fjcs" Jan 31 07:48:23 crc kubenswrapper[4826]: I0131 07:48:23.922352 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c9099222-adcc-4af7-892f-7c28ea834fda-metrics-certs\") pod \"speaker-lzpb4\" (UID: \"c9099222-adcc-4af7-892f-7c28ea834fda\") " pod="metallb-system/speaker-lzpb4" Jan 31 07:48:23 crc kubenswrapper[4826]: I0131 07:48:23.922405 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d84d4444-e704-4e8e-beb5-38ad127d66d8-frr-conf\") pod \"frr-k8s-6wcdg\" (UID: \"d84d4444-e704-4e8e-beb5-38ad127d66d8\") " pod="metallb-system/frr-k8s-6wcdg" Jan 31 07:48:23 crc kubenswrapper[4826]: I0131 07:48:23.922436 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d84d4444-e704-4e8e-beb5-38ad127d66d8-reloader\") pod \"frr-k8s-6wcdg\" (UID: \"d84d4444-e704-4e8e-beb5-38ad127d66d8\") " pod="metallb-system/frr-k8s-6wcdg" Jan 31 07:48:23 crc kubenswrapper[4826]: I0131 07:48:23.922463 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzhzg\" (UniqueName: \"kubernetes.io/projected/b772a768-496d-4cab-9480-e2f5966c417b-kube-api-access-qzhzg\") pod \"controller-6968d8fdc4-5fjcs\" (UID: \"b772a768-496d-4cab-9480-e2f5966c417b\") " pod="metallb-system/controller-6968d8fdc4-5fjcs" Jan 31 07:48:23 crc kubenswrapper[4826]: I0131 07:48:23.922483 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d84d4444-e704-4e8e-beb5-38ad127d66d8-metrics-certs\") pod \"frr-k8s-6wcdg\" (UID: \"d84d4444-e704-4e8e-beb5-38ad127d66d8\") " pod="metallb-system/frr-k8s-6wcdg" Jan 31 07:48:23 crc kubenswrapper[4826]: I0131 07:48:23.922513 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c9099222-adcc-4af7-892f-7c28ea834fda-memberlist\") pod \"speaker-lzpb4\" (UID: \"c9099222-adcc-4af7-892f-7c28ea834fda\") " pod="metallb-system/speaker-lzpb4" Jan 31 07:48:23 crc kubenswrapper[4826]: I0131 07:48:23.922626 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d84d4444-e704-4e8e-beb5-38ad127d66d8-frr-sockets\") pod \"frr-k8s-6wcdg\" (UID: \"d84d4444-e704-4e8e-beb5-38ad127d66d8\") " pod="metallb-system/frr-k8s-6wcdg" Jan 31 07:48:23 crc kubenswrapper[4826]: I0131 
07:48:23.922667 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8twpd\" (UniqueName: \"kubernetes.io/projected/d84d4444-e704-4e8e-beb5-38ad127d66d8-kube-api-access-8twpd\") pod \"frr-k8s-6wcdg\" (UID: \"d84d4444-e704-4e8e-beb5-38ad127d66d8\") " pod="metallb-system/frr-k8s-6wcdg" Jan 31 07:48:23 crc kubenswrapper[4826]: I0131 07:48:23.922697 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d84d4444-e704-4e8e-beb5-38ad127d66d8-frr-startup\") pod \"frr-k8s-6wcdg\" (UID: \"d84d4444-e704-4e8e-beb5-38ad127d66d8\") " pod="metallb-system/frr-k8s-6wcdg" Jan 31 07:48:23 crc kubenswrapper[4826]: I0131 07:48:23.922720 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6e2beca9-fa5b-4978-b763-1b9a9283e8fc-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-blwgn\" (UID: \"6e2beca9-fa5b-4978-b763-1b9a9283e8fc\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-blwgn" Jan 31 07:48:23 crc kubenswrapper[4826]: I0131 07:48:23.922906 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d84d4444-e704-4e8e-beb5-38ad127d66d8-metrics\") pod \"frr-k8s-6wcdg\" (UID: \"d84d4444-e704-4e8e-beb5-38ad127d66d8\") " pod="metallb-system/frr-k8s-6wcdg" Jan 31 07:48:23 crc kubenswrapper[4826]: I0131 07:48:23.922922 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d84d4444-e704-4e8e-beb5-38ad127d66d8-frr-conf\") pod \"frr-k8s-6wcdg\" (UID: \"d84d4444-e704-4e8e-beb5-38ad127d66d8\") " pod="metallb-system/frr-k8s-6wcdg" Jan 31 07:48:23 crc kubenswrapper[4826]: I0131 07:48:23.922952 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d84d4444-e704-4e8e-beb5-38ad127d66d8-reloader\") pod \"frr-k8s-6wcdg\" (UID: \"d84d4444-e704-4e8e-beb5-38ad127d66d8\") " pod="metallb-system/frr-k8s-6wcdg" Jan 31 07:48:23 crc kubenswrapper[4826]: I0131 07:48:23.922996 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d84d4444-e704-4e8e-beb5-38ad127d66d8-frr-sockets\") pod \"frr-k8s-6wcdg\" (UID: \"d84d4444-e704-4e8e-beb5-38ad127d66d8\") " pod="metallb-system/frr-k8s-6wcdg" Jan 31 07:48:23 crc kubenswrapper[4826]: I0131 07:48:23.923621 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d84d4444-e704-4e8e-beb5-38ad127d66d8-frr-startup\") pod \"frr-k8s-6wcdg\" (UID: \"d84d4444-e704-4e8e-beb5-38ad127d66d8\") " pod="metallb-system/frr-k8s-6wcdg" Jan 31 07:48:23 crc kubenswrapper[4826]: I0131 07:48:23.929696 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6e2beca9-fa5b-4978-b763-1b9a9283e8fc-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-blwgn\" (UID: \"6e2beca9-fa5b-4978-b763-1b9a9283e8fc\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-blwgn" Jan 31 07:48:23 crc kubenswrapper[4826]: I0131 07:48:23.937363 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d84d4444-e704-4e8e-beb5-38ad127d66d8-metrics-certs\") pod \"frr-k8s-6wcdg\" (UID: \"d84d4444-e704-4e8e-beb5-38ad127d66d8\") " pod="metallb-system/frr-k8s-6wcdg" Jan 31 
07:48:23 crc kubenswrapper[4826]: I0131 07:48:23.938211 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrtkc\" (UniqueName: \"kubernetes.io/projected/6e2beca9-fa5b-4978-b763-1b9a9283e8fc-kube-api-access-nrtkc\") pod \"frr-k8s-webhook-server-7df86c4f6c-blwgn\" (UID: \"6e2beca9-fa5b-4978-b763-1b9a9283e8fc\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-blwgn" Jan 31 07:48:23 crc kubenswrapper[4826]: I0131 07:48:23.938207 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8twpd\" (UniqueName: \"kubernetes.io/projected/d84d4444-e704-4e8e-beb5-38ad127d66d8-kube-api-access-8twpd\") pod \"frr-k8s-6wcdg\" (UID: \"d84d4444-e704-4e8e-beb5-38ad127d66d8\") " pod="metallb-system/frr-k8s-6wcdg" Jan 31 07:48:24 crc kubenswrapper[4826]: I0131 07:48:24.023476 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b772a768-496d-4cab-9480-e2f5966c417b-cert\") pod \"controller-6968d8fdc4-5fjcs\" (UID: \"b772a768-496d-4cab-9480-e2f5966c417b\") " pod="metallb-system/controller-6968d8fdc4-5fjcs" Jan 31 07:48:24 crc kubenswrapper[4826]: I0131 07:48:24.023532 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c9099222-adcc-4af7-892f-7c28ea834fda-metrics-certs\") pod \"speaker-lzpb4\" (UID: \"c9099222-adcc-4af7-892f-7c28ea834fda\") " pod="metallb-system/speaker-lzpb4" Jan 31 07:48:24 crc kubenswrapper[4826]: I0131 07:48:24.023572 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzhzg\" (UniqueName: \"kubernetes.io/projected/b772a768-496d-4cab-9480-e2f5966c417b-kube-api-access-qzhzg\") pod \"controller-6968d8fdc4-5fjcs\" (UID: \"b772a768-496d-4cab-9480-e2f5966c417b\") " pod="metallb-system/controller-6968d8fdc4-5fjcs" Jan 31 07:48:24 crc kubenswrapper[4826]: I0131 07:48:24.023600 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c9099222-adcc-4af7-892f-7c28ea834fda-memberlist\") pod \"speaker-lzpb4\" (UID: \"c9099222-adcc-4af7-892f-7c28ea834fda\") " pod="metallb-system/speaker-lzpb4" Jan 31 07:48:24 crc kubenswrapper[4826]: I0131 07:48:24.023648 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c9099222-adcc-4af7-892f-7c28ea834fda-metallb-excludel2\") pod \"speaker-lzpb4\" (UID: \"c9099222-adcc-4af7-892f-7c28ea834fda\") " pod="metallb-system/speaker-lzpb4" Jan 31 07:48:24 crc kubenswrapper[4826]: I0131 07:48:24.023677 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xtxm\" (UniqueName: \"kubernetes.io/projected/c9099222-adcc-4af7-892f-7c28ea834fda-kube-api-access-4xtxm\") pod \"speaker-lzpb4\" (UID: \"c9099222-adcc-4af7-892f-7c28ea834fda\") " pod="metallb-system/speaker-lzpb4" Jan 31 07:48:24 crc kubenswrapper[4826]: I0131 07:48:24.023711 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b772a768-496d-4cab-9480-e2f5966c417b-metrics-certs\") pod \"controller-6968d8fdc4-5fjcs\" (UID: \"b772a768-496d-4cab-9480-e2f5966c417b\") " pod="metallb-system/controller-6968d8fdc4-5fjcs" Jan 31 07:48:24 crc kubenswrapper[4826]: E0131 07:48:24.023748 4826 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret 
"metallb-memberlist" not found Jan 31 07:48:24 crc kubenswrapper[4826]: E0131 07:48:24.023807 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c9099222-adcc-4af7-892f-7c28ea834fda-memberlist podName:c9099222-adcc-4af7-892f-7c28ea834fda nodeName:}" failed. No retries permitted until 2026-01-31 07:48:24.52378725 +0000 UTC m=+736.377673609 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/c9099222-adcc-4af7-892f-7c28ea834fda-memberlist") pod "speaker-lzpb4" (UID: "c9099222-adcc-4af7-892f-7c28ea834fda") : secret "metallb-memberlist" not found Jan 31 07:48:24 crc kubenswrapper[4826]: I0131 07:48:24.025467 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c9099222-adcc-4af7-892f-7c28ea834fda-metallb-excludel2\") pod \"speaker-lzpb4\" (UID: \"c9099222-adcc-4af7-892f-7c28ea834fda\") " pod="metallb-system/speaker-lzpb4" Jan 31 07:48:24 crc kubenswrapper[4826]: I0131 07:48:24.025628 4826 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 31 07:48:24 crc kubenswrapper[4826]: I0131 07:48:24.027016 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b772a768-496d-4cab-9480-e2f5966c417b-metrics-certs\") pod \"controller-6968d8fdc4-5fjcs\" (UID: \"b772a768-496d-4cab-9480-e2f5966c417b\") " pod="metallb-system/controller-6968d8fdc4-5fjcs" Jan 31 07:48:24 crc kubenswrapper[4826]: I0131 07:48:24.028086 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c9099222-adcc-4af7-892f-7c28ea834fda-metrics-certs\") pod \"speaker-lzpb4\" (UID: \"c9099222-adcc-4af7-892f-7c28ea834fda\") " pod="metallb-system/speaker-lzpb4" Jan 31 07:48:24 crc kubenswrapper[4826]: I0131 07:48:24.037806 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b772a768-496d-4cab-9480-e2f5966c417b-cert\") pod \"controller-6968d8fdc4-5fjcs\" (UID: \"b772a768-496d-4cab-9480-e2f5966c417b\") " pod="metallb-system/controller-6968d8fdc4-5fjcs" Jan 31 07:48:24 crc kubenswrapper[4826]: I0131 07:48:24.040081 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xtxm\" (UniqueName: \"kubernetes.io/projected/c9099222-adcc-4af7-892f-7c28ea834fda-kube-api-access-4xtxm\") pod \"speaker-lzpb4\" (UID: \"c9099222-adcc-4af7-892f-7c28ea834fda\") " pod="metallb-system/speaker-lzpb4" Jan 31 07:48:24 crc kubenswrapper[4826]: I0131 07:48:24.042284 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzhzg\" (UniqueName: \"kubernetes.io/projected/b772a768-496d-4cab-9480-e2f5966c417b-kube-api-access-qzhzg\") pod \"controller-6968d8fdc4-5fjcs\" (UID: \"b772a768-496d-4cab-9480-e2f5966c417b\") " pod="metallb-system/controller-6968d8fdc4-5fjcs" Jan 31 07:48:24 crc kubenswrapper[4826]: I0131 07:48:24.050022 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-6wcdg" Jan 31 07:48:24 crc kubenswrapper[4826]: I0131 07:48:24.066775 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-blwgn" Jan 31 07:48:24 crc kubenswrapper[4826]: I0131 07:48:24.169766 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-5fjcs" Jan 31 07:48:24 crc kubenswrapper[4826]: I0131 07:48:24.375375 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-5fjcs"] Jan 31 07:48:24 crc kubenswrapper[4826]: W0131 07:48:24.381162 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb772a768_496d_4cab_9480_e2f5966c417b.slice/crio-06d609d8de5733a6bbac6acb542af195b9bc236c01d7768976ed2282531a9f50 WatchSource:0}: Error finding container 06d609d8de5733a6bbac6acb542af195b9bc236c01d7768976ed2282531a9f50: Status 404 returned error can't find the container with id 06d609d8de5733a6bbac6acb542af195b9bc236c01d7768976ed2282531a9f50 Jan 31 07:48:24 crc kubenswrapper[4826]: I0131 07:48:24.474118 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-blwgn"] Jan 31 07:48:24 crc kubenswrapper[4826]: W0131 07:48:24.478522 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e2beca9_fa5b_4978_b763_1b9a9283e8fc.slice/crio-b6fb4fe2dd83c3572fc9b4ba2da8bb690db419b7d97bb1708691521d68631f73 WatchSource:0}: Error finding container b6fb4fe2dd83c3572fc9b4ba2da8bb690db419b7d97bb1708691521d68631f73: Status 404 returned error can't find the container with id b6fb4fe2dd83c3572fc9b4ba2da8bb690db419b7d97bb1708691521d68631f73 Jan 31 07:48:24 crc kubenswrapper[4826]: I0131 07:48:24.528600 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c9099222-adcc-4af7-892f-7c28ea834fda-memberlist\") pod \"speaker-lzpb4\" (UID: \"c9099222-adcc-4af7-892f-7c28ea834fda\") " pod="metallb-system/speaker-lzpb4" Jan 31 07:48:24 crc kubenswrapper[4826]: E0131 07:48:24.528826 4826 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 31 07:48:24 crc kubenswrapper[4826]: E0131 07:48:24.528886 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c9099222-adcc-4af7-892f-7c28ea834fda-memberlist podName:c9099222-adcc-4af7-892f-7c28ea834fda nodeName:}" failed. No retries permitted until 2026-01-31 07:48:25.528869793 +0000 UTC m=+737.382756172 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/c9099222-adcc-4af7-892f-7c28ea834fda-memberlist") pod "speaker-lzpb4" (UID: "c9099222-adcc-4af7-892f-7c28ea834fda") : secret "metallb-memberlist" not found Jan 31 07:48:25 crc kubenswrapper[4826]: I0131 07:48:25.131539 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6wcdg" event={"ID":"d84d4444-e704-4e8e-beb5-38ad127d66d8","Type":"ContainerStarted","Data":"f33b9acbc21a1aafe67938e7634246f5eda4abed65511619c8ce2a884194ebca"} Jan 31 07:48:25 crc kubenswrapper[4826]: I0131 07:48:25.135418 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-5fjcs" event={"ID":"b772a768-496d-4cab-9480-e2f5966c417b","Type":"ContainerStarted","Data":"b89e7cea5e4146abe0c96ba78447d3d359148a3cf8a7478d0909538df84a59a3"} Jan 31 07:48:25 crc kubenswrapper[4826]: I0131 07:48:25.135497 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-5fjcs" event={"ID":"b772a768-496d-4cab-9480-e2f5966c417b","Type":"ContainerStarted","Data":"de46dd9b7b1ade8acd975e5fe7cc21295b3c3e7653641977c54d2cda7cc8dad6"} Jan 31 07:48:25 crc kubenswrapper[4826]: I0131 07:48:25.135521 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-5fjcs" event={"ID":"b772a768-496d-4cab-9480-e2f5966c417b","Type":"ContainerStarted","Data":"06d609d8de5733a6bbac6acb542af195b9bc236c01d7768976ed2282531a9f50"} Jan 31 07:48:25 crc kubenswrapper[4826]: I0131 07:48:25.136094 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-5fjcs" Jan 31 07:48:25 crc kubenswrapper[4826]: I0131 07:48:25.137052 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-blwgn" event={"ID":"6e2beca9-fa5b-4978-b763-1b9a9283e8fc","Type":"ContainerStarted","Data":"b6fb4fe2dd83c3572fc9b4ba2da8bb690db419b7d97bb1708691521d68631f73"} Jan 31 07:48:25 crc kubenswrapper[4826]: I0131 07:48:25.156929 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-5fjcs" podStartSLOduration=2.156906557 podStartE2EDuration="2.156906557s" podCreationTimestamp="2026-01-31 07:48:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:48:25.153467039 +0000 UTC m=+737.007353468" watchObservedRunningTime="2026-01-31 07:48:25.156906557 +0000 UTC m=+737.010792916" Jan 31 07:48:25 crc kubenswrapper[4826]: I0131 07:48:25.541843 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c9099222-adcc-4af7-892f-7c28ea834fda-memberlist\") pod \"speaker-lzpb4\" (UID: \"c9099222-adcc-4af7-892f-7c28ea834fda\") " pod="metallb-system/speaker-lzpb4" Jan 31 07:48:25 crc kubenswrapper[4826]: I0131 07:48:25.565313 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c9099222-adcc-4af7-892f-7c28ea834fda-memberlist\") pod \"speaker-lzpb4\" (UID: \"c9099222-adcc-4af7-892f-7c28ea834fda\") " pod="metallb-system/speaker-lzpb4" Jan 31 07:48:25 crc kubenswrapper[4826]: I0131 07:48:25.646549 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-lzpb4" Jan 31 07:48:26 crc kubenswrapper[4826]: I0131 07:48:26.143994 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-lzpb4" event={"ID":"c9099222-adcc-4af7-892f-7c28ea834fda","Type":"ContainerStarted","Data":"7baafbc60e1a0eb05a10c0d09698d49e18ddc3a6ec2d13febf3ece179a4d69f4"} Jan 31 07:48:26 crc kubenswrapper[4826]: I0131 07:48:26.144409 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-lzpb4" event={"ID":"c9099222-adcc-4af7-892f-7c28ea834fda","Type":"ContainerStarted","Data":"8f7fba083b948980b5308cca25b485038deedc7d6e92c40c78574059e5f8a164"} Jan 31 07:48:27 crc kubenswrapper[4826]: I0131 07:48:27.151152 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-lzpb4" event={"ID":"c9099222-adcc-4af7-892f-7c28ea834fda","Type":"ContainerStarted","Data":"a791f330416a6a75aa20a756f5dc69e76d60dc1493552ea5b97e076c9ba5b9e0"} Jan 31 07:48:27 crc kubenswrapper[4826]: I0131 07:48:27.152015 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-lzpb4" Jan 31 07:48:27 crc kubenswrapper[4826]: I0131 07:48:27.167473 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-lzpb4" podStartSLOduration=4.167443848 podStartE2EDuration="4.167443848s" podCreationTimestamp="2026-01-31 07:48:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:48:27.165843052 +0000 UTC m=+739.019729401" watchObservedRunningTime="2026-01-31 07:48:27.167443848 +0000 UTC m=+739.021330207" Jan 31 07:48:27 crc kubenswrapper[4826]: I0131 07:48:27.376741 4826 patch_prober.go:28] interesting pod/machine-config-daemon-8v6ng container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 07:48:27 crc kubenswrapper[4826]: I0131 07:48:27.376824 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 07:48:32 crc kubenswrapper[4826]: I0131 07:48:32.188385 4826 generic.go:334] "Generic (PLEG): container finished" podID="d84d4444-e704-4e8e-beb5-38ad127d66d8" containerID="a7ff7269ddcd72436ab4ab8830fa3549af798313a1515b2fc2d921d3433d3017" exitCode=0 Jan 31 07:48:32 crc kubenswrapper[4826]: I0131 07:48:32.188472 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6wcdg" event={"ID":"d84d4444-e704-4e8e-beb5-38ad127d66d8","Type":"ContainerDied","Data":"a7ff7269ddcd72436ab4ab8830fa3549af798313a1515b2fc2d921d3433d3017"} Jan 31 07:48:32 crc kubenswrapper[4826]: I0131 07:48:32.190661 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-blwgn" event={"ID":"6e2beca9-fa5b-4978-b763-1b9a9283e8fc","Type":"ContainerStarted","Data":"99ee1811cd880bd5af420d8d696e2130efaf19b6958094ac3ca0016f15d7bc62"} Jan 31 07:48:32 crc kubenswrapper[4826]: I0131 07:48:32.190922 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-blwgn" Jan 31 07:48:33 crc 
kubenswrapper[4826]: I0131 07:48:33.197654 4826 generic.go:334] "Generic (PLEG): container finished" podID="d84d4444-e704-4e8e-beb5-38ad127d66d8" containerID="ea5983018ed855a9f691f2129e6cec9d0fdac4a16abf3650af8989bb9a3c6857" exitCode=0 Jan 31 07:48:33 crc kubenswrapper[4826]: I0131 07:48:33.197729 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6wcdg" event={"ID":"d84d4444-e704-4e8e-beb5-38ad127d66d8","Type":"ContainerDied","Data":"ea5983018ed855a9f691f2129e6cec9d0fdac4a16abf3650af8989bb9a3c6857"} Jan 31 07:48:33 crc kubenswrapper[4826]: I0131 07:48:33.231862 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-blwgn" podStartSLOduration=3.568957397 podStartE2EDuration="10.231847458s" podCreationTimestamp="2026-01-31 07:48:23 +0000 UTC" firstStartedPulling="2026-01-31 07:48:24.480523696 +0000 UTC m=+736.334410055" lastFinishedPulling="2026-01-31 07:48:31.143413757 +0000 UTC m=+742.997300116" observedRunningTime="2026-01-31 07:48:32.230683809 +0000 UTC m=+744.084570168" watchObservedRunningTime="2026-01-31 07:48:33.231847458 +0000 UTC m=+745.085733817" Jan 31 07:48:34 crc kubenswrapper[4826]: I0131 07:48:34.178305 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-5fjcs" Jan 31 07:48:34 crc kubenswrapper[4826]: I0131 07:48:34.224413 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6wcdg" event={"ID":"d84d4444-e704-4e8e-beb5-38ad127d66d8","Type":"ContainerDied","Data":"a1c73912bd913493903b57754ae92032d543bd463398d48f95260bff2b9a265c"} Jan 31 07:48:34 crc kubenswrapper[4826]: I0131 07:48:34.224385 4826 generic.go:334] "Generic (PLEG): container finished" podID="d84d4444-e704-4e8e-beb5-38ad127d66d8" containerID="a1c73912bd913493903b57754ae92032d543bd463398d48f95260bff2b9a265c" exitCode=0 Jan 31 07:48:35 crc kubenswrapper[4826]: I0131 07:48:35.237083 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6wcdg" event={"ID":"d84d4444-e704-4e8e-beb5-38ad127d66d8","Type":"ContainerStarted","Data":"6eac9144ca5a5115e429cb2a2967b8798b6ce8c33e267207564938b56ff5d7fe"} Jan 31 07:48:35 crc kubenswrapper[4826]: I0131 07:48:35.237502 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6wcdg" event={"ID":"d84d4444-e704-4e8e-beb5-38ad127d66d8","Type":"ContainerStarted","Data":"ca5c1e6d8abeeb09ddcb5540f44b07128c3f6fd935aa486e25453aae33a9e0a2"} Jan 31 07:48:35 crc kubenswrapper[4826]: I0131 07:48:35.237530 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-6wcdg" Jan 31 07:48:35 crc kubenswrapper[4826]: I0131 07:48:35.237550 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6wcdg" event={"ID":"d84d4444-e704-4e8e-beb5-38ad127d66d8","Type":"ContainerStarted","Data":"df569401fcf29d06749bb4c63178d645e57ca3932b3ce26ea795464811bf3a03"} Jan 31 07:48:35 crc kubenswrapper[4826]: I0131 07:48:35.237570 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6wcdg" event={"ID":"d84d4444-e704-4e8e-beb5-38ad127d66d8","Type":"ContainerStarted","Data":"c674f1ec57906e9cc807348c093d6e850d32c0a0d2225e8532fb641e27578ded"} Jan 31 07:48:35 crc kubenswrapper[4826]: I0131 07:48:35.237591 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6wcdg" 
event={"ID":"d84d4444-e704-4e8e-beb5-38ad127d66d8","Type":"ContainerStarted","Data":"08b0a6cc8555de53c9a88644f935d6983ced5280d3356223c308d9daacc9d62d"} Jan 31 07:48:35 crc kubenswrapper[4826]: I0131 07:48:35.237610 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6wcdg" event={"ID":"d84d4444-e704-4e8e-beb5-38ad127d66d8","Type":"ContainerStarted","Data":"d23c96516abcf6c558706b0c0b95baac0f5184dfa9bde9bd565471b2400c887b"} Jan 31 07:48:35 crc kubenswrapper[4826]: I0131 07:48:35.264789 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-6wcdg" podStartSLOduration=5.284176229 podStartE2EDuration="12.264768387s" podCreationTimestamp="2026-01-31 07:48:23 +0000 UTC" firstStartedPulling="2026-01-31 07:48:24.186311178 +0000 UTC m=+736.040197537" lastFinishedPulling="2026-01-31 07:48:31.166903326 +0000 UTC m=+743.020789695" observedRunningTime="2026-01-31 07:48:35.263166011 +0000 UTC m=+747.117052410" watchObservedRunningTime="2026-01-31 07:48:35.264768387 +0000 UTC m=+747.118654756" Jan 31 07:48:35 crc kubenswrapper[4826]: I0131 07:48:35.654087 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-lzpb4" Jan 31 07:48:38 crc kubenswrapper[4826]: I0131 07:48:38.544299 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-smjd8"] Jan 31 07:48:38 crc kubenswrapper[4826]: I0131 07:48:38.546105 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-smjd8" Jan 31 07:48:38 crc kubenswrapper[4826]: I0131 07:48:38.552991 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-7k62s" Jan 31 07:48:38 crc kubenswrapper[4826]: I0131 07:48:38.553915 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 31 07:48:38 crc kubenswrapper[4826]: I0131 07:48:38.558422 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 31 07:48:38 crc kubenswrapper[4826]: I0131 07:48:38.570888 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-smjd8"] Jan 31 07:48:38 crc kubenswrapper[4826]: I0131 07:48:38.677561 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zghj8\" (UniqueName: \"kubernetes.io/projected/e376cdfe-5978-4741-b5c0-ad2aeb13dbb3-kube-api-access-zghj8\") pod \"openstack-operator-index-smjd8\" (UID: \"e376cdfe-5978-4741-b5c0-ad2aeb13dbb3\") " pod="openstack-operators/openstack-operator-index-smjd8" Jan 31 07:48:38 crc kubenswrapper[4826]: I0131 07:48:38.778372 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zghj8\" (UniqueName: \"kubernetes.io/projected/e376cdfe-5978-4741-b5c0-ad2aeb13dbb3-kube-api-access-zghj8\") pod \"openstack-operator-index-smjd8\" (UID: \"e376cdfe-5978-4741-b5c0-ad2aeb13dbb3\") " pod="openstack-operators/openstack-operator-index-smjd8" Jan 31 07:48:38 crc kubenswrapper[4826]: I0131 07:48:38.801130 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zghj8\" (UniqueName: \"kubernetes.io/projected/e376cdfe-5978-4741-b5c0-ad2aeb13dbb3-kube-api-access-zghj8\") pod \"openstack-operator-index-smjd8\" (UID: \"e376cdfe-5978-4741-b5c0-ad2aeb13dbb3\") " 
pod="openstack-operators/openstack-operator-index-smjd8" Jan 31 07:48:38 crc kubenswrapper[4826]: I0131 07:48:38.883211 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-smjd8" Jan 31 07:48:39 crc kubenswrapper[4826]: I0131 07:48:39.050607 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-6wcdg" Jan 31 07:48:39 crc kubenswrapper[4826]: I0131 07:48:39.090292 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-6wcdg" Jan 31 07:48:39 crc kubenswrapper[4826]: I0131 07:48:39.287323 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-smjd8"] Jan 31 07:48:40 crc kubenswrapper[4826]: I0131 07:48:40.267019 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-smjd8" event={"ID":"e376cdfe-5978-4741-b5c0-ad2aeb13dbb3","Type":"ContainerStarted","Data":"6e455ce4392520e79364ea06cbf20948fff8fb29678d23880cddd4b874ec99d3"} Jan 31 07:48:41 crc kubenswrapper[4826]: I0131 07:48:41.714466 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-smjd8"] Jan 31 07:48:42 crc kubenswrapper[4826]: I0131 07:48:42.324464 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-znrb7"] Jan 31 07:48:42 crc kubenswrapper[4826]: I0131 07:48:42.327765 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-znrb7" Jan 31 07:48:42 crc kubenswrapper[4826]: I0131 07:48:42.330881 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-znrb7"] Jan 31 07:48:42 crc kubenswrapper[4826]: I0131 07:48:42.424858 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98bjb\" (UniqueName: \"kubernetes.io/projected/64897259-2c2d-4152-83eb-17362544f024-kube-api-access-98bjb\") pod \"openstack-operator-index-znrb7\" (UID: \"64897259-2c2d-4152-83eb-17362544f024\") " pod="openstack-operators/openstack-operator-index-znrb7" Jan 31 07:48:42 crc kubenswrapper[4826]: I0131 07:48:42.526322 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98bjb\" (UniqueName: \"kubernetes.io/projected/64897259-2c2d-4152-83eb-17362544f024-kube-api-access-98bjb\") pod \"openstack-operator-index-znrb7\" (UID: \"64897259-2c2d-4152-83eb-17362544f024\") " pod="openstack-operators/openstack-operator-index-znrb7" Jan 31 07:48:42 crc kubenswrapper[4826]: I0131 07:48:42.556545 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98bjb\" (UniqueName: \"kubernetes.io/projected/64897259-2c2d-4152-83eb-17362544f024-kube-api-access-98bjb\") pod \"openstack-operator-index-znrb7\" (UID: \"64897259-2c2d-4152-83eb-17362544f024\") " pod="openstack-operators/openstack-operator-index-znrb7" Jan 31 07:48:42 crc kubenswrapper[4826]: I0131 07:48:42.702189 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-znrb7" Jan 31 07:48:43 crc kubenswrapper[4826]: I0131 07:48:43.091985 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-znrb7"] Jan 31 07:48:43 crc kubenswrapper[4826]: I0131 07:48:43.292251 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-smjd8" event={"ID":"e376cdfe-5978-4741-b5c0-ad2aeb13dbb3","Type":"ContainerStarted","Data":"e6a082c3d0887a2901bb3c27007ca9d0ca71d7937d80da0dca503f1a31332d52"} Jan 31 07:48:43 crc kubenswrapper[4826]: I0131 07:48:43.292299 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-smjd8" podUID="e376cdfe-5978-4741-b5c0-ad2aeb13dbb3" containerName="registry-server" containerID="cri-o://e6a082c3d0887a2901bb3c27007ca9d0ca71d7937d80da0dca503f1a31332d52" gracePeriod=2 Jan 31 07:48:43 crc kubenswrapper[4826]: I0131 07:48:43.294500 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-znrb7" event={"ID":"64897259-2c2d-4152-83eb-17362544f024","Type":"ContainerStarted","Data":"dd3b8cf3a1e8d2a99bd774b95885279b6da632bd3f270f2f98a5d19fdee96936"} Jan 31 07:48:43 crc kubenswrapper[4826]: I0131 07:48:43.313587 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-smjd8" podStartSLOduration=1.79782789 podStartE2EDuration="5.313558284s" podCreationTimestamp="2026-01-31 07:48:38 +0000 UTC" firstStartedPulling="2026-01-31 07:48:39.292293744 +0000 UTC m=+751.146180113" lastFinishedPulling="2026-01-31 07:48:42.808024148 +0000 UTC m=+754.661910507" observedRunningTime="2026-01-31 07:48:43.310813945 +0000 UTC m=+755.164700304" watchObservedRunningTime="2026-01-31 07:48:43.313558284 +0000 UTC m=+755.167444683" Jan 31 07:48:43 crc kubenswrapper[4826]: I0131 07:48:43.724711 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-smjd8" Jan 31 07:48:43 crc kubenswrapper[4826]: I0131 07:48:43.845032 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zghj8\" (UniqueName: \"kubernetes.io/projected/e376cdfe-5978-4741-b5c0-ad2aeb13dbb3-kube-api-access-zghj8\") pod \"e376cdfe-5978-4741-b5c0-ad2aeb13dbb3\" (UID: \"e376cdfe-5978-4741-b5c0-ad2aeb13dbb3\") " Jan 31 07:48:43 crc kubenswrapper[4826]: I0131 07:48:43.854994 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e376cdfe-5978-4741-b5c0-ad2aeb13dbb3-kube-api-access-zghj8" (OuterVolumeSpecName: "kube-api-access-zghj8") pod "e376cdfe-5978-4741-b5c0-ad2aeb13dbb3" (UID: "e376cdfe-5978-4741-b5c0-ad2aeb13dbb3"). InnerVolumeSpecName "kube-api-access-zghj8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:48:43 crc kubenswrapper[4826]: I0131 07:48:43.909205 4826 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 31 07:48:43 crc kubenswrapper[4826]: I0131 07:48:43.947054 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zghj8\" (UniqueName: \"kubernetes.io/projected/e376cdfe-5978-4741-b5c0-ad2aeb13dbb3-kube-api-access-zghj8\") on node \"crc\" DevicePath \"\"" Jan 31 07:48:44 crc kubenswrapper[4826]: I0131 07:48:44.054480 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-6wcdg" Jan 31 07:48:44 crc kubenswrapper[4826]: I0131 07:48:44.100603 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-blwgn" Jan 31 07:48:44 crc kubenswrapper[4826]: I0131 07:48:44.301318 4826 generic.go:334] "Generic (PLEG): container finished" podID="e376cdfe-5978-4741-b5c0-ad2aeb13dbb3" containerID="e6a082c3d0887a2901bb3c27007ca9d0ca71d7937d80da0dca503f1a31332d52" exitCode=0 Jan 31 07:48:44 crc kubenswrapper[4826]: I0131 07:48:44.301364 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-smjd8" event={"ID":"e376cdfe-5978-4741-b5c0-ad2aeb13dbb3","Type":"ContainerDied","Data":"e6a082c3d0887a2901bb3c27007ca9d0ca71d7937d80da0dca503f1a31332d52"} Jan 31 07:48:44 crc kubenswrapper[4826]: I0131 07:48:44.301629 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-smjd8" event={"ID":"e376cdfe-5978-4741-b5c0-ad2aeb13dbb3","Type":"ContainerDied","Data":"6e455ce4392520e79364ea06cbf20948fff8fb29678d23880cddd4b874ec99d3"} Jan 31 07:48:44 crc kubenswrapper[4826]: I0131 07:48:44.301646 4826 scope.go:117] "RemoveContainer" containerID="e6a082c3d0887a2901bb3c27007ca9d0ca71d7937d80da0dca503f1a31332d52" Jan 31 07:48:44 crc kubenswrapper[4826]: I0131 07:48:44.301383 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-smjd8" Jan 31 07:48:44 crc kubenswrapper[4826]: I0131 07:48:44.304019 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-znrb7" event={"ID":"64897259-2c2d-4152-83eb-17362544f024","Type":"ContainerStarted","Data":"0e06ea06695a5c1724ccbbdeb6375bd17c172411a37af68e94c7a1e1521c94dd"} Jan 31 07:48:44 crc kubenswrapper[4826]: I0131 07:48:44.324233 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-znrb7" podStartSLOduration=1.9602192779999998 podStartE2EDuration="2.324213953s" podCreationTimestamp="2026-01-31 07:48:42 +0000 UTC" firstStartedPulling="2026-01-31 07:48:43.102597726 +0000 UTC m=+754.956484075" lastFinishedPulling="2026-01-31 07:48:43.466592401 +0000 UTC m=+755.320478750" observedRunningTime="2026-01-31 07:48:44.319862539 +0000 UTC m=+756.173748918" watchObservedRunningTime="2026-01-31 07:48:44.324213953 +0000 UTC m=+756.178100332" Jan 31 07:48:44 crc kubenswrapper[4826]: I0131 07:48:44.330319 4826 scope.go:117] "RemoveContainer" containerID="e6a082c3d0887a2901bb3c27007ca9d0ca71d7937d80da0dca503f1a31332d52" Jan 31 07:48:44 crc kubenswrapper[4826]: E0131 07:48:44.330781 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6a082c3d0887a2901bb3c27007ca9d0ca71d7937d80da0dca503f1a31332d52\": container with ID starting with e6a082c3d0887a2901bb3c27007ca9d0ca71d7937d80da0dca503f1a31332d52 not found: ID does not exist" containerID="e6a082c3d0887a2901bb3c27007ca9d0ca71d7937d80da0dca503f1a31332d52" Jan 31 07:48:44 crc kubenswrapper[4826]: I0131 07:48:44.330818 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6a082c3d0887a2901bb3c27007ca9d0ca71d7937d80da0dca503f1a31332d52"} err="failed to get container status \"e6a082c3d0887a2901bb3c27007ca9d0ca71d7937d80da0dca503f1a31332d52\": rpc error: code = NotFound desc = could not find container \"e6a082c3d0887a2901bb3c27007ca9d0ca71d7937d80da0dca503f1a31332d52\": container with ID starting with e6a082c3d0887a2901bb3c27007ca9d0ca71d7937d80da0dca503f1a31332d52 not found: ID does not exist" Jan 31 07:48:44 crc kubenswrapper[4826]: I0131 07:48:44.339096 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-smjd8"] Jan 31 07:48:44 crc kubenswrapper[4826]: I0131 07:48:44.344679 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-smjd8"] Jan 31 07:48:44 crc kubenswrapper[4826]: I0131 07:48:44.817250 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e376cdfe-5978-4741-b5c0-ad2aeb13dbb3" path="/var/lib/kubelet/pods/e376cdfe-5978-4741-b5c0-ad2aeb13dbb3/volumes" Jan 31 07:48:52 crc kubenswrapper[4826]: I0131 07:48:52.703155 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-znrb7" Jan 31 07:48:52 crc kubenswrapper[4826]: I0131 07:48:52.704044 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-znrb7" Jan 31 07:48:52 crc kubenswrapper[4826]: I0131 07:48:52.743559 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-znrb7" Jan 31 07:48:53 crc kubenswrapper[4826]: I0131 07:48:53.392920 4826 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-znrb7" Jan 31 07:48:54 crc kubenswrapper[4826]: I0131 07:48:54.571890 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/0d61f90025f1161856717435dfbf35c1c3e1b3b21e804d05fd9f09167dqnl5q"] Jan 31 07:48:54 crc kubenswrapper[4826]: E0131 07:48:54.572245 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e376cdfe-5978-4741-b5c0-ad2aeb13dbb3" containerName="registry-server" Jan 31 07:48:54 crc kubenswrapper[4826]: I0131 07:48:54.572266 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="e376cdfe-5978-4741-b5c0-ad2aeb13dbb3" containerName="registry-server" Jan 31 07:48:54 crc kubenswrapper[4826]: I0131 07:48:54.572452 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="e376cdfe-5978-4741-b5c0-ad2aeb13dbb3" containerName="registry-server" Jan 31 07:48:54 crc kubenswrapper[4826]: I0131 07:48:54.573741 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/0d61f90025f1161856717435dfbf35c1c3e1b3b21e804d05fd9f09167dqnl5q" Jan 31 07:48:54 crc kubenswrapper[4826]: I0131 07:48:54.577415 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-2qzzk" Jan 31 07:48:54 crc kubenswrapper[4826]: I0131 07:48:54.587174 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/0d61f90025f1161856717435dfbf35c1c3e1b3b21e804d05fd9f09167dqnl5q"] Jan 31 07:48:54 crc kubenswrapper[4826]: I0131 07:48:54.702588 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5033093f-2406-4bad-82e4-2b72dec635f5-util\") pod \"0d61f90025f1161856717435dfbf35c1c3e1b3b21e804d05fd9f09167dqnl5q\" (UID: \"5033093f-2406-4bad-82e4-2b72dec635f5\") " pod="openstack-operators/0d61f90025f1161856717435dfbf35c1c3e1b3b21e804d05fd9f09167dqnl5q" Jan 31 07:48:54 crc kubenswrapper[4826]: I0131 07:48:54.702661 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59rg7\" (UniqueName: \"kubernetes.io/projected/5033093f-2406-4bad-82e4-2b72dec635f5-kube-api-access-59rg7\") pod \"0d61f90025f1161856717435dfbf35c1c3e1b3b21e804d05fd9f09167dqnl5q\" (UID: \"5033093f-2406-4bad-82e4-2b72dec635f5\") " pod="openstack-operators/0d61f90025f1161856717435dfbf35c1c3e1b3b21e804d05fd9f09167dqnl5q" Jan 31 07:48:54 crc kubenswrapper[4826]: I0131 07:48:54.702772 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5033093f-2406-4bad-82e4-2b72dec635f5-bundle\") pod \"0d61f90025f1161856717435dfbf35c1c3e1b3b21e804d05fd9f09167dqnl5q\" (UID: \"5033093f-2406-4bad-82e4-2b72dec635f5\") " pod="openstack-operators/0d61f90025f1161856717435dfbf35c1c3e1b3b21e804d05fd9f09167dqnl5q" Jan 31 07:48:54 crc kubenswrapper[4826]: I0131 07:48:54.804257 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5033093f-2406-4bad-82e4-2b72dec635f5-util\") pod \"0d61f90025f1161856717435dfbf35c1c3e1b3b21e804d05fd9f09167dqnl5q\" (UID: \"5033093f-2406-4bad-82e4-2b72dec635f5\") " pod="openstack-operators/0d61f90025f1161856717435dfbf35c1c3e1b3b21e804d05fd9f09167dqnl5q" Jan 31 07:48:54 crc kubenswrapper[4826]: I0131 07:48:54.804321 4826 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-59rg7\" (UniqueName: \"kubernetes.io/projected/5033093f-2406-4bad-82e4-2b72dec635f5-kube-api-access-59rg7\") pod \"0d61f90025f1161856717435dfbf35c1c3e1b3b21e804d05fd9f09167dqnl5q\" (UID: \"5033093f-2406-4bad-82e4-2b72dec635f5\") " pod="openstack-operators/0d61f90025f1161856717435dfbf35c1c3e1b3b21e804d05fd9f09167dqnl5q" Jan 31 07:48:54 crc kubenswrapper[4826]: I0131 07:48:54.804408 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5033093f-2406-4bad-82e4-2b72dec635f5-bundle\") pod \"0d61f90025f1161856717435dfbf35c1c3e1b3b21e804d05fd9f09167dqnl5q\" (UID: \"5033093f-2406-4bad-82e4-2b72dec635f5\") " pod="openstack-operators/0d61f90025f1161856717435dfbf35c1c3e1b3b21e804d05fd9f09167dqnl5q" Jan 31 07:48:54 crc kubenswrapper[4826]: I0131 07:48:54.805187 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5033093f-2406-4bad-82e4-2b72dec635f5-util\") pod \"0d61f90025f1161856717435dfbf35c1c3e1b3b21e804d05fd9f09167dqnl5q\" (UID: \"5033093f-2406-4bad-82e4-2b72dec635f5\") " pod="openstack-operators/0d61f90025f1161856717435dfbf35c1c3e1b3b21e804d05fd9f09167dqnl5q" Jan 31 07:48:54 crc kubenswrapper[4826]: I0131 07:48:54.805439 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5033093f-2406-4bad-82e4-2b72dec635f5-bundle\") pod \"0d61f90025f1161856717435dfbf35c1c3e1b3b21e804d05fd9f09167dqnl5q\" (UID: \"5033093f-2406-4bad-82e4-2b72dec635f5\") " pod="openstack-operators/0d61f90025f1161856717435dfbf35c1c3e1b3b21e804d05fd9f09167dqnl5q" Jan 31 07:48:54 crc kubenswrapper[4826]: I0131 07:48:54.837963 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59rg7\" (UniqueName: \"kubernetes.io/projected/5033093f-2406-4bad-82e4-2b72dec635f5-kube-api-access-59rg7\") pod \"0d61f90025f1161856717435dfbf35c1c3e1b3b21e804d05fd9f09167dqnl5q\" (UID: \"5033093f-2406-4bad-82e4-2b72dec635f5\") " pod="openstack-operators/0d61f90025f1161856717435dfbf35c1c3e1b3b21e804d05fd9f09167dqnl5q" Jan 31 07:48:54 crc kubenswrapper[4826]: I0131 07:48:54.900501 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/0d61f90025f1161856717435dfbf35c1c3e1b3b21e804d05fd9f09167dqnl5q" Jan 31 07:48:55 crc kubenswrapper[4826]: I0131 07:48:55.306356 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/0d61f90025f1161856717435dfbf35c1c3e1b3b21e804d05fd9f09167dqnl5q"] Jan 31 07:48:55 crc kubenswrapper[4826]: W0131 07:48:55.318535 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5033093f_2406_4bad_82e4_2b72dec635f5.slice/crio-7e96e14626eaed4bd049145cc5a2a2e10c1f424e77800affb7dc97f82931a6c3 WatchSource:0}: Error finding container 7e96e14626eaed4bd049145cc5a2a2e10c1f424e77800affb7dc97f82931a6c3: Status 404 returned error can't find the container with id 7e96e14626eaed4bd049145cc5a2a2e10c1f424e77800affb7dc97f82931a6c3 Jan 31 07:48:55 crc kubenswrapper[4826]: I0131 07:48:55.380713 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0d61f90025f1161856717435dfbf35c1c3e1b3b21e804d05fd9f09167dqnl5q" event={"ID":"5033093f-2406-4bad-82e4-2b72dec635f5","Type":"ContainerStarted","Data":"7e96e14626eaed4bd049145cc5a2a2e10c1f424e77800affb7dc97f82931a6c3"} Jan 31 07:48:56 crc kubenswrapper[4826]: I0131 07:48:56.392229 4826 generic.go:334] "Generic (PLEG): container finished" podID="5033093f-2406-4bad-82e4-2b72dec635f5" containerID="f78339a9ab2907719afc1bceb7315360a7190f95b2b942756338bfeaf3bbf906" exitCode=0 Jan 31 07:48:56 crc kubenswrapper[4826]: I0131 07:48:56.392306 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0d61f90025f1161856717435dfbf35c1c3e1b3b21e804d05fd9f09167dqnl5q" event={"ID":"5033093f-2406-4bad-82e4-2b72dec635f5","Type":"ContainerDied","Data":"f78339a9ab2907719afc1bceb7315360a7190f95b2b942756338bfeaf3bbf906"} Jan 31 07:48:57 crc kubenswrapper[4826]: I0131 07:48:57.377756 4826 patch_prober.go:28] interesting pod/machine-config-daemon-8v6ng container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 07:48:57 crc kubenswrapper[4826]: I0131 07:48:57.378205 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 07:48:57 crc kubenswrapper[4826]: I0131 07:48:57.409322 4826 generic.go:334] "Generic (PLEG): container finished" podID="5033093f-2406-4bad-82e4-2b72dec635f5" containerID="ecc148cc1600278b04439080accf6b2423c02f22c3d5d96ee6bda7b8760a9396" exitCode=0 Jan 31 07:48:57 crc kubenswrapper[4826]: I0131 07:48:57.409400 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0d61f90025f1161856717435dfbf35c1c3e1b3b21e804d05fd9f09167dqnl5q" event={"ID":"5033093f-2406-4bad-82e4-2b72dec635f5","Type":"ContainerDied","Data":"ecc148cc1600278b04439080accf6b2423c02f22c3d5d96ee6bda7b8760a9396"} Jan 31 07:48:58 crc kubenswrapper[4826]: I0131 07:48:58.421719 4826 generic.go:334] "Generic (PLEG): container finished" podID="5033093f-2406-4bad-82e4-2b72dec635f5" containerID="7bb9405fe08b86b7846b6bd31416cdb9941467724927162cc3475523c6405095" exitCode=0 Jan 31 07:48:58 crc kubenswrapper[4826]: I0131 07:48:58.421762 4826 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/0d61f90025f1161856717435dfbf35c1c3e1b3b21e804d05fd9f09167dqnl5q" event={"ID":"5033093f-2406-4bad-82e4-2b72dec635f5","Type":"ContainerDied","Data":"7bb9405fe08b86b7846b6bd31416cdb9941467724927162cc3475523c6405095"} Jan 31 07:48:59 crc kubenswrapper[4826]: I0131 07:48:59.723060 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/0d61f90025f1161856717435dfbf35c1c3e1b3b21e804d05fd9f09167dqnl5q" Jan 31 07:48:59 crc kubenswrapper[4826]: I0131 07:48:59.775920 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5033093f-2406-4bad-82e4-2b72dec635f5-util\") pod \"5033093f-2406-4bad-82e4-2b72dec635f5\" (UID: \"5033093f-2406-4bad-82e4-2b72dec635f5\") " Jan 31 07:48:59 crc kubenswrapper[4826]: I0131 07:48:59.776027 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59rg7\" (UniqueName: \"kubernetes.io/projected/5033093f-2406-4bad-82e4-2b72dec635f5-kube-api-access-59rg7\") pod \"5033093f-2406-4bad-82e4-2b72dec635f5\" (UID: \"5033093f-2406-4bad-82e4-2b72dec635f5\") " Jan 31 07:48:59 crc kubenswrapper[4826]: I0131 07:48:59.776108 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5033093f-2406-4bad-82e4-2b72dec635f5-bundle\") pod \"5033093f-2406-4bad-82e4-2b72dec635f5\" (UID: \"5033093f-2406-4bad-82e4-2b72dec635f5\") " Jan 31 07:48:59 crc kubenswrapper[4826]: I0131 07:48:59.776716 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5033093f-2406-4bad-82e4-2b72dec635f5-bundle" (OuterVolumeSpecName: "bundle") pod "5033093f-2406-4bad-82e4-2b72dec635f5" (UID: "5033093f-2406-4bad-82e4-2b72dec635f5"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:48:59 crc kubenswrapper[4826]: I0131 07:48:59.781240 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5033093f-2406-4bad-82e4-2b72dec635f5-kube-api-access-59rg7" (OuterVolumeSpecName: "kube-api-access-59rg7") pod "5033093f-2406-4bad-82e4-2b72dec635f5" (UID: "5033093f-2406-4bad-82e4-2b72dec635f5"). InnerVolumeSpecName "kube-api-access-59rg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:48:59 crc kubenswrapper[4826]: I0131 07:48:59.790636 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5033093f-2406-4bad-82e4-2b72dec635f5-util" (OuterVolumeSpecName: "util") pod "5033093f-2406-4bad-82e4-2b72dec635f5" (UID: "5033093f-2406-4bad-82e4-2b72dec635f5"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:48:59 crc kubenswrapper[4826]: I0131 07:48:59.876928 4826 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5033093f-2406-4bad-82e4-2b72dec635f5-util\") on node \"crc\" DevicePath \"\"" Jan 31 07:48:59 crc kubenswrapper[4826]: I0131 07:48:59.876984 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59rg7\" (UniqueName: \"kubernetes.io/projected/5033093f-2406-4bad-82e4-2b72dec635f5-kube-api-access-59rg7\") on node \"crc\" DevicePath \"\"" Jan 31 07:48:59 crc kubenswrapper[4826]: I0131 07:48:59.876999 4826 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5033093f-2406-4bad-82e4-2b72dec635f5-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:49:00 crc kubenswrapper[4826]: I0131 07:49:00.441561 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0d61f90025f1161856717435dfbf35c1c3e1b3b21e804d05fd9f09167dqnl5q" event={"ID":"5033093f-2406-4bad-82e4-2b72dec635f5","Type":"ContainerDied","Data":"7e96e14626eaed4bd049145cc5a2a2e10c1f424e77800affb7dc97f82931a6c3"} Jan 31 07:49:00 crc kubenswrapper[4826]: I0131 07:49:00.441608 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e96e14626eaed4bd049145cc5a2a2e10c1f424e77800affb7dc97f82931a6c3" Jan 31 07:49:00 crc kubenswrapper[4826]: I0131 07:49:00.441658 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/0d61f90025f1161856717435dfbf35c1c3e1b3b21e804d05fd9f09167dqnl5q" Jan 31 07:49:06 crc kubenswrapper[4826]: I0131 07:49:06.575700 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-5ffcf8f8f6-8hdbv"] Jan 31 07:49:06 crc kubenswrapper[4826]: E0131 07:49:06.576635 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5033093f-2406-4bad-82e4-2b72dec635f5" containerName="pull" Jan 31 07:49:06 crc kubenswrapper[4826]: I0131 07:49:06.576652 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="5033093f-2406-4bad-82e4-2b72dec635f5" containerName="pull" Jan 31 07:49:06 crc kubenswrapper[4826]: E0131 07:49:06.576668 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5033093f-2406-4bad-82e4-2b72dec635f5" containerName="util" Jan 31 07:49:06 crc kubenswrapper[4826]: I0131 07:49:06.576676 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="5033093f-2406-4bad-82e4-2b72dec635f5" containerName="util" Jan 31 07:49:06 crc kubenswrapper[4826]: E0131 07:49:06.576687 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5033093f-2406-4bad-82e4-2b72dec635f5" containerName="extract" Jan 31 07:49:06 crc kubenswrapper[4826]: I0131 07:49:06.576696 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="5033093f-2406-4bad-82e4-2b72dec635f5" containerName="extract" Jan 31 07:49:06 crc kubenswrapper[4826]: I0131 07:49:06.576832 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="5033093f-2406-4bad-82e4-2b72dec635f5" containerName="extract" Jan 31 07:49:06 crc kubenswrapper[4826]: I0131 07:49:06.577313 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5ffcf8f8f6-8hdbv" Jan 31 07:49:06 crc kubenswrapper[4826]: W0131 07:49:06.579483 4826 reflector.go:561] object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-rmmsf": failed to list *v1.Secret: secrets "openstack-operator-controller-init-dockercfg-rmmsf" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack-operators": no relationship found between node 'crc' and this object Jan 31 07:49:06 crc kubenswrapper[4826]: E0131 07:49:06.579672 4826 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"openstack-operator-controller-init-dockercfg-rmmsf\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openstack-operator-controller-init-dockercfg-rmmsf\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack-operators\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 07:49:06 crc kubenswrapper[4826]: I0131 07:49:06.608794 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5ffcf8f8f6-8hdbv"] Jan 31 07:49:06 crc kubenswrapper[4826]: I0131 07:49:06.767477 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mstwh\" (UniqueName: \"kubernetes.io/projected/3aa77b6e-e6e9-41c5-8217-10a290abd18a-kube-api-access-mstwh\") pod \"openstack-operator-controller-init-5ffcf8f8f6-8hdbv\" (UID: \"3aa77b6e-e6e9-41c5-8217-10a290abd18a\") " pod="openstack-operators/openstack-operator-controller-init-5ffcf8f8f6-8hdbv" Jan 31 07:49:06 crc kubenswrapper[4826]: I0131 07:49:06.868722 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mstwh\" (UniqueName: \"kubernetes.io/projected/3aa77b6e-e6e9-41c5-8217-10a290abd18a-kube-api-access-mstwh\") pod \"openstack-operator-controller-init-5ffcf8f8f6-8hdbv\" (UID: \"3aa77b6e-e6e9-41c5-8217-10a290abd18a\") " pod="openstack-operators/openstack-operator-controller-init-5ffcf8f8f6-8hdbv" Jan 31 07:49:06 crc kubenswrapper[4826]: I0131 07:49:06.887595 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mstwh\" (UniqueName: \"kubernetes.io/projected/3aa77b6e-e6e9-41c5-8217-10a290abd18a-kube-api-access-mstwh\") pod \"openstack-operator-controller-init-5ffcf8f8f6-8hdbv\" (UID: \"3aa77b6e-e6e9-41c5-8217-10a290abd18a\") " pod="openstack-operators/openstack-operator-controller-init-5ffcf8f8f6-8hdbv" Jan 31 07:49:07 crc kubenswrapper[4826]: I0131 07:49:07.590914 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-rmmsf" Jan 31 07:49:07 crc kubenswrapper[4826]: I0131 07:49:07.591742 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5ffcf8f8f6-8hdbv" Jan 31 07:49:08 crc kubenswrapper[4826]: I0131 07:49:08.274652 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5ffcf8f8f6-8hdbv"] Jan 31 07:49:08 crc kubenswrapper[4826]: I0131 07:49:08.510473 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5ffcf8f8f6-8hdbv" event={"ID":"3aa77b6e-e6e9-41c5-8217-10a290abd18a","Type":"ContainerStarted","Data":"a91e3b18dbb3931b41295589c3860120e00bf05a6037b61068569977934cc4c7"} Jan 31 07:49:12 crc kubenswrapper[4826]: I0131 07:49:12.537130 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5ffcf8f8f6-8hdbv" event={"ID":"3aa77b6e-e6e9-41c5-8217-10a290abd18a","Type":"ContainerStarted","Data":"a6915058554bf99a93b9edb4aa1789ddb75649033a239fe4fc1f7d575d22de64"} Jan 31 07:49:12 crc kubenswrapper[4826]: I0131 07:49:12.537640 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-5ffcf8f8f6-8hdbv" Jan 31 07:49:12 crc kubenswrapper[4826]: I0131 07:49:12.565854 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-5ffcf8f8f6-8hdbv" podStartSLOduration=3.07109404 podStartE2EDuration="6.565838686s" podCreationTimestamp="2026-01-31 07:49:06 +0000 UTC" firstStartedPulling="2026-01-31 07:49:08.287046654 +0000 UTC m=+780.140933013" lastFinishedPulling="2026-01-31 07:49:11.78179128 +0000 UTC m=+783.635677659" observedRunningTime="2026-01-31 07:49:12.563021096 +0000 UTC m=+784.416907455" watchObservedRunningTime="2026-01-31 07:49:12.565838686 +0000 UTC m=+784.419725045" Jan 31 07:49:17 crc kubenswrapper[4826]: I0131 07:49:17.595725 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-5ffcf8f8f6-8hdbv" Jan 31 07:49:27 crc kubenswrapper[4826]: I0131 07:49:27.376915 4826 patch_prober.go:28] interesting pod/machine-config-daemon-8v6ng container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 07:49:27 crc kubenswrapper[4826]: I0131 07:49:27.377572 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 07:49:27 crc kubenswrapper[4826]: I0131 07:49:27.377633 4826 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" Jan 31 07:49:27 crc kubenswrapper[4826]: I0131 07:49:27.378396 4826 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9f90fb78fab9497ee3e1bd264894acb4bbed634bf52113ea4ba5640cbade7719"} pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 07:49:27 crc kubenswrapper[4826]: I0131 07:49:27.378452 4826 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" containerID="cri-o://9f90fb78fab9497ee3e1bd264894acb4bbed634bf52113ea4ba5640cbade7719" gracePeriod=600 Jan 31 07:49:28 crc kubenswrapper[4826]: I0131 07:49:28.635902 4826 generic.go:334] "Generic (PLEG): container finished" podID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerID="9f90fb78fab9497ee3e1bd264894acb4bbed634bf52113ea4ba5640cbade7719" exitCode=0 Jan 31 07:49:28 crc kubenswrapper[4826]: I0131 07:49:28.635982 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" event={"ID":"ed10f53b-565a-4d14-a1d8-feabc15f08ea","Type":"ContainerDied","Data":"9f90fb78fab9497ee3e1bd264894acb4bbed634bf52113ea4ba5640cbade7719"} Jan 31 07:49:28 crc kubenswrapper[4826]: I0131 07:49:28.636544 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" event={"ID":"ed10f53b-565a-4d14-a1d8-feabc15f08ea","Type":"ContainerStarted","Data":"c93a1dd8075ab41380245ff46508ce96eef07210ffe502888ff235cf0e5e7fc4"} Jan 31 07:49:28 crc kubenswrapper[4826]: I0131 07:49:28.636572 4826 scope.go:117] "RemoveContainer" containerID="96981d99c8cc310fa2d207d4a923209b26ac15505f9c721483f2cdae8f82dca1" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.124243 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-fzkjg"] Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.132256 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-fzkjg" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.135065 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-h2clw" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.148793 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-fzkjg"] Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.158207 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-469df"] Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.159209 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-469df" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.162392 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-v4dnb" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.173328 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-vrsm8"] Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.174245 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-vrsm8" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.181593 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-95ld6" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.191774 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-469df"] Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.205454 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-vrsm8"] Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.216596 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dz76\" (UniqueName: \"kubernetes.io/projected/4ad0581d-4c4f-45b8-b274-cba147fb1f0f-kube-api-access-6dz76\") pod \"cinder-operator-controller-manager-8d874c8fc-469df\" (UID: \"4ad0581d-4c4f-45b8-b274-cba147fb1f0f\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-469df" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.216652 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pc9p\" (UniqueName: \"kubernetes.io/projected/bbc23de1-ea6a-4ec2-acb0-4ce5b7a6260a-kube-api-access-2pc9p\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-fzkjg\" (UID: \"bbc23de1-ea6a-4ec2-acb0-4ce5b7a6260a\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-fzkjg" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.221032 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-855qv"] Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.221850 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-855qv" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.225768 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-jbzkc" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.240599 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-bd4z5"] Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.244209 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-bd4z5" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.245992 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-855qv"] Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.247231 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-hjj9q" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.250446 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-bd4z5"] Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.253993 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-c6qzg"] Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.254742 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-c6qzg" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.260415 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-25mfh" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.271137 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-tdbcd"] Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.272413 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-tdbcd" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.275139 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-c6qzg"] Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.277371 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-zw4gb" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.277556 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.279940 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-tdbcd"] Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.288266 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-fbkx2"] Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.289506 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-fbkx2" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.293677 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-c4t92" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.310105 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-fn74b"] Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.312225 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-fn74b" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.316243 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-cqjrt" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.318153 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pc9p\" (UniqueName: \"kubernetes.io/projected/bbc23de1-ea6a-4ec2-acb0-4ce5b7a6260a-kube-api-access-2pc9p\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-fzkjg\" (UID: \"bbc23de1-ea6a-4ec2-acb0-4ce5b7a6260a\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-fzkjg" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.318200 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p27tj\" (UniqueName: \"kubernetes.io/projected/ae965697-1a1d-498a-be01-35faefac5df1-kube-api-access-p27tj\") pod \"horizon-operator-controller-manager-5fb775575f-c6qzg\" (UID: \"ae965697-1a1d-498a-be01-35faefac5df1\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-c6qzg" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.318262 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2kbx\" (UniqueName: \"kubernetes.io/projected/bbea7ad0-1edf-41aa-b677-bcaf9ccf72a3-kube-api-access-n2kbx\") pod \"glance-operator-controller-manager-8886f4c47-855qv\" (UID: \"bbea7ad0-1edf-41aa-b677-bcaf9ccf72a3\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-855qv" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.318375 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns97q\" (UniqueName: \"kubernetes.io/projected/2ca6db75-d35b-4d27-afb7-45698c422257-kube-api-access-ns97q\") pod \"designate-operator-controller-manager-6d9697b7f4-vrsm8\" (UID: \"2ca6db75-d35b-4d27-afb7-45698c422257\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-vrsm8" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.318519 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r49m\" (UniqueName: \"kubernetes.io/projected/61b6715e-7da4-4f70-8e51-e4cc36c046f6-kube-api-access-6r49m\") pod \"heat-operator-controller-manager-69d6db494d-bd4z5\" (UID: \"61b6715e-7da4-4f70-8e51-e4cc36c046f6\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-bd4z5" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.318717 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dz76\" (UniqueName: \"kubernetes.io/projected/4ad0581d-4c4f-45b8-b274-cba147fb1f0f-kube-api-access-6dz76\") pod \"cinder-operator-controller-manager-8d874c8fc-469df\" (UID: \"4ad0581d-4c4f-45b8-b274-cba147fb1f0f\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-469df" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.319326 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-k9b2p"] Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.323644 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-k9b2p" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.327300 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-krt6b" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.333081 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-fbkx2"] Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.342737 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-fn74b"] Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.352697 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pc9p\" (UniqueName: \"kubernetes.io/projected/bbc23de1-ea6a-4ec2-acb0-4ce5b7a6260a-kube-api-access-2pc9p\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-fzkjg\" (UID: \"bbc23de1-ea6a-4ec2-acb0-4ce5b7a6260a\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-fzkjg" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.354504 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dz76\" (UniqueName: \"kubernetes.io/projected/4ad0581d-4c4f-45b8-b274-cba147fb1f0f-kube-api-access-6dz76\") pod \"cinder-operator-controller-manager-8d874c8fc-469df\" (UID: \"4ad0581d-4c4f-45b8-b274-cba147fb1f0f\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-469df" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.368222 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-k9b2p"] Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.384004 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-tmrnj"] Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.384815 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-tmrnj" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.388880 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-xh2sn" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.406866 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-x226q"] Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.407605 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-x226q" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.411799 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-gpvq5" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.421096 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2psc\" (UniqueName: \"kubernetes.io/projected/19e5f098-b188-4252-8e1f-8db1f38dbb75-kube-api-access-j2psc\") pod \"ironic-operator-controller-manager-5f4b8bd54d-fbkx2\" (UID: \"19e5f098-b188-4252-8e1f-8db1f38dbb75\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-fbkx2" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.421142 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2kbx\" (UniqueName: \"kubernetes.io/projected/bbea7ad0-1edf-41aa-b677-bcaf9ccf72a3-kube-api-access-n2kbx\") pod \"glance-operator-controller-manager-8886f4c47-855qv\" (UID: \"bbea7ad0-1edf-41aa-b677-bcaf9ccf72a3\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-855qv" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.421165 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ns97q\" (UniqueName: \"kubernetes.io/projected/2ca6db75-d35b-4d27-afb7-45698c422257-kube-api-access-ns97q\") pod \"designate-operator-controller-manager-6d9697b7f4-vrsm8\" (UID: \"2ca6db75-d35b-4d27-afb7-45698c422257\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-vrsm8" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.421208 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6r49m\" (UniqueName: \"kubernetes.io/projected/61b6715e-7da4-4f70-8e51-e4cc36c046f6-kube-api-access-6r49m\") pod \"heat-operator-controller-manager-69d6db494d-bd4z5\" (UID: \"61b6715e-7da4-4f70-8e51-e4cc36c046f6\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-bd4z5" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.421229 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvtcb\" (UniqueName: \"kubernetes.io/projected/f7d778f6-12f8-4d10-b106-579471ac576f-kube-api-access-cvtcb\") pod \"infra-operator-controller-manager-79955696d6-tdbcd\" (UID: \"f7d778f6-12f8-4d10-b106-579471ac576f\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-tdbcd" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.421248 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj89m\" (UniqueName: \"kubernetes.io/projected/516edc1f-8934-408f-a3f2-15e35f0de6bc-kube-api-access-zj89m\") pod \"manila-operator-controller-manager-7dd968899f-k9b2p\" (UID: \"516edc1f-8934-408f-a3f2-15e35f0de6bc\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-k9b2p" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.421264 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f7d778f6-12f8-4d10-b106-579471ac576f-cert\") pod \"infra-operator-controller-manager-79955696d6-tdbcd\" (UID: \"f7d778f6-12f8-4d10-b106-579471ac576f\") " 
pod="openstack-operators/infra-operator-controller-manager-79955696d6-tdbcd" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.421283 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r66s\" (UniqueName: \"kubernetes.io/projected/08a289e4-ca1e-4687-834a-941d23f7f292-kube-api-access-7r66s\") pod \"keystone-operator-controller-manager-84f48565d4-fn74b\" (UID: \"08a289e4-ca1e-4687-834a-941d23f7f292\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-fn74b" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.421305 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p27tj\" (UniqueName: \"kubernetes.io/projected/ae965697-1a1d-498a-be01-35faefac5df1-kube-api-access-p27tj\") pod \"horizon-operator-controller-manager-5fb775575f-c6qzg\" (UID: \"ae965697-1a1d-498a-be01-35faefac5df1\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-c6qzg" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.425873 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-x226q"] Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.449698 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-tmrnj"] Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.454648 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ns97q\" (UniqueName: \"kubernetes.io/projected/2ca6db75-d35b-4d27-afb7-45698c422257-kube-api-access-ns97q\") pod \"designate-operator-controller-manager-6d9697b7f4-vrsm8\" (UID: \"2ca6db75-d35b-4d27-afb7-45698c422257\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-vrsm8" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.459478 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-p2jb4"] Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.460271 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-p2jb4" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.471274 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-kdmbn" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.471986 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-fzkjg" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.473247 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-p2jb4"] Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.486518 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6r49m\" (UniqueName: \"kubernetes.io/projected/61b6715e-7da4-4f70-8e51-e4cc36c046f6-kube-api-access-6r49m\") pod \"heat-operator-controller-manager-69d6db494d-bd4z5\" (UID: \"61b6715e-7da4-4f70-8e51-e4cc36c046f6\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-bd4z5" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.486724 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2kbx\" (UniqueName: \"kubernetes.io/projected/bbea7ad0-1edf-41aa-b677-bcaf9ccf72a3-kube-api-access-n2kbx\") pod \"glance-operator-controller-manager-8886f4c47-855qv\" (UID: \"bbea7ad0-1edf-41aa-b677-bcaf9ccf72a3\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-855qv" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.492325 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-469df" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.495568 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p27tj\" (UniqueName: \"kubernetes.io/projected/ae965697-1a1d-498a-be01-35faefac5df1-kube-api-access-p27tj\") pod \"horizon-operator-controller-manager-5fb775575f-c6qzg\" (UID: \"ae965697-1a1d-498a-be01-35faefac5df1\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-c6qzg" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.506750 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-7w6rc"] Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.507570 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-7w6rc" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.508059 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-vrsm8" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.516796 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-shg4j" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.526959 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k5jn\" (UniqueName: \"kubernetes.io/projected/8b2f20a7-2570-4a15-b86c-bdfdbd69c529-kube-api-access-6k5jn\") pod \"nova-operator-controller-manager-55bff696bd-p2jb4\" (UID: \"8b2f20a7-2570-4a15-b86c-bdfdbd69c529\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-p2jb4" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.527026 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2psc\" (UniqueName: \"kubernetes.io/projected/19e5f098-b188-4252-8e1f-8db1f38dbb75-kube-api-access-j2psc\") pod \"ironic-operator-controller-manager-5f4b8bd54d-fbkx2\" (UID: \"19e5f098-b188-4252-8e1f-8db1f38dbb75\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-fbkx2" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.527063 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs29q\" (UniqueName: \"kubernetes.io/projected/fbd1ba04-6613-4a12-9009-088ebba2e643-kube-api-access-qs29q\") pod \"neutron-operator-controller-manager-585dbc889-x226q\" (UID: \"fbd1ba04-6613-4a12-9009-088ebba2e643\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-x226q" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.527108 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssvfb\" (UniqueName: \"kubernetes.io/projected/34af55a7-61a5-41e7-a2da-7c631d075cb0-kube-api-access-ssvfb\") pod \"mariadb-operator-controller-manager-67bf948998-tmrnj\" (UID: \"34af55a7-61a5-41e7-a2da-7c631d075cb0\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-tmrnj" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.527147 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvtcb\" (UniqueName: \"kubernetes.io/projected/f7d778f6-12f8-4d10-b106-579471ac576f-kube-api-access-cvtcb\") pod \"infra-operator-controller-manager-79955696d6-tdbcd\" (UID: \"f7d778f6-12f8-4d10-b106-579471ac576f\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-tdbcd" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.527170 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zj89m\" (UniqueName: \"kubernetes.io/projected/516edc1f-8934-408f-a3f2-15e35f0de6bc-kube-api-access-zj89m\") pod \"manila-operator-controller-manager-7dd968899f-k9b2p\" (UID: \"516edc1f-8934-408f-a3f2-15e35f0de6bc\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-k9b2p" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.527192 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f7d778f6-12f8-4d10-b106-579471ac576f-cert\") pod \"infra-operator-controller-manager-79955696d6-tdbcd\" (UID: \"f7d778f6-12f8-4d10-b106-579471ac576f\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-tdbcd" Jan 31 07:49:37 
crc kubenswrapper[4826]: I0131 07:49:37.527216 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7r66s\" (UniqueName: \"kubernetes.io/projected/08a289e4-ca1e-4687-834a-941d23f7f292-kube-api-access-7r66s\") pod \"keystone-operator-controller-manager-84f48565d4-fn74b\" (UID: \"08a289e4-ca1e-4687-834a-941d23f7f292\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-fn74b" Jan 31 07:49:37 crc kubenswrapper[4826]: E0131 07:49:37.527905 4826 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 31 07:49:37 crc kubenswrapper[4826]: E0131 07:49:37.527954 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f7d778f6-12f8-4d10-b106-579471ac576f-cert podName:f7d778f6-12f8-4d10-b106-579471ac576f nodeName:}" failed. No retries permitted until 2026-01-31 07:49:38.027936992 +0000 UTC m=+809.881823351 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f7d778f6-12f8-4d10-b106-579471ac576f-cert") pod "infra-operator-controller-manager-79955696d6-tdbcd" (UID: "f7d778f6-12f8-4d10-b106-579471ac576f") : secret "infra-operator-webhook-server-cert" not found Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.537516 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-7w6rc"] Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.552863 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-855qv" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.559581 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2psc\" (UniqueName: \"kubernetes.io/projected/19e5f098-b188-4252-8e1f-8db1f38dbb75-kube-api-access-j2psc\") pod \"ironic-operator-controller-manager-5f4b8bd54d-fbkx2\" (UID: \"19e5f098-b188-4252-8e1f-8db1f38dbb75\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-fbkx2" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.559910 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7r66s\" (UniqueName: \"kubernetes.io/projected/08a289e4-ca1e-4687-834a-941d23f7f292-kube-api-access-7r66s\") pod \"keystone-operator-controller-manager-84f48565d4-fn74b\" (UID: \"08a289e4-ca1e-4687-834a-941d23f7f292\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-fn74b" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.563816 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-bd4z5" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.577907 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvtcb\" (UniqueName: \"kubernetes.io/projected/f7d778f6-12f8-4d10-b106-579471ac576f-kube-api-access-cvtcb\") pod \"infra-operator-controller-manager-79955696d6-tdbcd\" (UID: \"f7d778f6-12f8-4d10-b106-579471ac576f\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-tdbcd" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.581592 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zj89m\" (UniqueName: \"kubernetes.io/projected/516edc1f-8934-408f-a3f2-15e35f0de6bc-kube-api-access-zj89m\") pod \"manila-operator-controller-manager-7dd968899f-k9b2p\" (UID: \"516edc1f-8934-408f-a3f2-15e35f0de6bc\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-k9b2p" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.582023 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-c6qzg" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.627757 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-gt2wd"] Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.629396 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-gt2wd" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.633095 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssvfb\" (UniqueName: \"kubernetes.io/projected/34af55a7-61a5-41e7-a2da-7c631d075cb0-kube-api-access-ssvfb\") pod \"mariadb-operator-controller-manager-67bf948998-tmrnj\" (UID: \"34af55a7-61a5-41e7-a2da-7c631d075cb0\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-tmrnj" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.633177 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r54z\" (UniqueName: \"kubernetes.io/projected/8240a7c4-2e26-46f9-9c1d-0d1d9951c2fb-kube-api-access-5r54z\") pod \"octavia-operator-controller-manager-6687f8d877-7w6rc\" (UID: \"8240a7c4-2e26-46f9-9c1d-0d1d9951c2fb\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-7w6rc" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.633305 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6k5jn\" (UniqueName: \"kubernetes.io/projected/8b2f20a7-2570-4a15-b86c-bdfdbd69c529-kube-api-access-6k5jn\") pod \"nova-operator-controller-manager-55bff696bd-p2jb4\" (UID: \"8b2f20a7-2570-4a15-b86c-bdfdbd69c529\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-p2jb4" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.633399 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-4wwhl" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.633398 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qs29q\" (UniqueName: \"kubernetes.io/projected/fbd1ba04-6613-4a12-9009-088ebba2e643-kube-api-access-qs29q\") pod \"neutron-operator-controller-manager-585dbc889-x226q\" 
(UID: \"fbd1ba04-6613-4a12-9009-088ebba2e643\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-x226q" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.635170 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-fbkx2" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.650421 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-fn74b" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.687691 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6k5jn\" (UniqueName: \"kubernetes.io/projected/8b2f20a7-2570-4a15-b86c-bdfdbd69c529-kube-api-access-6k5jn\") pod \"nova-operator-controller-manager-55bff696bd-p2jb4\" (UID: \"8b2f20a7-2570-4a15-b86c-bdfdbd69c529\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-p2jb4" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.692393 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qs29q\" (UniqueName: \"kubernetes.io/projected/fbd1ba04-6613-4a12-9009-088ebba2e643-kube-api-access-qs29q\") pod \"neutron-operator-controller-manager-585dbc889-x226q\" (UID: \"fbd1ba04-6613-4a12-9009-088ebba2e643\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-x226q" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.692478 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d7flmn"] Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.692523 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssvfb\" (UniqueName: \"kubernetes.io/projected/34af55a7-61a5-41e7-a2da-7c631d075cb0-kube-api-access-ssvfb\") pod \"mariadb-operator-controller-manager-67bf948998-tmrnj\" (UID: \"34af55a7-61a5-41e7-a2da-7c631d075cb0\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-tmrnj" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.693137 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-k9b2p" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.693697 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d7flmn" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.711444 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-p64pd" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.711628 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.712457 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-r6php"] Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.713265 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-r6php" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.714448 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-kfdh8" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.727213 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-gt2wd"] Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.747330 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-r6php"] Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.751415 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-x226q" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.752222 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p9sz\" (UniqueName: \"kubernetes.io/projected/999edaa2-f097-4789-a458-b309a42124a5-kube-api-access-2p9sz\") pod \"ovn-operator-controller-manager-788c46999f-gt2wd\" (UID: \"999edaa2-f097-4789-a458-b309a42124a5\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-gt2wd" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.752251 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r54z\" (UniqueName: \"kubernetes.io/projected/8240a7c4-2e26-46f9-9c1d-0d1d9951c2fb-kube-api-access-5r54z\") pod \"octavia-operator-controller-manager-6687f8d877-7w6rc\" (UID: \"8240a7c4-2e26-46f9-9c1d-0d1d9951c2fb\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-7w6rc" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.753101 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-tmrnj" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.773668 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d7flmn"] Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.782656 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-qrp6z"] Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.783513 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-qrp6z" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.798128 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-58ds5" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.799174 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-qrp6z"] Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.799427 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r54z\" (UniqueName: \"kubernetes.io/projected/8240a7c4-2e26-46f9-9c1d-0d1d9951c2fb-kube-api-access-5r54z\") pod \"octavia-operator-controller-manager-6687f8d877-7w6rc\" (UID: \"8240a7c4-2e26-46f9-9c1d-0d1d9951c2fb\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-7w6rc" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.801898 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-mhhr4"] Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.803068 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-mhhr4" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.809167 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-8zz6s" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.825031 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-mhhr4"] Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.848662 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-84d775b94d-x84xp"] Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.850034 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-84d775b94d-x84xp" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.865400 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/724a7dc5-6b24-44fa-a35a-4aea83f023c7-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d7flmn\" (UID: \"724a7dc5-6b24-44fa-a35a-4aea83f023c7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d7flmn" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.865440 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2p9sz\" (UniqueName: \"kubernetes.io/projected/999edaa2-f097-4789-a458-b309a42124a5-kube-api-access-2p9sz\") pod \"ovn-operator-controller-manager-788c46999f-gt2wd\" (UID: \"999edaa2-f097-4789-a458-b309a42124a5\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-gt2wd" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.865496 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27xnk\" (UniqueName: \"kubernetes.io/projected/91440ebf-dd66-4fd4-a4c4-b027138ad77c-kube-api-access-27xnk\") pod \"swift-operator-controller-manager-68fc8c869-qrp6z\" (UID: \"91440ebf-dd66-4fd4-a4c4-b027138ad77c\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-qrp6z" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.865533 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q797f\" (UniqueName: \"kubernetes.io/projected/724a7dc5-6b24-44fa-a35a-4aea83f023c7-kube-api-access-q797f\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d7flmn\" (UID: \"724a7dc5-6b24-44fa-a35a-4aea83f023c7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d7flmn" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.865595 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74frl\" (UniqueName: \"kubernetes.io/projected/9e29b881-a227-4aef-888c-6676c6cf16b0-kube-api-access-74frl\") pod \"placement-operator-controller-manager-5b964cf4cd-r6php\" (UID: \"9e29b881-a227-4aef-888c-6676c6cf16b0\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-r6php" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.866871 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-cfkpr" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.880808 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-84d775b94d-x84xp"] Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.903822 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2p9sz\" (UniqueName: \"kubernetes.io/projected/999edaa2-f097-4789-a458-b309a42124a5-kube-api-access-2p9sz\") pod \"ovn-operator-controller-manager-788c46999f-gt2wd\" (UID: \"999edaa2-f097-4789-a458-b309a42124a5\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-gt2wd" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.911888 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-6jgcp"] Jan 31 07:49:37 crc 
kubenswrapper[4826]: I0131 07:49:37.914175 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-6jgcp" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.916352 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-bkvbb" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.932087 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-6jgcp"] Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.934450 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-p2jb4" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.967724 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-7w6rc" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.968096 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27xnk\" (UniqueName: \"kubernetes.io/projected/91440ebf-dd66-4fd4-a4c4-b027138ad77c-kube-api-access-27xnk\") pod \"swift-operator-controller-manager-68fc8c869-qrp6z\" (UID: \"91440ebf-dd66-4fd4-a4c4-b027138ad77c\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-qrp6z" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.968158 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q797f\" (UniqueName: \"kubernetes.io/projected/724a7dc5-6b24-44fa-a35a-4aea83f023c7-kube-api-access-q797f\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d7flmn\" (UID: \"724a7dc5-6b24-44fa-a35a-4aea83f023c7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d7flmn" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.968192 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7xlm\" (UniqueName: \"kubernetes.io/projected/76ca8b22-18bd-4ba3-9512-290a5165c6a7-kube-api-access-b7xlm\") pod \"telemetry-operator-controller-manager-64b5b76f97-mhhr4\" (UID: \"76ca8b22-18bd-4ba3-9512-290a5165c6a7\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-mhhr4" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.968272 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74frl\" (UniqueName: \"kubernetes.io/projected/9e29b881-a227-4aef-888c-6676c6cf16b0-kube-api-access-74frl\") pod \"placement-operator-controller-manager-5b964cf4cd-r6php\" (UID: \"9e29b881-a227-4aef-888c-6676c6cf16b0\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-r6php" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.968314 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt2dz\" (UniqueName: \"kubernetes.io/projected/d14779ba-ccf1-4273-90ae-241c5c59c64f-kube-api-access-vt2dz\") pod \"test-operator-controller-manager-84d775b94d-x84xp\" (UID: \"d14779ba-ccf1-4273-90ae-241c5c59c64f\") " pod="openstack-operators/test-operator-controller-manager-84d775b94d-x84xp" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.968356 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" 
(UniqueName: \"kubernetes.io/secret/724a7dc5-6b24-44fa-a35a-4aea83f023c7-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d7flmn\" (UID: \"724a7dc5-6b24-44fa-a35a-4aea83f023c7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d7flmn" Jan 31 07:49:37 crc kubenswrapper[4826]: E0131 07:49:37.968456 4826 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 07:49:37 crc kubenswrapper[4826]: E0131 07:49:37.968498 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/724a7dc5-6b24-44fa-a35a-4aea83f023c7-cert podName:724a7dc5-6b24-44fa-a35a-4aea83f023c7 nodeName:}" failed. No retries permitted until 2026-01-31 07:49:38.468482807 +0000 UTC m=+810.322369176 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/724a7dc5-6b24-44fa-a35a-4aea83f023c7-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d7flmn" (UID: "724a7dc5-6b24-44fa-a35a-4aea83f023c7") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.982014 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-656b74655f-lrz2z"] Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.982912 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-656b74655f-lrz2z" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.986770 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.987099 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-656b74655f-lrz2z"] Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.987415 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.987511 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-7n449" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.989659 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q797f\" (UniqueName: \"kubernetes.io/projected/724a7dc5-6b24-44fa-a35a-4aea83f023c7-kube-api-access-q797f\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d7flmn\" (UID: \"724a7dc5-6b24-44fa-a35a-4aea83f023c7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d7flmn" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.990484 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74frl\" (UniqueName: \"kubernetes.io/projected/9e29b881-a227-4aef-888c-6676c6cf16b0-kube-api-access-74frl\") pod \"placement-operator-controller-manager-5b964cf4cd-r6php\" (UID: \"9e29b881-a227-4aef-888c-6676c6cf16b0\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-r6php" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.992218 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27xnk\" (UniqueName: 
\"kubernetes.io/projected/91440ebf-dd66-4fd4-a4c4-b027138ad77c-kube-api-access-27xnk\") pod \"swift-operator-controller-manager-68fc8c869-qrp6z\" (UID: \"91440ebf-dd66-4fd4-a4c4-b027138ad77c\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-qrp6z" Jan 31 07:49:37 crc kubenswrapper[4826]: I0131 07:49:37.995541 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-gt2wd" Jan 31 07:49:38 crc kubenswrapper[4826]: I0131 07:49:38.012309 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mwjwt"] Jan 31 07:49:38 crc kubenswrapper[4826]: I0131 07:49:38.013583 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mwjwt" Jan 31 07:49:38 crc kubenswrapper[4826]: I0131 07:49:38.017186 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-rlgb5" Jan 31 07:49:38 crc kubenswrapper[4826]: I0131 07:49:38.018105 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mwjwt"] Jan 31 07:49:38 crc kubenswrapper[4826]: I0131 07:49:38.069331 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8hr2\" (UniqueName: \"kubernetes.io/projected/bab047ef-9486-43b9-adad-edaefe7952b9-kube-api-access-s8hr2\") pod \"openstack-operator-controller-manager-656b74655f-lrz2z\" (UID: \"bab047ef-9486-43b9-adad-edaefe7952b9\") " pod="openstack-operators/openstack-operator-controller-manager-656b74655f-lrz2z" Jan 31 07:49:38 crc kubenswrapper[4826]: I0131 07:49:38.069603 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bab047ef-9486-43b9-adad-edaefe7952b9-webhook-certs\") pod \"openstack-operator-controller-manager-656b74655f-lrz2z\" (UID: \"bab047ef-9486-43b9-adad-edaefe7952b9\") " pod="openstack-operators/openstack-operator-controller-manager-656b74655f-lrz2z" Jan 31 07:49:38 crc kubenswrapper[4826]: I0131 07:49:38.069621 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nsvr\" (UniqueName: \"kubernetes.io/projected/f5191711-6f67-4b90-b21b-ee7e0acbd554-kube-api-access-4nsvr\") pod \"watcher-operator-controller-manager-564965969-6jgcp\" (UID: \"f5191711-6f67-4b90-b21b-ee7e0acbd554\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-6jgcp" Jan 31 07:49:38 crc kubenswrapper[4826]: I0131 07:49:38.069643 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bab047ef-9486-43b9-adad-edaefe7952b9-metrics-certs\") pod \"openstack-operator-controller-manager-656b74655f-lrz2z\" (UID: \"bab047ef-9486-43b9-adad-edaefe7952b9\") " pod="openstack-operators/openstack-operator-controller-manager-656b74655f-lrz2z" Jan 31 07:49:38 crc kubenswrapper[4826]: I0131 07:49:38.069676 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt2dz\" (UniqueName: \"kubernetes.io/projected/d14779ba-ccf1-4273-90ae-241c5c59c64f-kube-api-access-vt2dz\") pod \"test-operator-controller-manager-84d775b94d-x84xp\" (UID: 
\"d14779ba-ccf1-4273-90ae-241c5c59c64f\") " pod="openstack-operators/test-operator-controller-manager-84d775b94d-x84xp" Jan 31 07:49:38 crc kubenswrapper[4826]: I0131 07:49:38.069736 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f7d778f6-12f8-4d10-b106-579471ac576f-cert\") pod \"infra-operator-controller-manager-79955696d6-tdbcd\" (UID: \"f7d778f6-12f8-4d10-b106-579471ac576f\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-tdbcd" Jan 31 07:49:38 crc kubenswrapper[4826]: I0131 07:49:38.069768 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7xlm\" (UniqueName: \"kubernetes.io/projected/76ca8b22-18bd-4ba3-9512-290a5165c6a7-kube-api-access-b7xlm\") pod \"telemetry-operator-controller-manager-64b5b76f97-mhhr4\" (UID: \"76ca8b22-18bd-4ba3-9512-290a5165c6a7\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-mhhr4" Jan 31 07:49:38 crc kubenswrapper[4826]: E0131 07:49:38.070698 4826 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 31 07:49:38 crc kubenswrapper[4826]: E0131 07:49:38.070751 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f7d778f6-12f8-4d10-b106-579471ac576f-cert podName:f7d778f6-12f8-4d10-b106-579471ac576f nodeName:}" failed. No retries permitted until 2026-01-31 07:49:39.070733639 +0000 UTC m=+810.924619998 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f7d778f6-12f8-4d10-b106-579471ac576f-cert") pod "infra-operator-controller-manager-79955696d6-tdbcd" (UID: "f7d778f6-12f8-4d10-b106-579471ac576f") : secret "infra-operator-webhook-server-cert" not found Jan 31 07:49:38 crc kubenswrapper[4826]: I0131 07:49:38.098750 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-r6php" Jan 31 07:49:38 crc kubenswrapper[4826]: I0131 07:49:38.102161 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7xlm\" (UniqueName: \"kubernetes.io/projected/76ca8b22-18bd-4ba3-9512-290a5165c6a7-kube-api-access-b7xlm\") pod \"telemetry-operator-controller-manager-64b5b76f97-mhhr4\" (UID: \"76ca8b22-18bd-4ba3-9512-290a5165c6a7\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-mhhr4" Jan 31 07:49:38 crc kubenswrapper[4826]: I0131 07:49:38.117539 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt2dz\" (UniqueName: \"kubernetes.io/projected/d14779ba-ccf1-4273-90ae-241c5c59c64f-kube-api-access-vt2dz\") pod \"test-operator-controller-manager-84d775b94d-x84xp\" (UID: \"d14779ba-ccf1-4273-90ae-241c5c59c64f\") " pod="openstack-operators/test-operator-controller-manager-84d775b94d-x84xp" Jan 31 07:49:38 crc kubenswrapper[4826]: I0131 07:49:38.154320 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-qrp6z" Jan 31 07:49:38 crc kubenswrapper[4826]: I0131 07:49:38.171283 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd4ks\" (UniqueName: \"kubernetes.io/projected/629d2057-3ccd-4983-882e-dde1edea2075-kube-api-access-wd4ks\") pod \"rabbitmq-cluster-operator-manager-668c99d594-mwjwt\" (UID: \"629d2057-3ccd-4983-882e-dde1edea2075\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mwjwt" Jan 31 07:49:38 crc kubenswrapper[4826]: I0131 07:49:38.171356 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8hr2\" (UniqueName: \"kubernetes.io/projected/bab047ef-9486-43b9-adad-edaefe7952b9-kube-api-access-s8hr2\") pod \"openstack-operator-controller-manager-656b74655f-lrz2z\" (UID: \"bab047ef-9486-43b9-adad-edaefe7952b9\") " pod="openstack-operators/openstack-operator-controller-manager-656b74655f-lrz2z" Jan 31 07:49:38 crc kubenswrapper[4826]: I0131 07:49:38.171489 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bab047ef-9486-43b9-adad-edaefe7952b9-webhook-certs\") pod \"openstack-operator-controller-manager-656b74655f-lrz2z\" (UID: \"bab047ef-9486-43b9-adad-edaefe7952b9\") " pod="openstack-operators/openstack-operator-controller-manager-656b74655f-lrz2z" Jan 31 07:49:38 crc kubenswrapper[4826]: I0131 07:49:38.171544 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nsvr\" (UniqueName: \"kubernetes.io/projected/f5191711-6f67-4b90-b21b-ee7e0acbd554-kube-api-access-4nsvr\") pod \"watcher-operator-controller-manager-564965969-6jgcp\" (UID: \"f5191711-6f67-4b90-b21b-ee7e0acbd554\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-6jgcp" Jan 31 07:49:38 crc kubenswrapper[4826]: I0131 07:49:38.171586 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bab047ef-9486-43b9-adad-edaefe7952b9-metrics-certs\") pod \"openstack-operator-controller-manager-656b74655f-lrz2z\" (UID: \"bab047ef-9486-43b9-adad-edaefe7952b9\") " pod="openstack-operators/openstack-operator-controller-manager-656b74655f-lrz2z" Jan 31 07:49:38 crc kubenswrapper[4826]: E0131 07:49:38.171662 4826 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 31 07:49:38 crc kubenswrapper[4826]: E0131 07:49:38.171734 4826 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 31 07:49:38 crc kubenswrapper[4826]: E0131 07:49:38.171738 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bab047ef-9486-43b9-adad-edaefe7952b9-webhook-certs podName:bab047ef-9486-43b9-adad-edaefe7952b9 nodeName:}" failed. No retries permitted until 2026-01-31 07:49:38.671713324 +0000 UTC m=+810.525599683 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/bab047ef-9486-43b9-adad-edaefe7952b9-webhook-certs") pod "openstack-operator-controller-manager-656b74655f-lrz2z" (UID: "bab047ef-9486-43b9-adad-edaefe7952b9") : secret "webhook-server-cert" not found Jan 31 07:49:38 crc kubenswrapper[4826]: E0131 07:49:38.171769 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bab047ef-9486-43b9-adad-edaefe7952b9-metrics-certs podName:bab047ef-9486-43b9-adad-edaefe7952b9 nodeName:}" failed. No retries permitted until 2026-01-31 07:49:38.671759716 +0000 UTC m=+810.525646075 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bab047ef-9486-43b9-adad-edaefe7952b9-metrics-certs") pod "openstack-operator-controller-manager-656b74655f-lrz2z" (UID: "bab047ef-9486-43b9-adad-edaefe7952b9") : secret "metrics-server-cert" not found Jan 31 07:49:38 crc kubenswrapper[4826]: I0131 07:49:38.187627 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nsvr\" (UniqueName: \"kubernetes.io/projected/f5191711-6f67-4b90-b21b-ee7e0acbd554-kube-api-access-4nsvr\") pod \"watcher-operator-controller-manager-564965969-6jgcp\" (UID: \"f5191711-6f67-4b90-b21b-ee7e0acbd554\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-6jgcp" Jan 31 07:49:38 crc kubenswrapper[4826]: I0131 07:49:38.190027 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8hr2\" (UniqueName: \"kubernetes.io/projected/bab047ef-9486-43b9-adad-edaefe7952b9-kube-api-access-s8hr2\") pod \"openstack-operator-controller-manager-656b74655f-lrz2z\" (UID: \"bab047ef-9486-43b9-adad-edaefe7952b9\") " pod="openstack-operators/openstack-operator-controller-manager-656b74655f-lrz2z" Jan 31 07:49:38 crc kubenswrapper[4826]: I0131 07:49:38.196978 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-mhhr4" Jan 31 07:49:38 crc kubenswrapper[4826]: I0131 07:49:38.215625 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-84d775b94d-x84xp" Jan 31 07:49:38 crc kubenswrapper[4826]: I0131 07:49:38.272710 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wd4ks\" (UniqueName: \"kubernetes.io/projected/629d2057-3ccd-4983-882e-dde1edea2075-kube-api-access-wd4ks\") pod \"rabbitmq-cluster-operator-manager-668c99d594-mwjwt\" (UID: \"629d2057-3ccd-4983-882e-dde1edea2075\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mwjwt" Jan 31 07:49:38 crc kubenswrapper[4826]: I0131 07:49:38.299338 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wd4ks\" (UniqueName: \"kubernetes.io/projected/629d2057-3ccd-4983-882e-dde1edea2075-kube-api-access-wd4ks\") pod \"rabbitmq-cluster-operator-manager-668c99d594-mwjwt\" (UID: \"629d2057-3ccd-4983-882e-dde1edea2075\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mwjwt" Jan 31 07:49:38 crc kubenswrapper[4826]: I0131 07:49:38.319300 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-6jgcp" Jan 31 07:49:38 crc kubenswrapper[4826]: I0131 07:49:38.351092 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mwjwt" Jan 31 07:49:38 crc kubenswrapper[4826]: I0131 07:49:38.462677 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-vrsm8"] Jan 31 07:49:38 crc kubenswrapper[4826]: I0131 07:49:38.475218 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/724a7dc5-6b24-44fa-a35a-4aea83f023c7-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d7flmn\" (UID: \"724a7dc5-6b24-44fa-a35a-4aea83f023c7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d7flmn" Jan 31 07:49:38 crc kubenswrapper[4826]: E0131 07:49:38.475423 4826 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 07:49:38 crc kubenswrapper[4826]: E0131 07:49:38.475484 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/724a7dc5-6b24-44fa-a35a-4aea83f023c7-cert podName:724a7dc5-6b24-44fa-a35a-4aea83f023c7 nodeName:}" failed. No retries permitted until 2026-01-31 07:49:39.475462963 +0000 UTC m=+811.329349322 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/724a7dc5-6b24-44fa-a35a-4aea83f023c7-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d7flmn" (UID: "724a7dc5-6b24-44fa-a35a-4aea83f023c7") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 07:49:38 crc kubenswrapper[4826]: I0131 07:49:38.475819 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-469df"] Jan 31 07:49:38 crc kubenswrapper[4826]: I0131 07:49:38.489162 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-fzkjg"] Jan 31 07:49:38 crc kubenswrapper[4826]: I0131 07:49:38.697310 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bab047ef-9486-43b9-adad-edaefe7952b9-webhook-certs\") pod \"openstack-operator-controller-manager-656b74655f-lrz2z\" (UID: \"bab047ef-9486-43b9-adad-edaefe7952b9\") " pod="openstack-operators/openstack-operator-controller-manager-656b74655f-lrz2z" Jan 31 07:49:38 crc kubenswrapper[4826]: I0131 07:49:38.697801 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bab047ef-9486-43b9-adad-edaefe7952b9-metrics-certs\") pod \"openstack-operator-controller-manager-656b74655f-lrz2z\" (UID: \"bab047ef-9486-43b9-adad-edaefe7952b9\") " pod="openstack-operators/openstack-operator-controller-manager-656b74655f-lrz2z" Jan 31 07:49:38 crc kubenswrapper[4826]: E0131 07:49:38.698043 4826 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 31 07:49:38 crc kubenswrapper[4826]: E0131 07:49:38.698142 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bab047ef-9486-43b9-adad-edaefe7952b9-metrics-certs podName:bab047ef-9486-43b9-adad-edaefe7952b9 nodeName:}" failed. No retries permitted until 2026-01-31 07:49:39.698125383 +0000 UTC m=+811.552011742 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bab047ef-9486-43b9-adad-edaefe7952b9-metrics-certs") pod "openstack-operator-controller-manager-656b74655f-lrz2z" (UID: "bab047ef-9486-43b9-adad-edaefe7952b9") : secret "metrics-server-cert" not found Jan 31 07:49:38 crc kubenswrapper[4826]: E0131 07:49:38.700155 4826 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 31 07:49:38 crc kubenswrapper[4826]: E0131 07:49:38.700251 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bab047ef-9486-43b9-adad-edaefe7952b9-webhook-certs podName:bab047ef-9486-43b9-adad-edaefe7952b9 nodeName:}" failed. No retries permitted until 2026-01-31 07:49:39.700227213 +0000 UTC m=+811.554113652 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/bab047ef-9486-43b9-adad-edaefe7952b9-webhook-certs") pod "openstack-operator-controller-manager-656b74655f-lrz2z" (UID: "bab047ef-9486-43b9-adad-edaefe7952b9") : secret "webhook-server-cert" not found Jan 31 07:49:38 crc kubenswrapper[4826]: I0131 07:49:38.718910 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-c6qzg"] Jan 31 07:49:38 crc kubenswrapper[4826]: I0131 07:49:38.724259 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-bd4z5"] Jan 31 07:49:38 crc kubenswrapper[4826]: W0131 07:49:38.724706 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae965697_1a1d_498a_be01_35faefac5df1.slice/crio-a1a59bad9ca05cf3a056ed4bfd3bcb5eddaac750033f5e417b0303f7dfd407dc WatchSource:0}: Error finding container a1a59bad9ca05cf3a056ed4bfd3bcb5eddaac750033f5e417b0303f7dfd407dc: Status 404 returned error can't find the container with id a1a59bad9ca05cf3a056ed4bfd3bcb5eddaac750033f5e417b0303f7dfd407dc Jan 31 07:49:38 crc kubenswrapper[4826]: W0131 07:49:38.728071 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61b6715e_7da4_4f70_8e51_e4cc36c046f6.slice/crio-cf7d7a11a424215af4389476880f7016ea33e18c9d16fa4375b95571837434e1 WatchSource:0}: Error finding container cf7d7a11a424215af4389476880f7016ea33e18c9d16fa4375b95571837434e1: Status 404 returned error can't find the container with id cf7d7a11a424215af4389476880f7016ea33e18c9d16fa4375b95571837434e1 Jan 31 07:49:38 crc kubenswrapper[4826]: I0131 07:49:38.764217 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-c6qzg" event={"ID":"ae965697-1a1d-498a-be01-35faefac5df1","Type":"ContainerStarted","Data":"a1a59bad9ca05cf3a056ed4bfd3bcb5eddaac750033f5e417b0303f7dfd407dc"} Jan 31 07:49:38 crc kubenswrapper[4826]: I0131 07:49:38.765822 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-469df" event={"ID":"4ad0581d-4c4f-45b8-b274-cba147fb1f0f","Type":"ContainerStarted","Data":"45b8dad08c4a2bc204f28c39dedba053de8e4a285f5d0319c06afe1939164b7f"} Jan 31 07:49:38 crc kubenswrapper[4826]: I0131 07:49:38.767740 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-fzkjg" 
event={"ID":"bbc23de1-ea6a-4ec2-acb0-4ce5b7a6260a","Type":"ContainerStarted","Data":"0caabdbcc9c2aaa96cc6ff2c3d26f76d7abfff87c1da0d31844a7f0a8ddb8608"} Jan 31 07:49:38 crc kubenswrapper[4826]: I0131 07:49:38.769913 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-bd4z5" event={"ID":"61b6715e-7da4-4f70-8e51-e4cc36c046f6","Type":"ContainerStarted","Data":"cf7d7a11a424215af4389476880f7016ea33e18c9d16fa4375b95571837434e1"} Jan 31 07:49:38 crc kubenswrapper[4826]: I0131 07:49:38.771081 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-vrsm8" event={"ID":"2ca6db75-d35b-4d27-afb7-45698c422257","Type":"ContainerStarted","Data":"810c609c326471a3483a8a3799513957429a1fcd7b31918dc657ffab1844d5dc"} Jan 31 07:49:39 crc kubenswrapper[4826]: W0131 07:49:39.094676 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfbd1ba04_6613_4a12_9009_088ebba2e643.slice/crio-d75a04b11e9049a9d07d795f1bdb8e07ac826c61baea14ceedcb9bca4e5f2aa2 WatchSource:0}: Error finding container d75a04b11e9049a9d07d795f1bdb8e07ac826c61baea14ceedcb9bca4e5f2aa2: Status 404 returned error can't find the container with id d75a04b11e9049a9d07d795f1bdb8e07ac826c61baea14ceedcb9bca4e5f2aa2 Jan 31 07:49:39 crc kubenswrapper[4826]: I0131 07:49:39.099222 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-x226q"] Jan 31 07:49:39 crc kubenswrapper[4826]: I0131 07:49:39.102507 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f7d778f6-12f8-4d10-b106-579471ac576f-cert\") pod \"infra-operator-controller-manager-79955696d6-tdbcd\" (UID: \"f7d778f6-12f8-4d10-b106-579471ac576f\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-tdbcd" Jan 31 07:49:39 crc kubenswrapper[4826]: E0131 07:49:39.102649 4826 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 31 07:49:39 crc kubenswrapper[4826]: E0131 07:49:39.102735 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f7d778f6-12f8-4d10-b106-579471ac576f-cert podName:f7d778f6-12f8-4d10-b106-579471ac576f nodeName:}" failed. No retries permitted until 2026-01-31 07:49:41.102715325 +0000 UTC m=+812.956601684 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f7d778f6-12f8-4d10-b106-579471ac576f-cert") pod "infra-operator-controller-manager-79955696d6-tdbcd" (UID: "f7d778f6-12f8-4d10-b106-579471ac576f") : secret "infra-operator-webhook-server-cert" not found Jan 31 07:49:39 crc kubenswrapper[4826]: W0131 07:49:39.109254 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b2f20a7_2570_4a15_b86c_bdfdbd69c529.slice/crio-add13904ef01aa85fcff0c7bb136dca5aad96400fb7e816f1a81499eb23c6eed WatchSource:0}: Error finding container add13904ef01aa85fcff0c7bb136dca5aad96400fb7e816f1a81499eb23c6eed: Status 404 returned error can't find the container with id add13904ef01aa85fcff0c7bb136dca5aad96400fb7e816f1a81499eb23c6eed Jan 31 07:49:39 crc kubenswrapper[4826]: I0131 07:49:39.128123 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-gt2wd"] Jan 31 07:49:39 crc kubenswrapper[4826]: I0131 07:49:39.141380 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-p2jb4"] Jan 31 07:49:39 crc kubenswrapper[4826]: I0131 07:49:39.148286 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-855qv"] Jan 31 07:49:39 crc kubenswrapper[4826]: W0131 07:49:39.149365 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8240a7c4_2e26_46f9_9c1d_0d1d9951c2fb.slice/crio-08fb1c4e0643b225f758f913904040001276b437d4c7b2ef59d1d6c2b7bbe409 WatchSource:0}: Error finding container 08fb1c4e0643b225f758f913904040001276b437d4c7b2ef59d1d6c2b7bbe409: Status 404 returned error can't find the container with id 08fb1c4e0643b225f758f913904040001276b437d4c7b2ef59d1d6c2b7bbe409 Jan 31 07:49:39 crc kubenswrapper[4826]: I0131 07:49:39.153771 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-fbkx2"] Jan 31 07:49:39 crc kubenswrapper[4826]: I0131 07:49:39.161897 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-k9b2p"] Jan 31 07:49:39 crc kubenswrapper[4826]: I0131 07:49:39.168245 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-fn74b"] Jan 31 07:49:39 crc kubenswrapper[4826]: I0131 07:49:39.175653 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-tmrnj"] Jan 31 07:49:39 crc kubenswrapper[4826]: E0131 07:49:39.175795 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: 
{{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7r66s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-84f48565d4-fn74b_openstack-operators(08a289e4-ca1e-4687-834a-941d23f7f292): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 31 07:49:39 crc kubenswrapper[4826]: E0131 07:49:39.177064 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-fn74b" podUID="08a289e4-ca1e-4687-834a-941d23f7f292" Jan 31 07:49:39 crc kubenswrapper[4826]: W0131 07:49:39.183612 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e29b881_a227_4aef_888c_6676c6cf16b0.slice/crio-8bbd3fa3d727587ce64357f96e316ad67744cc8025244a88f67f526c04760dd3 WatchSource:0}: Error finding container 8bbd3fa3d727587ce64357f96e316ad67744cc8025244a88f67f526c04760dd3: Status 404 returned error can't find the container with id 8bbd3fa3d727587ce64357f96e316ad67744cc8025244a88f67f526c04760dd3 Jan 31 07:49:39 crc kubenswrapper[4826]: I0131 07:49:39.184650 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-7w6rc"] Jan 31 07:49:39 crc kubenswrapper[4826]: E0131 07:49:39.188110 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-74frl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b964cf4cd-r6php_openstack-operators(9e29b881-a227-4aef-888c-6676c6cf16b0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 31 07:49:39 crc kubenswrapper[4826]: E0131 07:49:39.188830 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b7xlm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-64b5b76f97-mhhr4_openstack-operators(76ca8b22-18bd-4ba3-9512-290a5165c6a7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 31 07:49:39 crc kubenswrapper[4826]: E0131 07:49:39.189485 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-r6php" podUID="9e29b881-a227-4aef-888c-6676c6cf16b0" Jan 31 07:49:39 crc kubenswrapper[4826]: I0131 07:49:39.190155 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-84d775b94d-x84xp"] Jan 31 07:49:39 crc kubenswrapper[4826]: E0131 07:49:39.190276 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-mhhr4" podUID="76ca8b22-18bd-4ba3-9512-290a5165c6a7" Jan 31 07:49:39 crc kubenswrapper[4826]: E0131 07:49:39.195326 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.129.56.245:5001/openstack-k8s-operators/test-operator:828743807b8b12aa7249432dc02b69062e96f024,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vt2dz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-84d775b94d-x84xp_openstack-operators(d14779ba-ccf1-4273-90ae-241c5c59c64f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 31 07:49:39 crc kubenswrapper[4826]: I0131 07:49:39.195799 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-mhhr4"] Jan 31 07:49:39 crc kubenswrapper[4826]: E0131 07:49:39.197059 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-84d775b94d-x84xp" podUID="d14779ba-ccf1-4273-90ae-241c5c59c64f" Jan 31 07:49:39 crc kubenswrapper[4826]: W0131 07:49:39.198177 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91440ebf_dd66_4fd4_a4c4_b027138ad77c.slice/crio-4b4faecd0a92c8aaf2a3ef59009eb4d8740271696fae966b69ef9edea0ab7c09 WatchSource:0}: Error finding container 4b4faecd0a92c8aaf2a3ef59009eb4d8740271696fae966b69ef9edea0ab7c09: Status 404 returned error can't find the container with id 4b4faecd0a92c8aaf2a3ef59009eb4d8740271696fae966b69ef9edea0ab7c09 Jan 31 07:49:39 crc kubenswrapper[4826]: I0131 07:49:39.199798 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-qrp6z"] Jan 31 07:49:39 crc kubenswrapper[4826]: E0131 07:49:39.201079 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-27xnk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68fc8c869-qrp6z_openstack-operators(91440ebf-dd66-4fd4-a4c4-b027138ad77c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 31 07:49:39 crc kubenswrapper[4826]: E0131 07:49:39.203013 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-qrp6z" podUID="91440ebf-dd66-4fd4-a4c4-b027138ad77c" Jan 31 07:49:39 crc kubenswrapper[4826]: I0131 07:49:39.204055 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-r6php"] Jan 31 07:49:39 crc kubenswrapper[4826]: E0131 07:49:39.207091 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wd4ks,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-mwjwt_openstack-operators(629d2057-3ccd-4983-882e-dde1edea2075): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 31 07:49:39 crc kubenswrapper[4826]: E0131 07:49:39.207120 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4nsvr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-6jgcp_openstack-operators(f5191711-6f67-4b90-b21b-ee7e0acbd554): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 31 07:49:39 crc kubenswrapper[4826]: E0131 07:49:39.208164 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mwjwt" podUID="629d2057-3ccd-4983-882e-dde1edea2075" Jan 31 07:49:39 crc kubenswrapper[4826]: E0131 07:49:39.208198 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-6jgcp" podUID="f5191711-6f67-4b90-b21b-ee7e0acbd554" Jan 31 07:49:39 crc kubenswrapper[4826]: I0131 07:49:39.208263 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-6jgcp"] Jan 31 07:49:39 crc kubenswrapper[4826]: I0131 07:49:39.212405 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mwjwt"] Jan 31 07:49:39 crc kubenswrapper[4826]: I0131 07:49:39.506928 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/724a7dc5-6b24-44fa-a35a-4aea83f023c7-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d7flmn\" (UID: \"724a7dc5-6b24-44fa-a35a-4aea83f023c7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d7flmn" Jan 31 07:49:39 crc kubenswrapper[4826]: E0131 07:49:39.507118 4826 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 07:49:39 crc kubenswrapper[4826]: E0131 07:49:39.507200 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/724a7dc5-6b24-44fa-a35a-4aea83f023c7-cert podName:724a7dc5-6b24-44fa-a35a-4aea83f023c7 nodeName:}" failed. No retries permitted until 2026-01-31 07:49:41.507180302 +0000 UTC m=+813.361066661 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/724a7dc5-6b24-44fa-a35a-4aea83f023c7-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d7flmn" (UID: "724a7dc5-6b24-44fa-a35a-4aea83f023c7") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 07:49:39 crc kubenswrapper[4826]: I0131 07:49:39.708908 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bab047ef-9486-43b9-adad-edaefe7952b9-metrics-certs\") pod \"openstack-operator-controller-manager-656b74655f-lrz2z\" (UID: \"bab047ef-9486-43b9-adad-edaefe7952b9\") " pod="openstack-operators/openstack-operator-controller-manager-656b74655f-lrz2z" Jan 31 07:49:39 crc kubenswrapper[4826]: I0131 07:49:39.709064 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bab047ef-9486-43b9-adad-edaefe7952b9-webhook-certs\") pod \"openstack-operator-controller-manager-656b74655f-lrz2z\" (UID: \"bab047ef-9486-43b9-adad-edaefe7952b9\") " pod="openstack-operators/openstack-operator-controller-manager-656b74655f-lrz2z" Jan 31 07:49:39 crc kubenswrapper[4826]: E0131 07:49:39.709165 4826 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 31 07:49:39 crc kubenswrapper[4826]: E0131 07:49:39.709251 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bab047ef-9486-43b9-adad-edaefe7952b9-metrics-certs podName:bab047ef-9486-43b9-adad-edaefe7952b9 nodeName:}" failed. No retries permitted until 2026-01-31 07:49:41.709232976 +0000 UTC m=+813.563119335 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bab047ef-9486-43b9-adad-edaefe7952b9-metrics-certs") pod "openstack-operator-controller-manager-656b74655f-lrz2z" (UID: "bab047ef-9486-43b9-adad-edaefe7952b9") : secret "metrics-server-cert" not found Jan 31 07:49:39 crc kubenswrapper[4826]: E0131 07:49:39.709181 4826 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 31 07:49:39 crc kubenswrapper[4826]: E0131 07:49:39.709450 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bab047ef-9486-43b9-adad-edaefe7952b9-webhook-certs podName:bab047ef-9486-43b9-adad-edaefe7952b9 nodeName:}" failed. No retries permitted until 2026-01-31 07:49:41.709431462 +0000 UTC m=+813.563317821 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/bab047ef-9486-43b9-adad-edaefe7952b9-webhook-certs") pod "openstack-operator-controller-manager-656b74655f-lrz2z" (UID: "bab047ef-9486-43b9-adad-edaefe7952b9") : secret "webhook-server-cert" not found Jan 31 07:49:39 crc kubenswrapper[4826]: I0131 07:49:39.780618 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-6jgcp" event={"ID":"f5191711-6f67-4b90-b21b-ee7e0acbd554","Type":"ContainerStarted","Data":"6a2d2186ede148c4e4471a72d8bc71de67ab38d67cc4ae0a43c4f754c911eeea"} Jan 31 07:49:39 crc kubenswrapper[4826]: I0131 07:49:39.782432 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-tmrnj" event={"ID":"34af55a7-61a5-41e7-a2da-7c631d075cb0","Type":"ContainerStarted","Data":"55eaae49f0e98a124d552895b08221db4d82b5911a6c7708f3cae937831ca9f6"} Jan 31 07:49:39 crc kubenswrapper[4826]: E0131 07:49:39.784486 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-6jgcp" podUID="f5191711-6f67-4b90-b21b-ee7e0acbd554" Jan 31 07:49:39 crc kubenswrapper[4826]: I0131 07:49:39.786729 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-gt2wd" event={"ID":"999edaa2-f097-4789-a458-b309a42124a5","Type":"ContainerStarted","Data":"982b85eaa805e6923b1287ab40844f80137daeb3da650ef9254ee7e59da8a18f"} Jan 31 07:49:39 crc kubenswrapper[4826]: I0131 07:49:39.788350 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-r6php" event={"ID":"9e29b881-a227-4aef-888c-6676c6cf16b0","Type":"ContainerStarted","Data":"8bbd3fa3d727587ce64357f96e316ad67744cc8025244a88f67f526c04760dd3"} Jan 31 07:49:39 crc kubenswrapper[4826]: I0131 07:49:39.789309 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-7w6rc" event={"ID":"8240a7c4-2e26-46f9-9c1d-0d1d9951c2fb","Type":"ContainerStarted","Data":"08fb1c4e0643b225f758f913904040001276b437d4c7b2ef59d1d6c2b7bbe409"} Jan 31 07:49:39 crc kubenswrapper[4826]: I0131 07:49:39.790679 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-x226q" event={"ID":"fbd1ba04-6613-4a12-9009-088ebba2e643","Type":"ContainerStarted","Data":"d75a04b11e9049a9d07d795f1bdb8e07ac826c61baea14ceedcb9bca4e5f2aa2"} Jan 31 07:49:39 crc kubenswrapper[4826]: E0131 07:49:39.791744 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-r6php" podUID="9e29b881-a227-4aef-888c-6676c6cf16b0" Jan 31 07:49:39 crc kubenswrapper[4826]: I0131 07:49:39.792014 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-k9b2p" 
event={"ID":"516edc1f-8934-408f-a3f2-15e35f0de6bc","Type":"ContainerStarted","Data":"bd63c0c8fdaecc10570c50376ab0ee7729c4adc35fcb217ece4c8a9850b4038e"} Jan 31 07:49:39 crc kubenswrapper[4826]: I0131 07:49:39.796193 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-qrp6z" event={"ID":"91440ebf-dd66-4fd4-a4c4-b027138ad77c","Type":"ContainerStarted","Data":"4b4faecd0a92c8aaf2a3ef59009eb4d8740271696fae966b69ef9edea0ab7c09"} Jan 31 07:49:39 crc kubenswrapper[4826]: E0131 07:49:39.799667 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-qrp6z" podUID="91440ebf-dd66-4fd4-a4c4-b027138ad77c" Jan 31 07:49:39 crc kubenswrapper[4826]: I0131 07:49:39.800529 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mwjwt" event={"ID":"629d2057-3ccd-4983-882e-dde1edea2075","Type":"ContainerStarted","Data":"f7613f611453a199f8af1908a7f7a36fbc5442340e50a07fe00731a8e9b7251d"} Jan 31 07:49:39 crc kubenswrapper[4826]: E0131 07:49:39.801946 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mwjwt" podUID="629d2057-3ccd-4983-882e-dde1edea2075" Jan 31 07:49:39 crc kubenswrapper[4826]: I0131 07:49:39.804821 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-fbkx2" event={"ID":"19e5f098-b188-4252-8e1f-8db1f38dbb75","Type":"ContainerStarted","Data":"6bb2c108317ca1f9567481c8aef0a378bcf0978c3ce7ab32ab27201f750e4fc7"} Jan 31 07:49:39 crc kubenswrapper[4826]: I0131 07:49:39.807579 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-84d775b94d-x84xp" event={"ID":"d14779ba-ccf1-4273-90ae-241c5c59c64f","Type":"ContainerStarted","Data":"2f4929fdb0130b8dfb02c1a8a58d412c938a5e6954486376903320d0e67cc274"} Jan 31 07:49:39 crc kubenswrapper[4826]: E0131 07:49:39.809038 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.129.56.245:5001/openstack-k8s-operators/test-operator:828743807b8b12aa7249432dc02b69062e96f024\\\"\"" pod="openstack-operators/test-operator-controller-manager-84d775b94d-x84xp" podUID="d14779ba-ccf1-4273-90ae-241c5c59c64f" Jan 31 07:49:39 crc kubenswrapper[4826]: I0131 07:49:39.810029 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-fn74b" event={"ID":"08a289e4-ca1e-4687-834a-941d23f7f292","Type":"ContainerStarted","Data":"60b608a9abc92992496e8578522fd0d03c89b49292e37090e6eec4e3df59bf21"} Jan 31 07:49:39 crc kubenswrapper[4826]: E0131 07:49:39.813548 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-fn74b" podUID="08a289e4-ca1e-4687-834a-941d23f7f292" Jan 31 07:49:39 crc kubenswrapper[4826]: I0131 07:49:39.814141 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-p2jb4" event={"ID":"8b2f20a7-2570-4a15-b86c-bdfdbd69c529","Type":"ContainerStarted","Data":"add13904ef01aa85fcff0c7bb136dca5aad96400fb7e816f1a81499eb23c6eed"} Jan 31 07:49:39 crc kubenswrapper[4826]: I0131 07:49:39.815683 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-855qv" event={"ID":"bbea7ad0-1edf-41aa-b677-bcaf9ccf72a3","Type":"ContainerStarted","Data":"d330437a9c2a6d52de6f2efcdb7b9a7df8cfb2d71a1cd6b518056c2318433b92"} Jan 31 07:49:39 crc kubenswrapper[4826]: I0131 07:49:39.816939 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-mhhr4" event={"ID":"76ca8b22-18bd-4ba3-9512-290a5165c6a7","Type":"ContainerStarted","Data":"ed00ba53713d5811091dfb714f055732429a3e701ba09cdb2ba2fc121c9e3c2d"} Jan 31 07:49:39 crc kubenswrapper[4826]: E0131 07:49:39.818242 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-mhhr4" podUID="76ca8b22-18bd-4ba3-9512-290a5165c6a7" Jan 31 07:49:40 crc kubenswrapper[4826]: E0131 07:49:40.830940 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-6jgcp" podUID="f5191711-6f67-4b90-b21b-ee7e0acbd554" Jan 31 07:49:40 crc kubenswrapper[4826]: E0131 07:49:40.830990 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.129.56.245:5001/openstack-k8s-operators/test-operator:828743807b8b12aa7249432dc02b69062e96f024\\\"\"" pod="openstack-operators/test-operator-controller-manager-84d775b94d-x84xp" podUID="d14779ba-ccf1-4273-90ae-241c5c59c64f" Jan 31 07:49:40 crc kubenswrapper[4826]: E0131 07:49:40.831102 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-fn74b" podUID="08a289e4-ca1e-4687-834a-941d23f7f292" Jan 31 07:49:40 crc kubenswrapper[4826]: E0131 07:49:40.831164 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" 
pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-r6php" podUID="9e29b881-a227-4aef-888c-6676c6cf16b0" Jan 31 07:49:40 crc kubenswrapper[4826]: E0131 07:49:40.831170 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-qrp6z" podUID="91440ebf-dd66-4fd4-a4c4-b027138ad77c" Jan 31 07:49:40 crc kubenswrapper[4826]: E0131 07:49:40.831196 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mwjwt" podUID="629d2057-3ccd-4983-882e-dde1edea2075" Jan 31 07:49:40 crc kubenswrapper[4826]: E0131 07:49:40.831218 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-mhhr4" podUID="76ca8b22-18bd-4ba3-9512-290a5165c6a7" Jan 31 07:49:41 crc kubenswrapper[4826]: I0131 07:49:41.134222 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f7d778f6-12f8-4d10-b106-579471ac576f-cert\") pod \"infra-operator-controller-manager-79955696d6-tdbcd\" (UID: \"f7d778f6-12f8-4d10-b106-579471ac576f\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-tdbcd" Jan 31 07:49:41 crc kubenswrapper[4826]: E0131 07:49:41.134393 4826 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 31 07:49:41 crc kubenswrapper[4826]: E0131 07:49:41.134465 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f7d778f6-12f8-4d10-b106-579471ac576f-cert podName:f7d778f6-12f8-4d10-b106-579471ac576f nodeName:}" failed. No retries permitted until 2026-01-31 07:49:45.13444758 +0000 UTC m=+816.988333929 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f7d778f6-12f8-4d10-b106-579471ac576f-cert") pod "infra-operator-controller-manager-79955696d6-tdbcd" (UID: "f7d778f6-12f8-4d10-b106-579471ac576f") : secret "infra-operator-webhook-server-cert" not found Jan 31 07:49:41 crc kubenswrapper[4826]: I0131 07:49:41.541637 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/724a7dc5-6b24-44fa-a35a-4aea83f023c7-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d7flmn\" (UID: \"724a7dc5-6b24-44fa-a35a-4aea83f023c7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d7flmn" Jan 31 07:49:41 crc kubenswrapper[4826]: E0131 07:49:41.541842 4826 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 07:49:41 crc kubenswrapper[4826]: E0131 07:49:41.541924 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/724a7dc5-6b24-44fa-a35a-4aea83f023c7-cert podName:724a7dc5-6b24-44fa-a35a-4aea83f023c7 nodeName:}" failed. No retries permitted until 2026-01-31 07:49:45.541905693 +0000 UTC m=+817.395792052 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/724a7dc5-6b24-44fa-a35a-4aea83f023c7-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d7flmn" (UID: "724a7dc5-6b24-44fa-a35a-4aea83f023c7") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 07:49:41 crc kubenswrapper[4826]: I0131 07:49:41.744312 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bab047ef-9486-43b9-adad-edaefe7952b9-webhook-certs\") pod \"openstack-operator-controller-manager-656b74655f-lrz2z\" (UID: \"bab047ef-9486-43b9-adad-edaefe7952b9\") " pod="openstack-operators/openstack-operator-controller-manager-656b74655f-lrz2z" Jan 31 07:49:41 crc kubenswrapper[4826]: I0131 07:49:41.744421 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bab047ef-9486-43b9-adad-edaefe7952b9-metrics-certs\") pod \"openstack-operator-controller-manager-656b74655f-lrz2z\" (UID: \"bab047ef-9486-43b9-adad-edaefe7952b9\") " pod="openstack-operators/openstack-operator-controller-manager-656b74655f-lrz2z" Jan 31 07:49:41 crc kubenswrapper[4826]: E0131 07:49:41.744523 4826 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 31 07:49:41 crc kubenswrapper[4826]: E0131 07:49:41.744601 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bab047ef-9486-43b9-adad-edaefe7952b9-webhook-certs podName:bab047ef-9486-43b9-adad-edaefe7952b9 nodeName:}" failed. No retries permitted until 2026-01-31 07:49:45.744582675 +0000 UTC m=+817.598469034 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/bab047ef-9486-43b9-adad-edaefe7952b9-webhook-certs") pod "openstack-operator-controller-manager-656b74655f-lrz2z" (UID: "bab047ef-9486-43b9-adad-edaefe7952b9") : secret "webhook-server-cert" not found Jan 31 07:49:41 crc kubenswrapper[4826]: E0131 07:49:41.744732 4826 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 31 07:49:41 crc kubenswrapper[4826]: E0131 07:49:41.744837 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bab047ef-9486-43b9-adad-edaefe7952b9-metrics-certs podName:bab047ef-9486-43b9-adad-edaefe7952b9 nodeName:}" failed. No retries permitted until 2026-01-31 07:49:45.744814101 +0000 UTC m=+817.598700480 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bab047ef-9486-43b9-adad-edaefe7952b9-metrics-certs") pod "openstack-operator-controller-manager-656b74655f-lrz2z" (UID: "bab047ef-9486-43b9-adad-edaefe7952b9") : secret "metrics-server-cert" not found Jan 31 07:49:45 crc kubenswrapper[4826]: I0131 07:49:45.194213 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f7d778f6-12f8-4d10-b106-579471ac576f-cert\") pod \"infra-operator-controller-manager-79955696d6-tdbcd\" (UID: \"f7d778f6-12f8-4d10-b106-579471ac576f\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-tdbcd" Jan 31 07:49:45 crc kubenswrapper[4826]: E0131 07:49:45.194447 4826 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 31 07:49:45 crc kubenswrapper[4826]: E0131 07:49:45.194880 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f7d778f6-12f8-4d10-b106-579471ac576f-cert podName:f7d778f6-12f8-4d10-b106-579471ac576f nodeName:}" failed. No retries permitted until 2026-01-31 07:49:53.194863124 +0000 UTC m=+825.048749483 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f7d778f6-12f8-4d10-b106-579471ac576f-cert") pod "infra-operator-controller-manager-79955696d6-tdbcd" (UID: "f7d778f6-12f8-4d10-b106-579471ac576f") : secret "infra-operator-webhook-server-cert" not found Jan 31 07:49:45 crc kubenswrapper[4826]: I0131 07:49:45.599402 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/724a7dc5-6b24-44fa-a35a-4aea83f023c7-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d7flmn\" (UID: \"724a7dc5-6b24-44fa-a35a-4aea83f023c7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d7flmn" Jan 31 07:49:45 crc kubenswrapper[4826]: E0131 07:49:45.599578 4826 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 07:49:45 crc kubenswrapper[4826]: E0131 07:49:45.599630 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/724a7dc5-6b24-44fa-a35a-4aea83f023c7-cert podName:724a7dc5-6b24-44fa-a35a-4aea83f023c7 nodeName:}" failed. No retries permitted until 2026-01-31 07:49:53.599615229 +0000 UTC m=+825.453501588 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/724a7dc5-6b24-44fa-a35a-4aea83f023c7-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d7flmn" (UID: "724a7dc5-6b24-44fa-a35a-4aea83f023c7") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 07:49:45 crc kubenswrapper[4826]: I0131 07:49:45.801330 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bab047ef-9486-43b9-adad-edaefe7952b9-webhook-certs\") pod \"openstack-operator-controller-manager-656b74655f-lrz2z\" (UID: \"bab047ef-9486-43b9-adad-edaefe7952b9\") " pod="openstack-operators/openstack-operator-controller-manager-656b74655f-lrz2z" Jan 31 07:49:45 crc kubenswrapper[4826]: I0131 07:49:45.801389 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bab047ef-9486-43b9-adad-edaefe7952b9-metrics-certs\") pod \"openstack-operator-controller-manager-656b74655f-lrz2z\" (UID: \"bab047ef-9486-43b9-adad-edaefe7952b9\") " pod="openstack-operators/openstack-operator-controller-manager-656b74655f-lrz2z" Jan 31 07:49:45 crc kubenswrapper[4826]: E0131 07:49:45.801710 4826 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 31 07:49:45 crc kubenswrapper[4826]: E0131 07:49:45.801748 4826 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 31 07:49:45 crc kubenswrapper[4826]: E0131 07:49:45.801792 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bab047ef-9486-43b9-adad-edaefe7952b9-webhook-certs podName:bab047ef-9486-43b9-adad-edaefe7952b9 nodeName:}" failed. No retries permitted until 2026-01-31 07:49:53.801770616 +0000 UTC m=+825.655656985 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/bab047ef-9486-43b9-adad-edaefe7952b9-webhook-certs") pod "openstack-operator-controller-manager-656b74655f-lrz2z" (UID: "bab047ef-9486-43b9-adad-edaefe7952b9") : secret "webhook-server-cert" not found Jan 31 07:49:45 crc kubenswrapper[4826]: E0131 07:49:45.801829 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bab047ef-9486-43b9-adad-edaefe7952b9-metrics-certs podName:bab047ef-9486-43b9-adad-edaefe7952b9 nodeName:}" failed. No retries permitted until 2026-01-31 07:49:53.801811597 +0000 UTC m=+825.655697956 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bab047ef-9486-43b9-adad-edaefe7952b9-metrics-certs") pod "openstack-operator-controller-manager-656b74655f-lrz2z" (UID: "bab047ef-9486-43b9-adad-edaefe7952b9") : secret "metrics-server-cert" not found Jan 31 07:49:53 crc kubenswrapper[4826]: I0131 07:49:53.254886 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f7d778f6-12f8-4d10-b106-579471ac576f-cert\") pod \"infra-operator-controller-manager-79955696d6-tdbcd\" (UID: \"f7d778f6-12f8-4d10-b106-579471ac576f\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-tdbcd" Jan 31 07:49:53 crc kubenswrapper[4826]: I0131 07:49:53.273634 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f7d778f6-12f8-4d10-b106-579471ac576f-cert\") pod \"infra-operator-controller-manager-79955696d6-tdbcd\" (UID: \"f7d778f6-12f8-4d10-b106-579471ac576f\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-tdbcd" Jan 31 07:49:53 crc kubenswrapper[4826]: E0131 07:49:53.433882 4826 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566" Jan 31 07:49:53 crc kubenswrapper[4826]: E0131 07:49:53.434421 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zj89m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-7dd968899f-k9b2p_openstack-operators(516edc1f-8934-408f-a3f2-15e35f0de6bc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 07:49:53 crc kubenswrapper[4826]: E0131 07:49:53.436013 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-k9b2p" podUID="516edc1f-8934-408f-a3f2-15e35f0de6bc" Jan 31 07:49:53 crc kubenswrapper[4826]: I0131 07:49:53.501807 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-tdbcd" Jan 31 07:49:53 crc kubenswrapper[4826]: I0131 07:49:53.662996 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/724a7dc5-6b24-44fa-a35a-4aea83f023c7-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d7flmn\" (UID: \"724a7dc5-6b24-44fa-a35a-4aea83f023c7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d7flmn" Jan 31 07:49:53 crc kubenswrapper[4826]: E0131 07:49:53.663181 4826 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 07:49:53 crc kubenswrapper[4826]: E0131 07:49:53.663262 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/724a7dc5-6b24-44fa-a35a-4aea83f023c7-cert podName:724a7dc5-6b24-44fa-a35a-4aea83f023c7 nodeName:}" failed. No retries permitted until 2026-01-31 07:50:09.663242139 +0000 UTC m=+841.517128498 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/724a7dc5-6b24-44fa-a35a-4aea83f023c7-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d7flmn" (UID: "724a7dc5-6b24-44fa-a35a-4aea83f023c7") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 07:49:53 crc kubenswrapper[4826]: I0131 07:49:53.865753 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bab047ef-9486-43b9-adad-edaefe7952b9-webhook-certs\") pod \"openstack-operator-controller-manager-656b74655f-lrz2z\" (UID: \"bab047ef-9486-43b9-adad-edaefe7952b9\") " pod="openstack-operators/openstack-operator-controller-manager-656b74655f-lrz2z" Jan 31 07:49:53 crc kubenswrapper[4826]: I0131 07:49:53.865807 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bab047ef-9486-43b9-adad-edaefe7952b9-metrics-certs\") pod \"openstack-operator-controller-manager-656b74655f-lrz2z\" (UID: \"bab047ef-9486-43b9-adad-edaefe7952b9\") " pod="openstack-operators/openstack-operator-controller-manager-656b74655f-lrz2z" Jan 31 07:49:53 crc kubenswrapper[4826]: E0131 07:49:53.866065 4826 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 31 07:49:53 crc kubenswrapper[4826]: E0131 07:49:53.866235 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bab047ef-9486-43b9-adad-edaefe7952b9-webhook-certs podName:bab047ef-9486-43b9-adad-edaefe7952b9 nodeName:}" failed. No retries permitted until 2026-01-31 07:50:09.866204889 +0000 UTC m=+841.720091258 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/bab047ef-9486-43b9-adad-edaefe7952b9-webhook-certs") pod "openstack-operator-controller-manager-656b74655f-lrz2z" (UID: "bab047ef-9486-43b9-adad-edaefe7952b9") : secret "webhook-server-cert" not found Jan 31 07:49:53 crc kubenswrapper[4826]: I0131 07:49:53.876390 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bab047ef-9486-43b9-adad-edaefe7952b9-metrics-certs\") pod \"openstack-operator-controller-manager-656b74655f-lrz2z\" (UID: \"bab047ef-9486-43b9-adad-edaefe7952b9\") " pod="openstack-operators/openstack-operator-controller-manager-656b74655f-lrz2z" Jan 31 07:49:53 crc kubenswrapper[4826]: E0131 07:49:53.920567 4826 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382" Jan 31 07:49:53 crc kubenswrapper[4826]: E0131 07:49:53.920737 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ns97q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-6d9697b7f4-vrsm8_openstack-operators(2ca6db75-d35b-4d27-afb7-45698c422257): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 07:49:53 crc kubenswrapper[4826]: E0131 07:49:53.922222 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-vrsm8" podUID="2ca6db75-d35b-4d27-afb7-45698c422257" Jan 31 07:49:53 crc kubenswrapper[4826]: E0131 07:49:53.928154 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382\\\"\"" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-vrsm8" podUID="2ca6db75-d35b-4d27-afb7-45698c422257" Jan 31 07:49:53 crc kubenswrapper[4826]: E0131 07:49:53.928705 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566\\\"\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-k9b2p" 
podUID="516edc1f-8934-408f-a3f2-15e35f0de6bc" Jan 31 07:49:54 crc kubenswrapper[4826]: E0131 07:49:54.516714 4826 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521" Jan 31 07:49:54 crc kubenswrapper[4826]: E0131 07:49:54.516868 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j2psc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-5f4b8bd54d-fbkx2_openstack-operators(19e5f098-b188-4252-8e1f-8db1f38dbb75): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 07:49:54 crc kubenswrapper[4826]: E0131 07:49:54.518941 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-fbkx2" podUID="19e5f098-b188-4252-8e1f-8db1f38dbb75" Jan 31 07:49:54 crc kubenswrapper[4826]: E0131 07:49:54.932879 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-fbkx2" podUID="19e5f098-b188-4252-8e1f-8db1f38dbb75" Jan 31 07:49:55 crc kubenswrapper[4826]: E0131 07:49:55.058947 4826 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4" Jan 31 07:49:55 crc kubenswrapper[4826]: E0131 07:49:55.059788 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2p9sz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-788c46999f-gt2wd_openstack-operators(999edaa2-f097-4789-a458-b309a42124a5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 07:49:55 crc kubenswrapper[4826]: E0131 07:49:55.061592 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-gt2wd" podUID="999edaa2-f097-4789-a458-b309a42124a5" Jan 31 
07:49:55 crc kubenswrapper[4826]: E0131 07:49:55.503325 4826 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be" Jan 31 07:49:55 crc kubenswrapper[4826]: E0131 07:49:55.503507 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5r54z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-6687f8d877-7w6rc_openstack-operators(8240a7c4-2e26-46f9-9c1d-0d1d9951c2fb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 07:49:55 crc kubenswrapper[4826]: E0131 07:49:55.504653 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-7w6rc" podUID="8240a7c4-2e26-46f9-9c1d-0d1d9951c2fb" Jan 31 07:49:55 crc kubenswrapper[4826]: E0131 07:49:55.938997 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-gt2wd" podUID="999edaa2-f097-4789-a458-b309a42124a5" Jan 31 07:49:55 crc kubenswrapper[4826]: E0131 07:49:55.939417 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-7w6rc" podUID="8240a7c4-2e26-46f9-9c1d-0d1d9951c2fb" Jan 31 07:49:56 crc kubenswrapper[4826]: E0131 07:49:56.038065 4826 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6" Jan 31 07:49:56 crc kubenswrapper[4826]: E0131 07:49:56.038264 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qs29q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-585dbc889-x226q_openstack-operators(fbd1ba04-6613-4a12-9009-088ebba2e643): ErrImagePull: rpc 
error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 07:49:56 crc kubenswrapper[4826]: E0131 07:49:56.040156 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-x226q" podUID="fbd1ba04-6613-4a12-9009-088ebba2e643" Jan 31 07:49:56 crc kubenswrapper[4826]: E0131 07:49:56.697152 4826 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e" Jan 31 07:49:56 crc kubenswrapper[4826]: E0131 07:49:56.697623 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6k5jn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-55bff696bd-p2jb4_openstack-operators(8b2f20a7-2570-4a15-b86c-bdfdbd69c529): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 07:49:56 crc kubenswrapper[4826]: E0131 07:49:56.698810 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-p2jb4" podUID="8b2f20a7-2570-4a15-b86c-bdfdbd69c529" Jan 31 07:49:56 crc kubenswrapper[4826]: E0131 07:49:56.944759 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-x226q" podUID="fbd1ba04-6613-4a12-9009-088ebba2e643" Jan 31 07:49:56 crc kubenswrapper[4826]: E0131 07:49:56.944986 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e\\\"\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-p2jb4" podUID="8b2f20a7-2570-4a15-b86c-bdfdbd69c529" Jan 31 07:49:59 crc kubenswrapper[4826]: I0131 07:49:59.060330 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-tdbcd"] Jan 31 07:50:02 crc kubenswrapper[4826]: I0131 07:50:02.993099 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-tdbcd" event={"ID":"f7d778f6-12f8-4d10-b106-579471ac576f","Type":"ContainerStarted","Data":"393f9ba40abb78bc4e399e1b955dc9efb24d819162fe6cce578a9a6949244e01"} Jan 31 07:50:04 crc kubenswrapper[4826]: I0131 07:50:04.000669 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-c6qzg" event={"ID":"ae965697-1a1d-498a-be01-35faefac5df1","Type":"ContainerStarted","Data":"d3b5dba8ec5b42ed5e0caefae777eb0b1ca538cd622e9877ca5f27821da05a07"} Jan 31 07:50:04 crc kubenswrapper[4826]: I0131 07:50:04.001278 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-c6qzg" Jan 31 07:50:04 crc kubenswrapper[4826]: I0131 07:50:04.005515 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-84d775b94d-x84xp" event={"ID":"d14779ba-ccf1-4273-90ae-241c5c59c64f","Type":"ContainerStarted","Data":"0275ef058dbbb26c1739cc271b1eef6430286522ef1ec248f1837de85291de25"} Jan 31 07:50:04 crc kubenswrapper[4826]: I0131 07:50:04.007386 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-qrp6z" event={"ID":"91440ebf-dd66-4fd4-a4c4-b027138ad77c","Type":"ContainerStarted","Data":"22dd902883e5b73c7030ef4d1c83ae721ae6d101c234737d9c66babad1222a86"} Jan 31 07:50:04 crc kubenswrapper[4826]: I0131 07:50:04.008884 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-fn74b" event={"ID":"08a289e4-ca1e-4687-834a-941d23f7f292","Type":"ContainerStarted","Data":"649d4417ae7c74e57cfa69d91236600cc9a93e4dde21e7fb3c446fbe2c56fbf3"} Jan 31 07:50:04 crc kubenswrapper[4826]: I0131 07:50:04.010006 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-mhhr4" 
event={"ID":"76ca8b22-18bd-4ba3-9512-290a5165c6a7","Type":"ContainerStarted","Data":"251f14647e38a370fe38eb86b8bbb31c5869fd239edeb03d94f61ce8ca764632"} Jan 31 07:50:04 crc kubenswrapper[4826]: I0131 07:50:04.010651 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-mhhr4" Jan 31 07:50:04 crc kubenswrapper[4826]: I0131 07:50:04.014487 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-tmrnj" event={"ID":"34af55a7-61a5-41e7-a2da-7c631d075cb0","Type":"ContainerStarted","Data":"65d73c3c6c0f39b74e094d2877c17ba8590b933044bf5af5e7635085c1f3bfa9"} Jan 31 07:50:04 crc kubenswrapper[4826]: I0131 07:50:04.015076 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-tmrnj" Jan 31 07:50:04 crc kubenswrapper[4826]: I0131 07:50:04.016616 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-bd4z5" event={"ID":"61b6715e-7da4-4f70-8e51-e4cc36c046f6","Type":"ContainerStarted","Data":"e5bcfaee53e581253721b0f648c49230c0a2437ebb407f3ac2631e476842e8a5"} Jan 31 07:50:04 crc kubenswrapper[4826]: I0131 07:50:04.017943 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mwjwt" event={"ID":"629d2057-3ccd-4983-882e-dde1edea2075","Type":"ContainerStarted","Data":"2c1ecc57f2e7df250b89af8a85258e2a498ad56565d7cf54cf40662d3c9e6ee8"} Jan 31 07:50:04 crc kubenswrapper[4826]: I0131 07:50:04.042332 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-c6qzg" podStartSLOduration=10.280749781 podStartE2EDuration="27.042313592s" podCreationTimestamp="2026-01-31 07:49:37 +0000 UTC" firstStartedPulling="2026-01-31 07:49:38.726645476 +0000 UTC m=+810.580531835" lastFinishedPulling="2026-01-31 07:49:55.488209287 +0000 UTC m=+827.342095646" observedRunningTime="2026-01-31 07:50:04.024554146 +0000 UTC m=+835.878440505" watchObservedRunningTime="2026-01-31 07:50:04.042313592 +0000 UTC m=+835.896199941" Jan 31 07:50:04 crc kubenswrapper[4826]: I0131 07:50:04.042608 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-tmrnj" podStartSLOduration=7.5453734390000005 podStartE2EDuration="27.04260308s" podCreationTimestamp="2026-01-31 07:49:37 +0000 UTC" firstStartedPulling="2026-01-31 07:49:39.134717356 +0000 UTC m=+810.988603715" lastFinishedPulling="2026-01-31 07:49:58.631946997 +0000 UTC m=+830.485833356" observedRunningTime="2026-01-31 07:50:04.038162014 +0000 UTC m=+835.892048393" watchObservedRunningTime="2026-01-31 07:50:04.04260308 +0000 UTC m=+835.896489429" Jan 31 07:50:04 crc kubenswrapper[4826]: I0131 07:50:04.061760 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-mhhr4" podStartSLOduration=2.957873785 podStartE2EDuration="27.061743665s" podCreationTimestamp="2026-01-31 07:49:37 +0000 UTC" firstStartedPulling="2026-01-31 07:49:39.188765245 +0000 UTC m=+811.042651604" lastFinishedPulling="2026-01-31 07:50:03.292635125 +0000 UTC m=+835.146521484" observedRunningTime="2026-01-31 07:50:04.06050935 +0000 UTC m=+835.914395709" watchObservedRunningTime="2026-01-31 
07:50:04.061743665 +0000 UTC m=+835.915630024" Jan 31 07:50:04 crc kubenswrapper[4826]: I0131 07:50:04.099746 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mwjwt" podStartSLOduration=3.052619253 podStartE2EDuration="27.099729397s" podCreationTimestamp="2026-01-31 07:49:37 +0000 UTC" firstStartedPulling="2026-01-31 07:49:39.206956093 +0000 UTC m=+811.060842452" lastFinishedPulling="2026-01-31 07:50:03.254066207 +0000 UTC m=+835.107952596" observedRunningTime="2026-01-31 07:50:04.097982977 +0000 UTC m=+835.951869336" watchObservedRunningTime="2026-01-31 07:50:04.099729397 +0000 UTC m=+835.953615756" Jan 31 07:50:05 crc kubenswrapper[4826]: I0131 07:50:05.024995 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-855qv" event={"ID":"bbea7ad0-1edf-41aa-b677-bcaf9ccf72a3","Type":"ContainerStarted","Data":"9b1829de8eced2f6030527e56da321c9a0513db04d81f4868e636023d4b9f8a3"} Jan 31 07:50:05 crc kubenswrapper[4826]: I0131 07:50:05.025313 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-855qv" Jan 31 07:50:05 crc kubenswrapper[4826]: I0131 07:50:05.029735 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-6jgcp" event={"ID":"f5191711-6f67-4b90-b21b-ee7e0acbd554","Type":"ContainerStarted","Data":"81c6437ab7ea0f4b30a1e75b6c3fdacc605d03ba6f5625eeefaa8e4c1eae7e10"} Jan 31 07:50:05 crc kubenswrapper[4826]: I0131 07:50:05.030349 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-6jgcp" Jan 31 07:50:05 crc kubenswrapper[4826]: I0131 07:50:05.033048 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-469df" event={"ID":"4ad0581d-4c4f-45b8-b274-cba147fb1f0f","Type":"ContainerStarted","Data":"f072f7d048f6c73e0a88b5dd8227d8adefd03530bb07d60e655b553c4a4b072f"} Jan 31 07:50:05 crc kubenswrapper[4826]: I0131 07:50:05.033172 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-469df" Jan 31 07:50:05 crc kubenswrapper[4826]: I0131 07:50:05.034456 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-fzkjg" event={"ID":"bbc23de1-ea6a-4ec2-acb0-4ce5b7a6260a","Type":"ContainerStarted","Data":"aed1477941205690c8f58c220a22231617e42461db55404333eb47f9122a3d41"} Jan 31 07:50:05 crc kubenswrapper[4826]: I0131 07:50:05.034886 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-fzkjg" Jan 31 07:50:05 crc kubenswrapper[4826]: I0131 07:50:05.036761 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-r6php" event={"ID":"9e29b881-a227-4aef-888c-6676c6cf16b0","Type":"ContainerStarted","Data":"895dcaa30dbae42851f16945768db9ae9ba6d8febb5f3ab93c493291311cd156"} Jan 31 07:50:05 crc kubenswrapper[4826]: I0131 07:50:05.036792 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-bd4z5" Jan 31 07:50:05 crc kubenswrapper[4826]: I0131 
07:50:05.037726 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-fn74b" Jan 31 07:50:05 crc kubenswrapper[4826]: I0131 07:50:05.038195 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-84d775b94d-x84xp" Jan 31 07:50:05 crc kubenswrapper[4826]: I0131 07:50:05.038447 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-r6php" Jan 31 07:50:05 crc kubenswrapper[4826]: I0131 07:50:05.038593 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-qrp6z" Jan 31 07:50:05 crc kubenswrapper[4826]: I0131 07:50:05.049436 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-855qv" podStartSLOduration=10.493883971 podStartE2EDuration="28.049415391s" podCreationTimestamp="2026-01-31 07:49:37 +0000 UTC" firstStartedPulling="2026-01-31 07:49:39.129011333 +0000 UTC m=+810.982897692" lastFinishedPulling="2026-01-31 07:49:56.684542753 +0000 UTC m=+828.538429112" observedRunningTime="2026-01-31 07:50:05.0427383 +0000 UTC m=+836.896624669" watchObservedRunningTime="2026-01-31 07:50:05.049415391 +0000 UTC m=+836.903301750" Jan 31 07:50:05 crc kubenswrapper[4826]: I0131 07:50:05.070660 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-fzkjg" podStartSLOduration=9.953858433 podStartE2EDuration="28.070639955s" podCreationTimestamp="2026-01-31 07:49:37 +0000 UTC" firstStartedPulling="2026-01-31 07:49:38.56772907 +0000 UTC m=+810.421615429" lastFinishedPulling="2026-01-31 07:49:56.684510592 +0000 UTC m=+828.538396951" observedRunningTime="2026-01-31 07:50:05.068157834 +0000 UTC m=+836.922044213" watchObservedRunningTime="2026-01-31 07:50:05.070639955 +0000 UTC m=+836.924526314" Jan 31 07:50:05 crc kubenswrapper[4826]: I0131 07:50:05.093164 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-6jgcp" podStartSLOduration=4.048489202 podStartE2EDuration="28.093142566s" podCreationTimestamp="2026-01-31 07:49:37 +0000 UTC" firstStartedPulling="2026-01-31 07:49:39.206950643 +0000 UTC m=+811.060837002" lastFinishedPulling="2026-01-31 07:50:03.251603967 +0000 UTC m=+835.105490366" observedRunningTime="2026-01-31 07:50:05.088480693 +0000 UTC m=+836.942367052" watchObservedRunningTime="2026-01-31 07:50:05.093142566 +0000 UTC m=+836.947028925" Jan 31 07:50:05 crc kubenswrapper[4826]: I0131 07:50:05.113385 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-qrp6z" podStartSLOduration=4.021555445 podStartE2EDuration="28.113368822s" podCreationTimestamp="2026-01-31 07:49:37 +0000 UTC" firstStartedPulling="2026-01-31 07:49:39.200929981 +0000 UTC m=+811.054816340" lastFinishedPulling="2026-01-31 07:50:03.292743358 +0000 UTC m=+835.146629717" observedRunningTime="2026-01-31 07:50:05.11050683 +0000 UTC m=+836.964393199" watchObservedRunningTime="2026-01-31 07:50:05.113368822 +0000 UTC m=+836.967255181" Jan 31 07:50:05 crc kubenswrapper[4826]: I0131 07:50:05.138498 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-469df" podStartSLOduration=10.677846869 podStartE2EDuration="28.138473267s" podCreationTimestamp="2026-01-31 07:49:37 +0000 UTC" firstStartedPulling="2026-01-31 07:49:38.559885567 +0000 UTC m=+810.413771916" lastFinishedPulling="2026-01-31 07:49:56.020511955 +0000 UTC m=+827.874398314" observedRunningTime="2026-01-31 07:50:05.13088555 +0000 UTC m=+836.984771909" watchObservedRunningTime="2026-01-31 07:50:05.138473267 +0000 UTC m=+836.992359626" Jan 31 07:50:05 crc kubenswrapper[4826]: I0131 07:50:05.164779 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-r6php" podStartSLOduration=4.06143039 podStartE2EDuration="28.164757275s" podCreationTimestamp="2026-01-31 07:49:37 +0000 UTC" firstStartedPulling="2026-01-31 07:49:39.187995113 +0000 UTC m=+811.041881472" lastFinishedPulling="2026-01-31 07:50:03.291321998 +0000 UTC m=+835.145208357" observedRunningTime="2026-01-31 07:50:05.152427844 +0000 UTC m=+837.006314203" watchObservedRunningTime="2026-01-31 07:50:05.164757275 +0000 UTC m=+837.018643634" Jan 31 07:50:05 crc kubenswrapper[4826]: I0131 07:50:05.193362 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-fn74b" podStartSLOduration=4.077431996 podStartE2EDuration="28.193338169s" podCreationTimestamp="2026-01-31 07:49:37 +0000 UTC" firstStartedPulling="2026-01-31 07:49:39.17560657 +0000 UTC m=+811.029492929" lastFinishedPulling="2026-01-31 07:50:03.291512743 +0000 UTC m=+835.145399102" observedRunningTime="2026-01-31 07:50:05.192295279 +0000 UTC m=+837.046181648" watchObservedRunningTime="2026-01-31 07:50:05.193338169 +0000 UTC m=+837.047224588" Jan 31 07:50:05 crc kubenswrapper[4826]: I0131 07:50:05.223882 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-bd4z5" podStartSLOduration=10.270688095 podStartE2EDuration="28.223858898s" podCreationTimestamp="2026-01-31 07:49:37 +0000 UTC" firstStartedPulling="2026-01-31 07:49:38.731298228 +0000 UTC m=+810.585184587" lastFinishedPulling="2026-01-31 07:49:56.684469041 +0000 UTC m=+828.538355390" observedRunningTime="2026-01-31 07:50:05.221336326 +0000 UTC m=+837.075222695" watchObservedRunningTime="2026-01-31 07:50:05.223858898 +0000 UTC m=+837.077745247" Jan 31 07:50:05 crc kubenswrapper[4826]: I0131 07:50:05.256336 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-84d775b94d-x84xp" podStartSLOduration=4.089034457 podStartE2EDuration="28.256307602s" podCreationTimestamp="2026-01-31 07:49:37 +0000 UTC" firstStartedPulling="2026-01-31 07:49:39.195211639 +0000 UTC m=+811.049097998" lastFinishedPulling="2026-01-31 07:50:03.362484784 +0000 UTC m=+835.216371143" observedRunningTime="2026-01-31 07:50:05.246588015 +0000 UTC m=+837.100474374" watchObservedRunningTime="2026-01-31 07:50:05.256307602 +0000 UTC m=+837.110193961" Jan 31 07:50:08 crc kubenswrapper[4826]: I0131 07:50:08.158228 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-qrp6z" Jan 31 07:50:08 crc kubenswrapper[4826]: I0131 07:50:08.199570 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-mhhr4" Jan 31 07:50:08 crc kubenswrapper[4826]: I0131 07:50:08.220336 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-84d775b94d-x84xp" Jan 31 07:50:09 crc kubenswrapper[4826]: I0131 07:50:09.216564 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-tdbcd" event={"ID":"f7d778f6-12f8-4d10-b106-579471ac576f","Type":"ContainerStarted","Data":"55e0222928edcc01e4dcafbdf7640e7ad2944d5166ca4bfa3158f9e7c4cf99fd"} Jan 31 07:50:09 crc kubenswrapper[4826]: I0131 07:50:09.692920 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/724a7dc5-6b24-44fa-a35a-4aea83f023c7-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d7flmn\" (UID: \"724a7dc5-6b24-44fa-a35a-4aea83f023c7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d7flmn" Jan 31 07:50:09 crc kubenswrapper[4826]: I0131 07:50:09.701294 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/724a7dc5-6b24-44fa-a35a-4aea83f023c7-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d7flmn\" (UID: \"724a7dc5-6b24-44fa-a35a-4aea83f023c7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d7flmn" Jan 31 07:50:09 crc kubenswrapper[4826]: I0131 07:50:09.864007 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-p64pd" Jan 31 07:50:09 crc kubenswrapper[4826]: I0131 07:50:09.872545 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d7flmn" Jan 31 07:50:09 crc kubenswrapper[4826]: I0131 07:50:09.895317 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bab047ef-9486-43b9-adad-edaefe7952b9-webhook-certs\") pod \"openstack-operator-controller-manager-656b74655f-lrz2z\" (UID: \"bab047ef-9486-43b9-adad-edaefe7952b9\") " pod="openstack-operators/openstack-operator-controller-manager-656b74655f-lrz2z" Jan 31 07:50:09 crc kubenswrapper[4826]: I0131 07:50:09.901780 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bab047ef-9486-43b9-adad-edaefe7952b9-webhook-certs\") pod \"openstack-operator-controller-manager-656b74655f-lrz2z\" (UID: \"bab047ef-9486-43b9-adad-edaefe7952b9\") " pod="openstack-operators/openstack-operator-controller-manager-656b74655f-lrz2z" Jan 31 07:50:10 crc kubenswrapper[4826]: I0131 07:50:10.140870 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-7n449" Jan 31 07:50:10 crc kubenswrapper[4826]: I0131 07:50:10.149535 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-656b74655f-lrz2z" Jan 31 07:50:10 crc kubenswrapper[4826]: I0131 07:50:10.224211 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-vrsm8" event={"ID":"2ca6db75-d35b-4d27-afb7-45698c422257","Type":"ContainerStarted","Data":"9f96bb3d8b1b0d20f1025b585c441c974b16021c34109a0f5d21f439bbc9ac92"} Jan 31 07:50:10 crc kubenswrapper[4826]: I0131 07:50:10.224348 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-tdbcd" Jan 31 07:50:10 crc kubenswrapper[4826]: I0131 07:50:10.224638 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-vrsm8" Jan 31 07:50:10 crc kubenswrapper[4826]: I0131 07:50:10.258507 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-vrsm8" podStartSLOduration=3.897292928 podStartE2EDuration="33.258481534s" podCreationTimestamp="2026-01-31 07:49:37 +0000 UTC" firstStartedPulling="2026-01-31 07:49:38.561010909 +0000 UTC m=+810.414897268" lastFinishedPulling="2026-01-31 07:50:07.922199515 +0000 UTC m=+839.776085874" observedRunningTime="2026-01-31 07:50:10.247323966 +0000 UTC m=+842.101210325" watchObservedRunningTime="2026-01-31 07:50:10.258481534 +0000 UTC m=+842.112367933" Jan 31 07:50:10 crc kubenswrapper[4826]: I0131 07:50:10.267500 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-tdbcd" podStartSLOduration=27.437165516 podStartE2EDuration="33.267456309s" podCreationTimestamp="2026-01-31 07:49:37 +0000 UTC" firstStartedPulling="2026-01-31 07:50:02.090304727 +0000 UTC m=+833.944191086" lastFinishedPulling="2026-01-31 07:50:07.92059552 +0000 UTC m=+839.774481879" observedRunningTime="2026-01-31 07:50:10.263815956 +0000 UTC m=+842.117702335" watchObservedRunningTime="2026-01-31 07:50:10.267456309 +0000 UTC m=+842.121342668" Jan 31 07:50:13 crc kubenswrapper[4826]: I0131 07:50:13.509612 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-tdbcd" Jan 31 07:50:17 crc kubenswrapper[4826]: I0131 07:50:17.476149 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-fzkjg" Jan 31 07:50:17 crc kubenswrapper[4826]: I0131 07:50:17.496534 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-469df" Jan 31 07:50:17 crc kubenswrapper[4826]: I0131 07:50:17.511626 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-vrsm8" Jan 31 07:50:17 crc kubenswrapper[4826]: I0131 07:50:17.562308 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-855qv" Jan 31 07:50:17 crc kubenswrapper[4826]: I0131 07:50:17.575872 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-bd4z5" Jan 31 07:50:17 crc kubenswrapper[4826]: I0131 07:50:17.587312 4826 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-c6qzg" Jan 31 07:50:17 crc kubenswrapper[4826]: I0131 07:50:17.653533 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-fn74b" Jan 31 07:50:17 crc kubenswrapper[4826]: I0131 07:50:17.755468 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-tmrnj" Jan 31 07:50:18 crc kubenswrapper[4826]: I0131 07:50:18.107637 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-r6php" Jan 31 07:50:18 crc kubenswrapper[4826]: I0131 07:50:18.324432 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-6jgcp" Jan 31 07:50:25 crc kubenswrapper[4826]: I0131 07:50:25.199709 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-754qf"] Jan 31 07:50:25 crc kubenswrapper[4826]: I0131 07:50:25.201601 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-754qf" Jan 31 07:50:25 crc kubenswrapper[4826]: I0131 07:50:25.214624 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-754qf"] Jan 31 07:50:25 crc kubenswrapper[4826]: I0131 07:50:25.330072 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngdxk\" (UniqueName: \"kubernetes.io/projected/e25f6c60-5436-48b6-9f75-06182dbf2916-kube-api-access-ngdxk\") pod \"redhat-marketplace-754qf\" (UID: \"e25f6c60-5436-48b6-9f75-06182dbf2916\") " pod="openshift-marketplace/redhat-marketplace-754qf" Jan 31 07:50:25 crc kubenswrapper[4826]: I0131 07:50:25.330137 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e25f6c60-5436-48b6-9f75-06182dbf2916-catalog-content\") pod \"redhat-marketplace-754qf\" (UID: \"e25f6c60-5436-48b6-9f75-06182dbf2916\") " pod="openshift-marketplace/redhat-marketplace-754qf" Jan 31 07:50:25 crc kubenswrapper[4826]: I0131 07:50:25.330182 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e25f6c60-5436-48b6-9f75-06182dbf2916-utilities\") pod \"redhat-marketplace-754qf\" (UID: \"e25f6c60-5436-48b6-9f75-06182dbf2916\") " pod="openshift-marketplace/redhat-marketplace-754qf" Jan 31 07:50:25 crc kubenswrapper[4826]: I0131 07:50:25.432115 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngdxk\" (UniqueName: \"kubernetes.io/projected/e25f6c60-5436-48b6-9f75-06182dbf2916-kube-api-access-ngdxk\") pod \"redhat-marketplace-754qf\" (UID: \"e25f6c60-5436-48b6-9f75-06182dbf2916\") " pod="openshift-marketplace/redhat-marketplace-754qf" Jan 31 07:50:25 crc kubenswrapper[4826]: I0131 07:50:25.432194 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e25f6c60-5436-48b6-9f75-06182dbf2916-catalog-content\") pod \"redhat-marketplace-754qf\" (UID: \"e25f6c60-5436-48b6-9f75-06182dbf2916\") " 
pod="openshift-marketplace/redhat-marketplace-754qf" Jan 31 07:50:25 crc kubenswrapper[4826]: I0131 07:50:25.432234 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e25f6c60-5436-48b6-9f75-06182dbf2916-utilities\") pod \"redhat-marketplace-754qf\" (UID: \"e25f6c60-5436-48b6-9f75-06182dbf2916\") " pod="openshift-marketplace/redhat-marketplace-754qf" Jan 31 07:50:25 crc kubenswrapper[4826]: I0131 07:50:25.432785 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e25f6c60-5436-48b6-9f75-06182dbf2916-utilities\") pod \"redhat-marketplace-754qf\" (UID: \"e25f6c60-5436-48b6-9f75-06182dbf2916\") " pod="openshift-marketplace/redhat-marketplace-754qf" Jan 31 07:50:25 crc kubenswrapper[4826]: I0131 07:50:25.432915 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e25f6c60-5436-48b6-9f75-06182dbf2916-catalog-content\") pod \"redhat-marketplace-754qf\" (UID: \"e25f6c60-5436-48b6-9f75-06182dbf2916\") " pod="openshift-marketplace/redhat-marketplace-754qf" Jan 31 07:50:25 crc kubenswrapper[4826]: I0131 07:50:25.453368 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngdxk\" (UniqueName: \"kubernetes.io/projected/e25f6c60-5436-48b6-9f75-06182dbf2916-kube-api-access-ngdxk\") pod \"redhat-marketplace-754qf\" (UID: \"e25f6c60-5436-48b6-9f75-06182dbf2916\") " pod="openshift-marketplace/redhat-marketplace-754qf" Jan 31 07:50:25 crc kubenswrapper[4826]: I0131 07:50:25.519307 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-754qf" Jan 31 07:50:28 crc kubenswrapper[4826]: I0131 07:50:28.193029 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4ngjq"] Jan 31 07:50:28 crc kubenswrapper[4826]: I0131 07:50:28.195836 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4ngjq" Jan 31 07:50:28 crc kubenswrapper[4826]: I0131 07:50:28.214167 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4ngjq"] Jan 31 07:50:28 crc kubenswrapper[4826]: I0131 07:50:28.275131 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h88p6\" (UniqueName: \"kubernetes.io/projected/9a12baae-b5d3-443a-8f2a-60ab3c2c3d73-kube-api-access-h88p6\") pod \"community-operators-4ngjq\" (UID: \"9a12baae-b5d3-443a-8f2a-60ab3c2c3d73\") " pod="openshift-marketplace/community-operators-4ngjq" Jan 31 07:50:28 crc kubenswrapper[4826]: I0131 07:50:28.275220 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a12baae-b5d3-443a-8f2a-60ab3c2c3d73-catalog-content\") pod \"community-operators-4ngjq\" (UID: \"9a12baae-b5d3-443a-8f2a-60ab3c2c3d73\") " pod="openshift-marketplace/community-operators-4ngjq" Jan 31 07:50:28 crc kubenswrapper[4826]: I0131 07:50:28.275255 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a12baae-b5d3-443a-8f2a-60ab3c2c3d73-utilities\") pod \"community-operators-4ngjq\" (UID: \"9a12baae-b5d3-443a-8f2a-60ab3c2c3d73\") " pod="openshift-marketplace/community-operators-4ngjq" Jan 31 07:50:28 crc kubenswrapper[4826]: I0131 07:50:28.376920 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a12baae-b5d3-443a-8f2a-60ab3c2c3d73-catalog-content\") pod \"community-operators-4ngjq\" (UID: \"9a12baae-b5d3-443a-8f2a-60ab3c2c3d73\") " pod="openshift-marketplace/community-operators-4ngjq" Jan 31 07:50:28 crc kubenswrapper[4826]: I0131 07:50:28.377313 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a12baae-b5d3-443a-8f2a-60ab3c2c3d73-utilities\") pod \"community-operators-4ngjq\" (UID: \"9a12baae-b5d3-443a-8f2a-60ab3c2c3d73\") " pod="openshift-marketplace/community-operators-4ngjq" Jan 31 07:50:28 crc kubenswrapper[4826]: I0131 07:50:28.377393 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h88p6\" (UniqueName: \"kubernetes.io/projected/9a12baae-b5d3-443a-8f2a-60ab3c2c3d73-kube-api-access-h88p6\") pod \"community-operators-4ngjq\" (UID: \"9a12baae-b5d3-443a-8f2a-60ab3c2c3d73\") " pod="openshift-marketplace/community-operators-4ngjq" Jan 31 07:50:28 crc kubenswrapper[4826]: I0131 07:50:28.377892 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a12baae-b5d3-443a-8f2a-60ab3c2c3d73-utilities\") pod \"community-operators-4ngjq\" (UID: \"9a12baae-b5d3-443a-8f2a-60ab3c2c3d73\") " pod="openshift-marketplace/community-operators-4ngjq" Jan 31 07:50:28 crc kubenswrapper[4826]: I0131 07:50:28.377533 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a12baae-b5d3-443a-8f2a-60ab3c2c3d73-catalog-content\") pod \"community-operators-4ngjq\" (UID: \"9a12baae-b5d3-443a-8f2a-60ab3c2c3d73\") " pod="openshift-marketplace/community-operators-4ngjq" Jan 31 07:50:28 crc kubenswrapper[4826]: I0131 07:50:28.399150 4826 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-h88p6\" (UniqueName: \"kubernetes.io/projected/9a12baae-b5d3-443a-8f2a-60ab3c2c3d73-kube-api-access-h88p6\") pod \"community-operators-4ngjq\" (UID: \"9a12baae-b5d3-443a-8f2a-60ab3c2c3d73\") " pod="openshift-marketplace/community-operators-4ngjq" Jan 31 07:50:28 crc kubenswrapper[4826]: I0131 07:50:28.514260 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4ngjq" Jan 31 07:50:31 crc kubenswrapper[4826]: I0131 07:50:31.185088 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-p6rfl"] Jan 31 07:50:31 crc kubenswrapper[4826]: I0131 07:50:31.187064 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p6rfl" Jan 31 07:50:31 crc kubenswrapper[4826]: I0131 07:50:31.200701 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p6rfl"] Jan 31 07:50:31 crc kubenswrapper[4826]: I0131 07:50:31.316703 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d735d132-571e-4619-9547-ede5670f6859-utilities\") pod \"redhat-operators-p6rfl\" (UID: \"d735d132-571e-4619-9547-ede5670f6859\") " pod="openshift-marketplace/redhat-operators-p6rfl" Jan 31 07:50:31 crc kubenswrapper[4826]: I0131 07:50:31.316782 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v24zs\" (UniqueName: \"kubernetes.io/projected/d735d132-571e-4619-9547-ede5670f6859-kube-api-access-v24zs\") pod \"redhat-operators-p6rfl\" (UID: \"d735d132-571e-4619-9547-ede5670f6859\") " pod="openshift-marketplace/redhat-operators-p6rfl" Jan 31 07:50:31 crc kubenswrapper[4826]: I0131 07:50:31.317014 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d735d132-571e-4619-9547-ede5670f6859-catalog-content\") pod \"redhat-operators-p6rfl\" (UID: \"d735d132-571e-4619-9547-ede5670f6859\") " pod="openshift-marketplace/redhat-operators-p6rfl" Jan 31 07:50:31 crc kubenswrapper[4826]: I0131 07:50:31.419156 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v24zs\" (UniqueName: \"kubernetes.io/projected/d735d132-571e-4619-9547-ede5670f6859-kube-api-access-v24zs\") pod \"redhat-operators-p6rfl\" (UID: \"d735d132-571e-4619-9547-ede5670f6859\") " pod="openshift-marketplace/redhat-operators-p6rfl" Jan 31 07:50:31 crc kubenswrapper[4826]: I0131 07:50:31.419286 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d735d132-571e-4619-9547-ede5670f6859-catalog-content\") pod \"redhat-operators-p6rfl\" (UID: \"d735d132-571e-4619-9547-ede5670f6859\") " pod="openshift-marketplace/redhat-operators-p6rfl" Jan 31 07:50:31 crc kubenswrapper[4826]: I0131 07:50:31.419390 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d735d132-571e-4619-9547-ede5670f6859-utilities\") pod \"redhat-operators-p6rfl\" (UID: \"d735d132-571e-4619-9547-ede5670f6859\") " pod="openshift-marketplace/redhat-operators-p6rfl" Jan 31 07:50:31 crc kubenswrapper[4826]: I0131 07:50:31.419941 4826 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d735d132-571e-4619-9547-ede5670f6859-catalog-content\") pod \"redhat-operators-p6rfl\" (UID: \"d735d132-571e-4619-9547-ede5670f6859\") " pod="openshift-marketplace/redhat-operators-p6rfl" Jan 31 07:50:31 crc kubenswrapper[4826]: I0131 07:50:31.420240 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d735d132-571e-4619-9547-ede5670f6859-utilities\") pod \"redhat-operators-p6rfl\" (UID: \"d735d132-571e-4619-9547-ede5670f6859\") " pod="openshift-marketplace/redhat-operators-p6rfl" Jan 31 07:50:31 crc kubenswrapper[4826]: I0131 07:50:31.436547 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v24zs\" (UniqueName: \"kubernetes.io/projected/d735d132-571e-4619-9547-ede5670f6859-kube-api-access-v24zs\") pod \"redhat-operators-p6rfl\" (UID: \"d735d132-571e-4619-9547-ede5670f6859\") " pod="openshift-marketplace/redhat-operators-p6rfl" Jan 31 07:50:31 crc kubenswrapper[4826]: I0131 07:50:31.507041 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p6rfl" Jan 31 07:50:39 crc kubenswrapper[4826]: I0131 07:50:39.199899 4826 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 07:50:40 crc kubenswrapper[4826]: I0131 07:50:40.227632 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p6rfl"] Jan 31 07:50:40 crc kubenswrapper[4826]: W0131 07:50:40.241908 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd735d132_571e_4619_9547_ede5670f6859.slice/crio-8cff5b266d14384e25e2bffb464c4d43c4975daba9ed0a8f778016541b924d8d WatchSource:0}: Error finding container 8cff5b266d14384e25e2bffb464c4d43c4975daba9ed0a8f778016541b924d8d: Status 404 returned error can't find the container with id 8cff5b266d14384e25e2bffb464c4d43c4975daba9ed0a8f778016541b924d8d Jan 31 07:50:40 crc kubenswrapper[4826]: I0131 07:50:40.250407 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d7flmn"] Jan 31 07:50:40 crc kubenswrapper[4826]: I0131 07:50:40.331959 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-754qf"] Jan 31 07:50:40 crc kubenswrapper[4826]: W0131 07:50:40.333267 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode25f6c60_5436_48b6_9f75_06182dbf2916.slice/crio-0fd5b362f89816886f1d7484d4c2dd82bea07625bf96f9d7246f0966c10ed2f2 WatchSource:0}: Error finding container 0fd5b362f89816886f1d7484d4c2dd82bea07625bf96f9d7246f0966c10ed2f2: Status 404 returned error can't find the container with id 0fd5b362f89816886f1d7484d4c2dd82bea07625bf96f9d7246f0966c10ed2f2 Jan 31 07:50:40 crc kubenswrapper[4826]: I0131 07:50:40.342777 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-656b74655f-lrz2z"] Jan 31 07:50:40 crc kubenswrapper[4826]: I0131 07:50:40.353884 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4ngjq"] Jan 31 07:50:40 crc kubenswrapper[4826]: W0131 07:50:40.366239 4826 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbab047ef_9486_43b9_adad_edaefe7952b9.slice/crio-2fff132913f57748f1f598a7dd8482323c5edbe1542618e9885d4cc53ba563b6 WatchSource:0}: Error finding container 2fff132913f57748f1f598a7dd8482323c5edbe1542618e9885d4cc53ba563b6: Status 404 returned error can't find the container with id 2fff132913f57748f1f598a7dd8482323c5edbe1542618e9885d4cc53ba563b6 Jan 31 07:50:40 crc kubenswrapper[4826]: W0131 07:50:40.379314 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a12baae_b5d3_443a_8f2a_60ab3c2c3d73.slice/crio-ebc7000abdcbe2068eba4679a7f6ffbbbf0306555d28abc7d9ebc4606f0231e3 WatchSource:0}: Error finding container ebc7000abdcbe2068eba4679a7f6ffbbbf0306555d28abc7d9ebc4606f0231e3: Status 404 returned error can't find the container with id ebc7000abdcbe2068eba4679a7f6ffbbbf0306555d28abc7d9ebc4606f0231e3 Jan 31 07:50:40 crc kubenswrapper[4826]: I0131 07:50:40.475425 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4ngjq" event={"ID":"9a12baae-b5d3-443a-8f2a-60ab3c2c3d73","Type":"ContainerStarted","Data":"ebc7000abdcbe2068eba4679a7f6ffbbbf0306555d28abc7d9ebc4606f0231e3"} Jan 31 07:50:40 crc kubenswrapper[4826]: I0131 07:50:40.480387 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d7flmn" event={"ID":"724a7dc5-6b24-44fa-a35a-4aea83f023c7","Type":"ContainerStarted","Data":"292aa5cdae381ddf0dc7171499620c002aa1131cb3970ad2d3c8ccd9c9d89459"} Jan 31 07:50:40 crc kubenswrapper[4826]: I0131 07:50:40.484827 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-7w6rc" event={"ID":"8240a7c4-2e26-46f9-9c1d-0d1d9951c2fb","Type":"ContainerStarted","Data":"d05366f12f21674530cc1414420ac17852a51f3435ecdb337c1fcc4f03629b90"} Jan 31 07:50:40 crc kubenswrapper[4826]: I0131 07:50:40.485651 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-7w6rc" Jan 31 07:50:40 crc kubenswrapper[4826]: I0131 07:50:40.486661 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-754qf" event={"ID":"e25f6c60-5436-48b6-9f75-06182dbf2916","Type":"ContainerStarted","Data":"0fd5b362f89816886f1d7484d4c2dd82bea07625bf96f9d7246f0966c10ed2f2"} Jan 31 07:50:40 crc kubenswrapper[4826]: I0131 07:50:40.487839 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-k9b2p" event={"ID":"516edc1f-8934-408f-a3f2-15e35f0de6bc","Type":"ContainerStarted","Data":"bf5c9358a71bb0dcaefe0e6c568d7856562a48d8eb8fe1e6c26b8f2141b4c831"} Jan 31 07:50:40 crc kubenswrapper[4826]: I0131 07:50:40.488705 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p6rfl" event={"ID":"d735d132-571e-4619-9547-ede5670f6859","Type":"ContainerStarted","Data":"8cff5b266d14384e25e2bffb464c4d43c4975daba9ed0a8f778016541b924d8d"} Jan 31 07:50:40 crc kubenswrapper[4826]: I0131 07:50:40.489527 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-656b74655f-lrz2z" event={"ID":"bab047ef-9486-43b9-adad-edaefe7952b9","Type":"ContainerStarted","Data":"2fff132913f57748f1f598a7dd8482323c5edbe1542618e9885d4cc53ba563b6"} Jan 31 07:50:40 
crc kubenswrapper[4826]: I0131 07:50:40.500333 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-fbkx2" event={"ID":"19e5f098-b188-4252-8e1f-8db1f38dbb75","Type":"ContainerStarted","Data":"41de8c85bf521060d11613a92699e597c5756879609a859da894926c18902865"} Jan 31 07:50:40 crc kubenswrapper[4826]: I0131 07:50:40.501148 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-fbkx2" Jan 31 07:50:40 crc kubenswrapper[4826]: I0131 07:50:40.530883 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-7w6rc" podStartSLOduration=2.973666928 podStartE2EDuration="1m3.530867726s" podCreationTimestamp="2026-01-31 07:49:37 +0000 UTC" firstStartedPulling="2026-01-31 07:49:39.175161958 +0000 UTC m=+811.029048317" lastFinishedPulling="2026-01-31 07:50:39.732362756 +0000 UTC m=+871.586249115" observedRunningTime="2026-01-31 07:50:40.517034016 +0000 UTC m=+872.370920375" watchObservedRunningTime="2026-01-31 07:50:40.530867726 +0000 UTC m=+872.384754085" Jan 31 07:50:40 crc kubenswrapper[4826]: I0131 07:50:40.532957 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-fbkx2" podStartSLOduration=2.939851022 podStartE2EDuration="1m3.532951103s" podCreationTimestamp="2026-01-31 07:49:37 +0000 UTC" firstStartedPulling="2026-01-31 07:49:39.128850739 +0000 UTC m=+810.982737098" lastFinishedPulling="2026-01-31 07:50:39.7219508 +0000 UTC m=+871.575837179" observedRunningTime="2026-01-31 07:50:40.530519316 +0000 UTC m=+872.384405675" watchObservedRunningTime="2026-01-31 07:50:40.532951103 +0000 UTC m=+872.386837462" Jan 31 07:50:41 crc kubenswrapper[4826]: I0131 07:50:41.513526 4826 generic.go:334] "Generic (PLEG): container finished" podID="9a12baae-b5d3-443a-8f2a-60ab3c2c3d73" containerID="377266a867018bae900a34098083d58a9bfbac51c11061da9d14a7bac20c86bc" exitCode=0 Jan 31 07:50:41 crc kubenswrapper[4826]: I0131 07:50:41.513645 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4ngjq" event={"ID":"9a12baae-b5d3-443a-8f2a-60ab3c2c3d73","Type":"ContainerDied","Data":"377266a867018bae900a34098083d58a9bfbac51c11061da9d14a7bac20c86bc"} Jan 31 07:50:41 crc kubenswrapper[4826]: I0131 07:50:41.515763 4826 generic.go:334] "Generic (PLEG): container finished" podID="e25f6c60-5436-48b6-9f75-06182dbf2916" containerID="bd824d85392286a94949ffb5c6abfde0e3fbed71aceb2332fd95791b20cd5134" exitCode=0 Jan 31 07:50:41 crc kubenswrapper[4826]: I0131 07:50:41.515863 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-754qf" event={"ID":"e25f6c60-5436-48b6-9f75-06182dbf2916","Type":"ContainerDied","Data":"bd824d85392286a94949ffb5c6abfde0e3fbed71aceb2332fd95791b20cd5134"} Jan 31 07:50:41 crc kubenswrapper[4826]: I0131 07:50:41.517355 4826 generic.go:334] "Generic (PLEG): container finished" podID="d735d132-571e-4619-9547-ede5670f6859" containerID="2ef0a26ce861d6f85f6dfa57fc0439af5a1a03288de7ce27c571fc23c397a2fc" exitCode=0 Jan 31 07:50:41 crc kubenswrapper[4826]: I0131 07:50:41.517385 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p6rfl" 
event={"ID":"d735d132-571e-4619-9547-ede5670f6859","Type":"ContainerDied","Data":"2ef0a26ce861d6f85f6dfa57fc0439af5a1a03288de7ce27c571fc23c397a2fc"} Jan 31 07:50:41 crc kubenswrapper[4826]: I0131 07:50:41.518896 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-656b74655f-lrz2z" event={"ID":"bab047ef-9486-43b9-adad-edaefe7952b9","Type":"ContainerStarted","Data":"acddb8610b149abca0b500d7d09d3c6a7ee358f275b1d47cc8a00e39c3aa3ede"} Jan 31 07:50:41 crc kubenswrapper[4826]: I0131 07:50:41.519719 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-k9b2p" Jan 31 07:50:41 crc kubenswrapper[4826]: I0131 07:50:41.519757 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-656b74655f-lrz2z" Jan 31 07:50:41 crc kubenswrapper[4826]: I0131 07:50:41.583271 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-k9b2p" podStartSLOduration=4.002891262 podStartE2EDuration="1m4.583250605s" podCreationTimestamp="2026-01-31 07:49:37 +0000 UTC" firstStartedPulling="2026-01-31 07:49:39.156426854 +0000 UTC m=+811.010313213" lastFinishedPulling="2026-01-31 07:50:39.736786187 +0000 UTC m=+871.590672556" observedRunningTime="2026-01-31 07:50:41.582620568 +0000 UTC m=+873.436506937" watchObservedRunningTime="2026-01-31 07:50:41.583250605 +0000 UTC m=+873.437136974" Jan 31 07:50:41 crc kubenswrapper[4826]: I0131 07:50:41.625402 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-656b74655f-lrz2z" podStartSLOduration=64.625378832 podStartE2EDuration="1m4.625378832s" podCreationTimestamp="2026-01-31 07:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:50:41.619648754 +0000 UTC m=+873.473535113" watchObservedRunningTime="2026-01-31 07:50:41.625378832 +0000 UTC m=+873.479265191" Jan 31 07:50:43 crc kubenswrapper[4826]: I0131 07:50:43.534775 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-gt2wd" event={"ID":"999edaa2-f097-4789-a458-b309a42124a5","Type":"ContainerStarted","Data":"7da95ac2619d1913b45e1df29fd363b8dd53d6150289b8defe1c523df206fdc9"} Jan 31 07:50:43 crc kubenswrapper[4826]: I0131 07:50:43.536260 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-p2jb4" event={"ID":"8b2f20a7-2570-4a15-b86c-bdfdbd69c529","Type":"ContainerStarted","Data":"8d1cb2180aaca410d4a72a7f88a6ebebae5298da0c26a4f851941835964dad67"} Jan 31 07:50:43 crc kubenswrapper[4826]: I0131 07:50:43.538111 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-x226q" event={"ID":"fbd1ba04-6613-4a12-9009-088ebba2e643","Type":"ContainerStarted","Data":"a910de54de07b5ecad546cdfcca591fa3df24dbaa9a9176f260090818b67bf7b"} Jan 31 07:50:44 crc kubenswrapper[4826]: I0131 07:50:44.546023 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-gt2wd" Jan 31 07:50:44 crc kubenswrapper[4826]: I0131 07:50:44.570002 4826 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-p2jb4" podStartSLOduration=3.897011498 podStartE2EDuration="1m7.569983354s" podCreationTimestamp="2026-01-31 07:49:37 +0000 UTC" firstStartedPulling="2026-01-31 07:49:39.129356923 +0000 UTC m=+810.983243282" lastFinishedPulling="2026-01-31 07:50:42.802328779 +0000 UTC m=+874.656215138" observedRunningTime="2026-01-31 07:50:44.566032935 +0000 UTC m=+876.419919284" watchObservedRunningTime="2026-01-31 07:50:44.569983354 +0000 UTC m=+876.423869723" Jan 31 07:50:44 crc kubenswrapper[4826]: I0131 07:50:44.596603 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-gt2wd" podStartSLOduration=3.922955138 podStartE2EDuration="1m7.596585214s" podCreationTimestamp="2026-01-31 07:49:37 +0000 UTC" firstStartedPulling="2026-01-31 07:49:39.126949685 +0000 UTC m=+810.980836044" lastFinishedPulling="2026-01-31 07:50:42.800579761 +0000 UTC m=+874.654466120" observedRunningTime="2026-01-31 07:50:44.590349523 +0000 UTC m=+876.444235882" watchObservedRunningTime="2026-01-31 07:50:44.596585214 +0000 UTC m=+876.450471573" Jan 31 07:50:44 crc kubenswrapper[4826]: I0131 07:50:44.608478 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-x226q" podStartSLOduration=3.905083966 podStartE2EDuration="1m7.60846264s" podCreationTimestamp="2026-01-31 07:49:37 +0000 UTC" firstStartedPulling="2026-01-31 07:49:39.09764564 +0000 UTC m=+810.951531999" lastFinishedPulling="2026-01-31 07:50:42.801024314 +0000 UTC m=+874.654910673" observedRunningTime="2026-01-31 07:50:44.605724665 +0000 UTC m=+876.459611034" watchObservedRunningTime="2026-01-31 07:50:44.60846264 +0000 UTC m=+876.462348989" Jan 31 07:50:45 crc kubenswrapper[4826]: I0131 07:50:45.448151 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bn475"] Jan 31 07:50:45 crc kubenswrapper[4826]: I0131 07:50:45.450345 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bn475" Jan 31 07:50:45 crc kubenswrapper[4826]: I0131 07:50:45.464654 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bn475"] Jan 31 07:50:45 crc kubenswrapper[4826]: I0131 07:50:45.548625 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86432747-dd49-4a23-b241-df73fa3dc3b7-catalog-content\") pod \"certified-operators-bn475\" (UID: \"86432747-dd49-4a23-b241-df73fa3dc3b7\") " pod="openshift-marketplace/certified-operators-bn475" Jan 31 07:50:45 crc kubenswrapper[4826]: I0131 07:50:45.549191 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpqfs\" (UniqueName: \"kubernetes.io/projected/86432747-dd49-4a23-b241-df73fa3dc3b7-kube-api-access-dpqfs\") pod \"certified-operators-bn475\" (UID: \"86432747-dd49-4a23-b241-df73fa3dc3b7\") " pod="openshift-marketplace/certified-operators-bn475" Jan 31 07:50:45 crc kubenswrapper[4826]: I0131 07:50:45.549328 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86432747-dd49-4a23-b241-df73fa3dc3b7-utilities\") pod \"certified-operators-bn475\" (UID: \"86432747-dd49-4a23-b241-df73fa3dc3b7\") " pod="openshift-marketplace/certified-operators-bn475" Jan 31 07:50:45 crc kubenswrapper[4826]: I0131 07:50:45.651231 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpqfs\" (UniqueName: \"kubernetes.io/projected/86432747-dd49-4a23-b241-df73fa3dc3b7-kube-api-access-dpqfs\") pod \"certified-operators-bn475\" (UID: \"86432747-dd49-4a23-b241-df73fa3dc3b7\") " pod="openshift-marketplace/certified-operators-bn475" Jan 31 07:50:45 crc kubenswrapper[4826]: I0131 07:50:45.651319 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86432747-dd49-4a23-b241-df73fa3dc3b7-utilities\") pod \"certified-operators-bn475\" (UID: \"86432747-dd49-4a23-b241-df73fa3dc3b7\") " pod="openshift-marketplace/certified-operators-bn475" Jan 31 07:50:45 crc kubenswrapper[4826]: I0131 07:50:45.651438 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86432747-dd49-4a23-b241-df73fa3dc3b7-catalog-content\") pod \"certified-operators-bn475\" (UID: \"86432747-dd49-4a23-b241-df73fa3dc3b7\") " pod="openshift-marketplace/certified-operators-bn475" Jan 31 07:50:45 crc kubenswrapper[4826]: I0131 07:50:45.652112 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86432747-dd49-4a23-b241-df73fa3dc3b7-utilities\") pod \"certified-operators-bn475\" (UID: \"86432747-dd49-4a23-b241-df73fa3dc3b7\") " pod="openshift-marketplace/certified-operators-bn475" Jan 31 07:50:45 crc kubenswrapper[4826]: I0131 07:50:45.652148 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86432747-dd49-4a23-b241-df73fa3dc3b7-catalog-content\") pod \"certified-operators-bn475\" (UID: \"86432747-dd49-4a23-b241-df73fa3dc3b7\") " pod="openshift-marketplace/certified-operators-bn475" Jan 31 07:50:45 crc kubenswrapper[4826]: I0131 07:50:45.682699 4826 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dpqfs\" (UniqueName: \"kubernetes.io/projected/86432747-dd49-4a23-b241-df73fa3dc3b7-kube-api-access-dpqfs\") pod \"certified-operators-bn475\" (UID: \"86432747-dd49-4a23-b241-df73fa3dc3b7\") " pod="openshift-marketplace/certified-operators-bn475" Jan 31 07:50:45 crc kubenswrapper[4826]: I0131 07:50:45.771046 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bn475" Jan 31 07:50:47 crc kubenswrapper[4826]: I0131 07:50:47.087559 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bn475"] Jan 31 07:50:47 crc kubenswrapper[4826]: I0131 07:50:47.580815 4826 generic.go:334] "Generic (PLEG): container finished" podID="9a12baae-b5d3-443a-8f2a-60ab3c2c3d73" containerID="9abc4ea93a3d10aa3ff61a0886d2c43299608c2bac321119003a712f791ed32e" exitCode=0 Jan 31 07:50:47 crc kubenswrapper[4826]: I0131 07:50:47.581084 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4ngjq" event={"ID":"9a12baae-b5d3-443a-8f2a-60ab3c2c3d73","Type":"ContainerDied","Data":"9abc4ea93a3d10aa3ff61a0886d2c43299608c2bac321119003a712f791ed32e"} Jan 31 07:50:48 crc kubenswrapper[4826]: I0131 07:50:47.584192 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d7flmn" event={"ID":"724a7dc5-6b24-44fa-a35a-4aea83f023c7","Type":"ContainerStarted","Data":"6554b904d7a2981fe3bd9d699c6f8daa879cf25c75ccaea53c511e53833a24d9"} Jan 31 07:50:48 crc kubenswrapper[4826]: I0131 07:50:47.584799 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d7flmn" Jan 31 07:50:48 crc kubenswrapper[4826]: I0131 07:50:47.588398 4826 generic.go:334] "Generic (PLEG): container finished" podID="e25f6c60-5436-48b6-9f75-06182dbf2916" containerID="6f836318ae86c79f50facd8ff0648d1e6a3658fd03b4efb3b7bf2ccdc61e3611" exitCode=0 Jan 31 07:50:48 crc kubenswrapper[4826]: I0131 07:50:47.588486 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-754qf" event={"ID":"e25f6c60-5436-48b6-9f75-06182dbf2916","Type":"ContainerDied","Data":"6f836318ae86c79f50facd8ff0648d1e6a3658fd03b4efb3b7bf2ccdc61e3611"} Jan 31 07:50:48 crc kubenswrapper[4826]: I0131 07:50:47.591465 4826 generic.go:334] "Generic (PLEG): container finished" podID="86432747-dd49-4a23-b241-df73fa3dc3b7" containerID="2ed9a0aaa4191c59134f887c0490894b4a4998462b70b72414ef9aa0f527312c" exitCode=0 Jan 31 07:50:48 crc kubenswrapper[4826]: I0131 07:50:47.591520 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bn475" event={"ID":"86432747-dd49-4a23-b241-df73fa3dc3b7","Type":"ContainerDied","Data":"2ed9a0aaa4191c59134f887c0490894b4a4998462b70b72414ef9aa0f527312c"} Jan 31 07:50:48 crc kubenswrapper[4826]: I0131 07:50:47.591555 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bn475" event={"ID":"86432747-dd49-4a23-b241-df73fa3dc3b7","Type":"ContainerStarted","Data":"411e50dfa5ceda85ac2f3a6be0662475fa9aa61d49cd701ce033a756d2902235"} Jan 31 07:50:48 crc kubenswrapper[4826]: I0131 07:50:47.602463 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p6rfl" 
event={"ID":"d735d132-571e-4619-9547-ede5670f6859","Type":"ContainerStarted","Data":"0c38a165619885fa9d9f5af841266c4cf074bd37ac497e8cf2b7398c94f0928c"} Jan 31 07:50:48 crc kubenswrapper[4826]: I0131 07:50:47.644494 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-fbkx2" Jan 31 07:50:48 crc kubenswrapper[4826]: I0131 07:50:47.645311 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d7flmn" podStartSLOduration=64.301134251 podStartE2EDuration="1m10.645296404s" podCreationTimestamp="2026-01-31 07:49:37 +0000 UTC" firstStartedPulling="2026-01-31 07:50:40.265658206 +0000 UTC m=+872.119544565" lastFinishedPulling="2026-01-31 07:50:46.609820339 +0000 UTC m=+878.463706718" observedRunningTime="2026-01-31 07:50:47.638005384 +0000 UTC m=+879.491891753" watchObservedRunningTime="2026-01-31 07:50:47.645296404 +0000 UTC m=+879.499182763" Jan 31 07:50:48 crc kubenswrapper[4826]: I0131 07:50:47.704616 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-k9b2p" Jan 31 07:50:48 crc kubenswrapper[4826]: I0131 07:50:47.752344 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-x226q" Jan 31 07:50:48 crc kubenswrapper[4826]: I0131 07:50:47.935562 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-p2jb4" Jan 31 07:50:48 crc kubenswrapper[4826]: I0131 07:50:47.973221 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-7w6rc" Jan 31 07:50:48 crc kubenswrapper[4826]: I0131 07:50:48.611197 4826 generic.go:334] "Generic (PLEG): container finished" podID="d735d132-571e-4619-9547-ede5670f6859" containerID="0c38a165619885fa9d9f5af841266c4cf074bd37ac497e8cf2b7398c94f0928c" exitCode=0 Jan 31 07:50:48 crc kubenswrapper[4826]: I0131 07:50:48.612085 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p6rfl" event={"ID":"d735d132-571e-4619-9547-ede5670f6859","Type":"ContainerDied","Data":"0c38a165619885fa9d9f5af841266c4cf074bd37ac497e8cf2b7398c94f0928c"} Jan 31 07:50:49 crc kubenswrapper[4826]: I0131 07:50:49.621572 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4ngjq" event={"ID":"9a12baae-b5d3-443a-8f2a-60ab3c2c3d73","Type":"ContainerStarted","Data":"a20b9d0e74fb8a37c3da3e64463541f91ce5295439cce9183933f51d4cdcad82"} Jan 31 07:50:49 crc kubenswrapper[4826]: I0131 07:50:49.643502 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4ngjq" podStartSLOduration=15.569885821 podStartE2EDuration="21.643472046s" podCreationTimestamp="2026-01-31 07:50:28 +0000 UTC" firstStartedPulling="2026-01-31 07:50:42.716315989 +0000 UTC m=+874.570202388" lastFinishedPulling="2026-01-31 07:50:48.789902254 +0000 UTC m=+880.643788613" observedRunningTime="2026-01-31 07:50:49.638131459 +0000 UTC m=+881.492017818" watchObservedRunningTime="2026-01-31 07:50:49.643472046 +0000 UTC m=+881.497358405" Jan 31 07:50:50 crc kubenswrapper[4826]: I0131 07:50:50.160338 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/openstack-operator-controller-manager-656b74655f-lrz2z" Jan 31 07:50:50 crc kubenswrapper[4826]: I0131 07:50:50.629501 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-754qf" event={"ID":"e25f6c60-5436-48b6-9f75-06182dbf2916","Type":"ContainerStarted","Data":"62aa017099afd58085c1386a0e4a2c60ca997c12a01c29c3df84a4b6dd73a91b"} Jan 31 07:50:50 crc kubenswrapper[4826]: I0131 07:50:50.632665 4826 generic.go:334] "Generic (PLEG): container finished" podID="86432747-dd49-4a23-b241-df73fa3dc3b7" containerID="0a62c2652ad2b91db713650c1939d283a4cbb5a94de323b7ac0dddb3d8f8f876" exitCode=0 Jan 31 07:50:50 crc kubenswrapper[4826]: I0131 07:50:50.632751 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bn475" event={"ID":"86432747-dd49-4a23-b241-df73fa3dc3b7","Type":"ContainerDied","Data":"0a62c2652ad2b91db713650c1939d283a4cbb5a94de323b7ac0dddb3d8f8f876"} Jan 31 07:50:50 crc kubenswrapper[4826]: I0131 07:50:50.638702 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p6rfl" event={"ID":"d735d132-571e-4619-9547-ede5670f6859","Type":"ContainerStarted","Data":"4adae8783407fb33b5449fb3827bc51fa2679334ca819ae1e3f87a3c9ebeb9c1"} Jan 31 07:50:50 crc kubenswrapper[4826]: I0131 07:50:50.706161 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-754qf" podStartSLOduration=18.575440463 podStartE2EDuration="25.706140847s" podCreationTimestamp="2026-01-31 07:50:25 +0000 UTC" firstStartedPulling="2026-01-31 07:50:42.714949422 +0000 UTC m=+874.568835801" lastFinishedPulling="2026-01-31 07:50:49.845649826 +0000 UTC m=+881.699536185" observedRunningTime="2026-01-31 07:50:50.674311053 +0000 UTC m=+882.528197422" watchObservedRunningTime="2026-01-31 07:50:50.706140847 +0000 UTC m=+882.560027206" Jan 31 07:50:50 crc kubenswrapper[4826]: I0131 07:50:50.707818 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-p6rfl" podStartSLOduration=12.41068287 podStartE2EDuration="19.707805942s" podCreationTimestamp="2026-01-31 07:50:31 +0000 UTC" firstStartedPulling="2026-01-31 07:50:42.715012503 +0000 UTC m=+874.568898902" lastFinishedPulling="2026-01-31 07:50:50.012135615 +0000 UTC m=+881.866021974" observedRunningTime="2026-01-31 07:50:50.70408147 +0000 UTC m=+882.557967839" watchObservedRunningTime="2026-01-31 07:50:50.707805942 +0000 UTC m=+882.561692301" Jan 31 07:50:51 crc kubenswrapper[4826]: I0131 07:50:51.508082 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-p6rfl" Jan 31 07:50:51 crc kubenswrapper[4826]: I0131 07:50:51.508654 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-p6rfl" Jan 31 07:50:51 crc kubenswrapper[4826]: I0131 07:50:51.649915 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bn475" event={"ID":"86432747-dd49-4a23-b241-df73fa3dc3b7","Type":"ContainerStarted","Data":"526879807c88e311aa1928f7ea125d9b654a15e3d34117fcbff1eef9da409a01"} Jan 31 07:50:51 crc kubenswrapper[4826]: I0131 07:50:51.671946 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bn475" podStartSLOduration=3.197568435 podStartE2EDuration="6.671926159s" podCreationTimestamp="2026-01-31 
07:50:45 +0000 UTC" firstStartedPulling="2026-01-31 07:50:47.593027689 +0000 UTC m=+879.446914048" lastFinishedPulling="2026-01-31 07:50:51.067385413 +0000 UTC m=+882.921271772" observedRunningTime="2026-01-31 07:50:51.669565134 +0000 UTC m=+883.523451493" watchObservedRunningTime="2026-01-31 07:50:51.671926159 +0000 UTC m=+883.525812528" Jan 31 07:50:52 crc kubenswrapper[4826]: I0131 07:50:52.596267 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p6rfl" podUID="d735d132-571e-4619-9547-ede5670f6859" containerName="registry-server" probeResult="failure" output=< Jan 31 07:50:52 crc kubenswrapper[4826]: timeout: failed to connect service ":50051" within 1s Jan 31 07:50:52 crc kubenswrapper[4826]: > Jan 31 07:50:55 crc kubenswrapper[4826]: I0131 07:50:55.519619 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-754qf" Jan 31 07:50:55 crc kubenswrapper[4826]: I0131 07:50:55.520005 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-754qf" Jan 31 07:50:55 crc kubenswrapper[4826]: I0131 07:50:55.562171 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-754qf" Jan 31 07:50:55 crc kubenswrapper[4826]: I0131 07:50:55.746169 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-754qf" Jan 31 07:50:55 crc kubenswrapper[4826]: I0131 07:50:55.772751 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bn475" Jan 31 07:50:55 crc kubenswrapper[4826]: I0131 07:50:55.772809 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bn475" Jan 31 07:50:55 crc kubenswrapper[4826]: I0131 07:50:55.811797 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bn475" Jan 31 07:50:56 crc kubenswrapper[4826]: I0131 07:50:56.747811 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bn475" Jan 31 07:50:57 crc kubenswrapper[4826]: I0131 07:50:57.643377 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bn475"] Jan 31 07:50:57 crc kubenswrapper[4826]: I0131 07:50:57.756390 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-x226q" Jan 31 07:50:57 crc kubenswrapper[4826]: I0131 07:50:57.837269 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-754qf"] Jan 31 07:50:57 crc kubenswrapper[4826]: I0131 07:50:57.837721 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-754qf" podUID="e25f6c60-5436-48b6-9f75-06182dbf2916" containerName="registry-server" containerID="cri-o://62aa017099afd58085c1386a0e4a2c60ca997c12a01c29c3df84a4b6dd73a91b" gracePeriod=2 Jan 31 07:50:57 crc kubenswrapper[4826]: I0131 07:50:57.938495 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-p2jb4" Jan 31 07:50:57 crc kubenswrapper[4826]: I0131 07:50:57.999179 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/ovn-operator-controller-manager-788c46999f-gt2wd" Jan 31 07:50:58 crc kubenswrapper[4826]: I0131 07:50:58.515350 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4ngjq" Jan 31 07:50:58 crc kubenswrapper[4826]: I0131 07:50:58.515403 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4ngjq" Jan 31 07:50:58 crc kubenswrapper[4826]: I0131 07:50:58.555041 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4ngjq" Jan 31 07:50:58 crc kubenswrapper[4826]: I0131 07:50:58.714908 4826 generic.go:334] "Generic (PLEG): container finished" podID="e25f6c60-5436-48b6-9f75-06182dbf2916" containerID="62aa017099afd58085c1386a0e4a2c60ca997c12a01c29c3df84a4b6dd73a91b" exitCode=0 Jan 31 07:50:58 crc kubenswrapper[4826]: I0131 07:50:58.714955 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-754qf" event={"ID":"e25f6c60-5436-48b6-9f75-06182dbf2916","Type":"ContainerDied","Data":"62aa017099afd58085c1386a0e4a2c60ca997c12a01c29c3df84a4b6dd73a91b"} Jan 31 07:50:58 crc kubenswrapper[4826]: I0131 07:50:58.715177 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bn475" podUID="86432747-dd49-4a23-b241-df73fa3dc3b7" containerName="registry-server" containerID="cri-o://526879807c88e311aa1928f7ea125d9b654a15e3d34117fcbff1eef9da409a01" gracePeriod=2 Jan 31 07:50:58 crc kubenswrapper[4826]: I0131 07:50:58.763960 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4ngjq" Jan 31 07:50:58 crc kubenswrapper[4826]: I0131 07:50:58.922643 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-754qf" Jan 31 07:50:59 crc kubenswrapper[4826]: I0131 07:50:59.071627 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e25f6c60-5436-48b6-9f75-06182dbf2916-utilities\") pod \"e25f6c60-5436-48b6-9f75-06182dbf2916\" (UID: \"e25f6c60-5436-48b6-9f75-06182dbf2916\") " Jan 31 07:50:59 crc kubenswrapper[4826]: I0131 07:50:59.071694 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e25f6c60-5436-48b6-9f75-06182dbf2916-catalog-content\") pod \"e25f6c60-5436-48b6-9f75-06182dbf2916\" (UID: \"e25f6c60-5436-48b6-9f75-06182dbf2916\") " Jan 31 07:50:59 crc kubenswrapper[4826]: I0131 07:50:59.071723 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngdxk\" (UniqueName: \"kubernetes.io/projected/e25f6c60-5436-48b6-9f75-06182dbf2916-kube-api-access-ngdxk\") pod \"e25f6c60-5436-48b6-9f75-06182dbf2916\" (UID: \"e25f6c60-5436-48b6-9f75-06182dbf2916\") " Jan 31 07:50:59 crc kubenswrapper[4826]: I0131 07:50:59.072369 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e25f6c60-5436-48b6-9f75-06182dbf2916-utilities" (OuterVolumeSpecName: "utilities") pod "e25f6c60-5436-48b6-9f75-06182dbf2916" (UID: "e25f6c60-5436-48b6-9f75-06182dbf2916"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:50:59 crc kubenswrapper[4826]: I0131 07:50:59.082487 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e25f6c60-5436-48b6-9f75-06182dbf2916-kube-api-access-ngdxk" (OuterVolumeSpecName: "kube-api-access-ngdxk") pod "e25f6c60-5436-48b6-9f75-06182dbf2916" (UID: "e25f6c60-5436-48b6-9f75-06182dbf2916"). InnerVolumeSpecName "kube-api-access-ngdxk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:50:59 crc kubenswrapper[4826]: I0131 07:50:59.106363 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e25f6c60-5436-48b6-9f75-06182dbf2916-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e25f6c60-5436-48b6-9f75-06182dbf2916" (UID: "e25f6c60-5436-48b6-9f75-06182dbf2916"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:50:59 crc kubenswrapper[4826]: I0131 07:50:59.173765 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e25f6c60-5436-48b6-9f75-06182dbf2916-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 07:50:59 crc kubenswrapper[4826]: I0131 07:50:59.173798 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngdxk\" (UniqueName: \"kubernetes.io/projected/e25f6c60-5436-48b6-9f75-06182dbf2916-kube-api-access-ngdxk\") on node \"crc\" DevicePath \"\"" Jan 31 07:50:59 crc kubenswrapper[4826]: I0131 07:50:59.173828 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e25f6c60-5436-48b6-9f75-06182dbf2916-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 07:50:59 crc kubenswrapper[4826]: I0131 07:50:59.704856 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bn475" Jan 31 07:50:59 crc kubenswrapper[4826]: I0131 07:50:59.727131 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-754qf" event={"ID":"e25f6c60-5436-48b6-9f75-06182dbf2916","Type":"ContainerDied","Data":"0fd5b362f89816886f1d7484d4c2dd82bea07625bf96f9d7246f0966c10ed2f2"} Jan 31 07:50:59 crc kubenswrapper[4826]: I0131 07:50:59.727196 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-754qf" Jan 31 07:50:59 crc kubenswrapper[4826]: I0131 07:50:59.727204 4826 scope.go:117] "RemoveContainer" containerID="62aa017099afd58085c1386a0e4a2c60ca997c12a01c29c3df84a4b6dd73a91b" Jan 31 07:50:59 crc kubenswrapper[4826]: I0131 07:50:59.739489 4826 generic.go:334] "Generic (PLEG): container finished" podID="86432747-dd49-4a23-b241-df73fa3dc3b7" containerID="526879807c88e311aa1928f7ea125d9b654a15e3d34117fcbff1eef9da409a01" exitCode=0 Jan 31 07:50:59 crc kubenswrapper[4826]: I0131 07:50:59.739571 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bn475" Jan 31 07:50:59 crc kubenswrapper[4826]: I0131 07:50:59.739574 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bn475" event={"ID":"86432747-dd49-4a23-b241-df73fa3dc3b7","Type":"ContainerDied","Data":"526879807c88e311aa1928f7ea125d9b654a15e3d34117fcbff1eef9da409a01"} Jan 31 07:50:59 crc kubenswrapper[4826]: I0131 07:50:59.739621 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bn475" event={"ID":"86432747-dd49-4a23-b241-df73fa3dc3b7","Type":"ContainerDied","Data":"411e50dfa5ceda85ac2f3a6be0662475fa9aa61d49cd701ce033a756d2902235"} Jan 31 07:50:59 crc kubenswrapper[4826]: I0131 07:50:59.759337 4826 scope.go:117] "RemoveContainer" containerID="6f836318ae86c79f50facd8ff0648d1e6a3658fd03b4efb3b7bf2ccdc61e3611" Jan 31 07:50:59 crc kubenswrapper[4826]: I0131 07:50:59.767020 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-754qf"] Jan 31 07:50:59 crc kubenswrapper[4826]: I0131 07:50:59.773562 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-754qf"] Jan 31 07:50:59 crc kubenswrapper[4826]: I0131 07:50:59.782127 4826 scope.go:117] "RemoveContainer" containerID="bd824d85392286a94949ffb5c6abfde0e3fbed71aceb2332fd95791b20cd5134" Jan 31 07:50:59 crc kubenswrapper[4826]: I0131 07:50:59.821941 4826 scope.go:117] "RemoveContainer" containerID="526879807c88e311aa1928f7ea125d9b654a15e3d34117fcbff1eef9da409a01" Jan 31 07:50:59 crc kubenswrapper[4826]: I0131 07:50:59.837056 4826 scope.go:117] "RemoveContainer" containerID="0a62c2652ad2b91db713650c1939d283a4cbb5a94de323b7ac0dddb3d8f8f876" Jan 31 07:50:59 crc kubenswrapper[4826]: I0131 07:50:59.865154 4826 scope.go:117] "RemoveContainer" containerID="2ed9a0aaa4191c59134f887c0490894b4a4998462b70b72414ef9aa0f527312c" Jan 31 07:50:59 crc kubenswrapper[4826]: I0131 07:50:59.877630 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d7flmn" Jan 31 07:50:59 crc kubenswrapper[4826]: I0131 07:50:59.882881 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86432747-dd49-4a23-b241-df73fa3dc3b7-catalog-content\") pod \"86432747-dd49-4a23-b241-df73fa3dc3b7\" (UID: \"86432747-dd49-4a23-b241-df73fa3dc3b7\") " Jan 31 07:50:59 crc kubenswrapper[4826]: I0131 07:50:59.883295 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpqfs\" (UniqueName: \"kubernetes.io/projected/86432747-dd49-4a23-b241-df73fa3dc3b7-kube-api-access-dpqfs\") pod \"86432747-dd49-4a23-b241-df73fa3dc3b7\" (UID: \"86432747-dd49-4a23-b241-df73fa3dc3b7\") " Jan 31 07:50:59 crc kubenswrapper[4826]: I0131 07:50:59.883464 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86432747-dd49-4a23-b241-df73fa3dc3b7-utilities\") pod \"86432747-dd49-4a23-b241-df73fa3dc3b7\" (UID: \"86432747-dd49-4a23-b241-df73fa3dc3b7\") " Jan 31 07:50:59 crc kubenswrapper[4826]: I0131 07:50:59.884637 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86432747-dd49-4a23-b241-df73fa3dc3b7-utilities" (OuterVolumeSpecName: "utilities") pod "86432747-dd49-4a23-b241-df73fa3dc3b7" (UID: 
"86432747-dd49-4a23-b241-df73fa3dc3b7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:50:59 crc kubenswrapper[4826]: I0131 07:50:59.885836 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86432747-dd49-4a23-b241-df73fa3dc3b7-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 07:50:59 crc kubenswrapper[4826]: I0131 07:50:59.886616 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86432747-dd49-4a23-b241-df73fa3dc3b7-kube-api-access-dpqfs" (OuterVolumeSpecName: "kube-api-access-dpqfs") pod "86432747-dd49-4a23-b241-df73fa3dc3b7" (UID: "86432747-dd49-4a23-b241-df73fa3dc3b7"). InnerVolumeSpecName "kube-api-access-dpqfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:50:59 crc kubenswrapper[4826]: I0131 07:50:59.889774 4826 scope.go:117] "RemoveContainer" containerID="526879807c88e311aa1928f7ea125d9b654a15e3d34117fcbff1eef9da409a01" Jan 31 07:50:59 crc kubenswrapper[4826]: E0131 07:50:59.890156 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"526879807c88e311aa1928f7ea125d9b654a15e3d34117fcbff1eef9da409a01\": container with ID starting with 526879807c88e311aa1928f7ea125d9b654a15e3d34117fcbff1eef9da409a01 not found: ID does not exist" containerID="526879807c88e311aa1928f7ea125d9b654a15e3d34117fcbff1eef9da409a01" Jan 31 07:50:59 crc kubenswrapper[4826]: I0131 07:50:59.890183 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"526879807c88e311aa1928f7ea125d9b654a15e3d34117fcbff1eef9da409a01"} err="failed to get container status \"526879807c88e311aa1928f7ea125d9b654a15e3d34117fcbff1eef9da409a01\": rpc error: code = NotFound desc = could not find container \"526879807c88e311aa1928f7ea125d9b654a15e3d34117fcbff1eef9da409a01\": container with ID starting with 526879807c88e311aa1928f7ea125d9b654a15e3d34117fcbff1eef9da409a01 not found: ID does not exist" Jan 31 07:50:59 crc kubenswrapper[4826]: I0131 07:50:59.890203 4826 scope.go:117] "RemoveContainer" containerID="0a62c2652ad2b91db713650c1939d283a4cbb5a94de323b7ac0dddb3d8f8f876" Jan 31 07:50:59 crc kubenswrapper[4826]: E0131 07:50:59.890397 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a62c2652ad2b91db713650c1939d283a4cbb5a94de323b7ac0dddb3d8f8f876\": container with ID starting with 0a62c2652ad2b91db713650c1939d283a4cbb5a94de323b7ac0dddb3d8f8f876 not found: ID does not exist" containerID="0a62c2652ad2b91db713650c1939d283a4cbb5a94de323b7ac0dddb3d8f8f876" Jan 31 07:50:59 crc kubenswrapper[4826]: I0131 07:50:59.890417 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a62c2652ad2b91db713650c1939d283a4cbb5a94de323b7ac0dddb3d8f8f876"} err="failed to get container status \"0a62c2652ad2b91db713650c1939d283a4cbb5a94de323b7ac0dddb3d8f8f876\": rpc error: code = NotFound desc = could not find container \"0a62c2652ad2b91db713650c1939d283a4cbb5a94de323b7ac0dddb3d8f8f876\": container with ID starting with 0a62c2652ad2b91db713650c1939d283a4cbb5a94de323b7ac0dddb3d8f8f876 not found: ID does not exist" Jan 31 07:50:59 crc kubenswrapper[4826]: I0131 07:50:59.890429 4826 scope.go:117] "RemoveContainer" containerID="2ed9a0aaa4191c59134f887c0490894b4a4998462b70b72414ef9aa0f527312c" Jan 31 07:50:59 crc kubenswrapper[4826]: E0131 
07:50:59.892246 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ed9a0aaa4191c59134f887c0490894b4a4998462b70b72414ef9aa0f527312c\": container with ID starting with 2ed9a0aaa4191c59134f887c0490894b4a4998462b70b72414ef9aa0f527312c not found: ID does not exist" containerID="2ed9a0aaa4191c59134f887c0490894b4a4998462b70b72414ef9aa0f527312c" Jan 31 07:50:59 crc kubenswrapper[4826]: I0131 07:50:59.892280 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ed9a0aaa4191c59134f887c0490894b4a4998462b70b72414ef9aa0f527312c"} err="failed to get container status \"2ed9a0aaa4191c59134f887c0490894b4a4998462b70b72414ef9aa0f527312c\": rpc error: code = NotFound desc = could not find container \"2ed9a0aaa4191c59134f887c0490894b4a4998462b70b72414ef9aa0f527312c\": container with ID starting with 2ed9a0aaa4191c59134f887c0490894b4a4998462b70b72414ef9aa0f527312c not found: ID does not exist" Jan 31 07:50:59 crc kubenswrapper[4826]: I0131 07:50:59.938181 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86432747-dd49-4a23-b241-df73fa3dc3b7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "86432747-dd49-4a23-b241-df73fa3dc3b7" (UID: "86432747-dd49-4a23-b241-df73fa3dc3b7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:50:59 crc kubenswrapper[4826]: I0131 07:50:59.987055 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86432747-dd49-4a23-b241-df73fa3dc3b7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 07:50:59 crc kubenswrapper[4826]: I0131 07:50:59.987094 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpqfs\" (UniqueName: \"kubernetes.io/projected/86432747-dd49-4a23-b241-df73fa3dc3b7-kube-api-access-dpqfs\") on node \"crc\" DevicePath \"\"" Jan 31 07:51:00 crc kubenswrapper[4826]: I0131 07:51:00.067240 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bn475"] Jan 31 07:51:00 crc kubenswrapper[4826]: I0131 07:51:00.073376 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bn475"] Jan 31 07:51:00 crc kubenswrapper[4826]: I0131 07:51:00.821642 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86432747-dd49-4a23-b241-df73fa3dc3b7" path="/var/lib/kubelet/pods/86432747-dd49-4a23-b241-df73fa3dc3b7/volumes" Jan 31 07:51:00 crc kubenswrapper[4826]: I0131 07:51:00.822903 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e25f6c60-5436-48b6-9f75-06182dbf2916" path="/var/lib/kubelet/pods/e25f6c60-5436-48b6-9f75-06182dbf2916/volumes" Jan 31 07:51:01 crc kubenswrapper[4826]: I0131 07:51:01.549674 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-p6rfl" Jan 31 07:51:01 crc kubenswrapper[4826]: I0131 07:51:01.595490 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-p6rfl" Jan 31 07:51:02 crc kubenswrapper[4826]: I0131 07:51:02.238306 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4ngjq"] Jan 31 07:51:02 crc kubenswrapper[4826]: I0131 07:51:02.238582 4826 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-4ngjq" podUID="9a12baae-b5d3-443a-8f2a-60ab3c2c3d73" containerName="registry-server" containerID="cri-o://a20b9d0e74fb8a37c3da3e64463541f91ce5295439cce9183933f51d4cdcad82" gracePeriod=2 Jan 31 07:51:02 crc kubenswrapper[4826]: I0131 07:51:02.594817 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4ngjq" Jan 31 07:51:02 crc kubenswrapper[4826]: I0131 07:51:02.723524 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a12baae-b5d3-443a-8f2a-60ab3c2c3d73-catalog-content\") pod \"9a12baae-b5d3-443a-8f2a-60ab3c2c3d73\" (UID: \"9a12baae-b5d3-443a-8f2a-60ab3c2c3d73\") " Jan 31 07:51:02 crc kubenswrapper[4826]: I0131 07:51:02.723588 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h88p6\" (UniqueName: \"kubernetes.io/projected/9a12baae-b5d3-443a-8f2a-60ab3c2c3d73-kube-api-access-h88p6\") pod \"9a12baae-b5d3-443a-8f2a-60ab3c2c3d73\" (UID: \"9a12baae-b5d3-443a-8f2a-60ab3c2c3d73\") " Jan 31 07:51:02 crc kubenswrapper[4826]: I0131 07:51:02.723658 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a12baae-b5d3-443a-8f2a-60ab3c2c3d73-utilities\") pod \"9a12baae-b5d3-443a-8f2a-60ab3c2c3d73\" (UID: \"9a12baae-b5d3-443a-8f2a-60ab3c2c3d73\") " Jan 31 07:51:02 crc kubenswrapper[4826]: I0131 07:51:02.725565 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a12baae-b5d3-443a-8f2a-60ab3c2c3d73-utilities" (OuterVolumeSpecName: "utilities") pod "9a12baae-b5d3-443a-8f2a-60ab3c2c3d73" (UID: "9a12baae-b5d3-443a-8f2a-60ab3c2c3d73"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:51:02 crc kubenswrapper[4826]: I0131 07:51:02.730492 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a12baae-b5d3-443a-8f2a-60ab3c2c3d73-kube-api-access-h88p6" (OuterVolumeSpecName: "kube-api-access-h88p6") pod "9a12baae-b5d3-443a-8f2a-60ab3c2c3d73" (UID: "9a12baae-b5d3-443a-8f2a-60ab3c2c3d73"). InnerVolumeSpecName "kube-api-access-h88p6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:51:02 crc kubenswrapper[4826]: I0131 07:51:02.762941 4826 generic.go:334] "Generic (PLEG): container finished" podID="9a12baae-b5d3-443a-8f2a-60ab3c2c3d73" containerID="a20b9d0e74fb8a37c3da3e64463541f91ce5295439cce9183933f51d4cdcad82" exitCode=0 Jan 31 07:51:02 crc kubenswrapper[4826]: I0131 07:51:02.763004 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4ngjq" event={"ID":"9a12baae-b5d3-443a-8f2a-60ab3c2c3d73","Type":"ContainerDied","Data":"a20b9d0e74fb8a37c3da3e64463541f91ce5295439cce9183933f51d4cdcad82"} Jan 31 07:51:02 crc kubenswrapper[4826]: I0131 07:51:02.763050 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4ngjq" Jan 31 07:51:02 crc kubenswrapper[4826]: I0131 07:51:02.763067 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4ngjq" event={"ID":"9a12baae-b5d3-443a-8f2a-60ab3c2c3d73","Type":"ContainerDied","Data":"ebc7000abdcbe2068eba4679a7f6ffbbbf0306555d28abc7d9ebc4606f0231e3"} Jan 31 07:51:02 crc kubenswrapper[4826]: I0131 07:51:02.763086 4826 scope.go:117] "RemoveContainer" containerID="a20b9d0e74fb8a37c3da3e64463541f91ce5295439cce9183933f51d4cdcad82" Jan 31 07:51:02 crc kubenswrapper[4826]: I0131 07:51:02.779451 4826 scope.go:117] "RemoveContainer" containerID="9abc4ea93a3d10aa3ff61a0886d2c43299608c2bac321119003a712f791ed32e" Jan 31 07:51:02 crc kubenswrapper[4826]: I0131 07:51:02.799615 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a12baae-b5d3-443a-8f2a-60ab3c2c3d73-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9a12baae-b5d3-443a-8f2a-60ab3c2c3d73" (UID: "9a12baae-b5d3-443a-8f2a-60ab3c2c3d73"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:51:02 crc kubenswrapper[4826]: I0131 07:51:02.803256 4826 scope.go:117] "RemoveContainer" containerID="377266a867018bae900a34098083d58a9bfbac51c11061da9d14a7bac20c86bc" Jan 31 07:51:02 crc kubenswrapper[4826]: I0131 07:51:02.825313 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a12baae-b5d3-443a-8f2a-60ab3c2c3d73-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 07:51:02 crc kubenswrapper[4826]: I0131 07:51:02.825357 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h88p6\" (UniqueName: \"kubernetes.io/projected/9a12baae-b5d3-443a-8f2a-60ab3c2c3d73-kube-api-access-h88p6\") on node \"crc\" DevicePath \"\"" Jan 31 07:51:02 crc kubenswrapper[4826]: I0131 07:51:02.825370 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a12baae-b5d3-443a-8f2a-60ab3c2c3d73-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 07:51:02 crc kubenswrapper[4826]: I0131 07:51:02.828060 4826 scope.go:117] "RemoveContainer" containerID="a20b9d0e74fb8a37c3da3e64463541f91ce5295439cce9183933f51d4cdcad82" Jan 31 07:51:02 crc kubenswrapper[4826]: E0131 07:51:02.828617 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a20b9d0e74fb8a37c3da3e64463541f91ce5295439cce9183933f51d4cdcad82\": container with ID starting with a20b9d0e74fb8a37c3da3e64463541f91ce5295439cce9183933f51d4cdcad82 not found: ID does not exist" containerID="a20b9d0e74fb8a37c3da3e64463541f91ce5295439cce9183933f51d4cdcad82" Jan 31 07:51:02 crc kubenswrapper[4826]: I0131 07:51:02.828657 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a20b9d0e74fb8a37c3da3e64463541f91ce5295439cce9183933f51d4cdcad82"} err="failed to get container status \"a20b9d0e74fb8a37c3da3e64463541f91ce5295439cce9183933f51d4cdcad82\": rpc error: code = NotFound desc = could not find container \"a20b9d0e74fb8a37c3da3e64463541f91ce5295439cce9183933f51d4cdcad82\": container with ID starting with a20b9d0e74fb8a37c3da3e64463541f91ce5295439cce9183933f51d4cdcad82 not found: ID does not exist" Jan 31 07:51:02 crc kubenswrapper[4826]: I0131 07:51:02.828687 4826 scope.go:117] "RemoveContainer" 
containerID="9abc4ea93a3d10aa3ff61a0886d2c43299608c2bac321119003a712f791ed32e" Jan 31 07:51:02 crc kubenswrapper[4826]: E0131 07:51:02.828956 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9abc4ea93a3d10aa3ff61a0886d2c43299608c2bac321119003a712f791ed32e\": container with ID starting with 9abc4ea93a3d10aa3ff61a0886d2c43299608c2bac321119003a712f791ed32e not found: ID does not exist" containerID="9abc4ea93a3d10aa3ff61a0886d2c43299608c2bac321119003a712f791ed32e" Jan 31 07:51:02 crc kubenswrapper[4826]: I0131 07:51:02.829076 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9abc4ea93a3d10aa3ff61a0886d2c43299608c2bac321119003a712f791ed32e"} err="failed to get container status \"9abc4ea93a3d10aa3ff61a0886d2c43299608c2bac321119003a712f791ed32e\": rpc error: code = NotFound desc = could not find container \"9abc4ea93a3d10aa3ff61a0886d2c43299608c2bac321119003a712f791ed32e\": container with ID starting with 9abc4ea93a3d10aa3ff61a0886d2c43299608c2bac321119003a712f791ed32e not found: ID does not exist" Jan 31 07:51:02 crc kubenswrapper[4826]: I0131 07:51:02.829100 4826 scope.go:117] "RemoveContainer" containerID="377266a867018bae900a34098083d58a9bfbac51c11061da9d14a7bac20c86bc" Jan 31 07:51:02 crc kubenswrapper[4826]: E0131 07:51:02.829338 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"377266a867018bae900a34098083d58a9bfbac51c11061da9d14a7bac20c86bc\": container with ID starting with 377266a867018bae900a34098083d58a9bfbac51c11061da9d14a7bac20c86bc not found: ID does not exist" containerID="377266a867018bae900a34098083d58a9bfbac51c11061da9d14a7bac20c86bc" Jan 31 07:51:02 crc kubenswrapper[4826]: I0131 07:51:02.829366 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"377266a867018bae900a34098083d58a9bfbac51c11061da9d14a7bac20c86bc"} err="failed to get container status \"377266a867018bae900a34098083d58a9bfbac51c11061da9d14a7bac20c86bc\": rpc error: code = NotFound desc = could not find container \"377266a867018bae900a34098083d58a9bfbac51c11061da9d14a7bac20c86bc\": container with ID starting with 377266a867018bae900a34098083d58a9bfbac51c11061da9d14a7bac20c86bc not found: ID does not exist" Jan 31 07:51:03 crc kubenswrapper[4826]: I0131 07:51:03.083544 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4ngjq"] Jan 31 07:51:03 crc kubenswrapper[4826]: I0131 07:51:03.087921 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4ngjq"] Jan 31 07:51:04 crc kubenswrapper[4826]: I0131 07:51:04.637145 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p6rfl"] Jan 31 07:51:04 crc kubenswrapper[4826]: I0131 07:51:04.637353 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-p6rfl" podUID="d735d132-571e-4619-9547-ede5670f6859" containerName="registry-server" containerID="cri-o://4adae8783407fb33b5449fb3827bc51fa2679334ca819ae1e3f87a3c9ebeb9c1" gracePeriod=2 Jan 31 07:51:04 crc kubenswrapper[4826]: I0131 07:51:04.779857 4826 generic.go:334] "Generic (PLEG): container finished" podID="d735d132-571e-4619-9547-ede5670f6859" containerID="4adae8783407fb33b5449fb3827bc51fa2679334ca819ae1e3f87a3c9ebeb9c1" exitCode=0 Jan 31 07:51:04 crc kubenswrapper[4826]: 
I0131 07:51:04.779904 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p6rfl" event={"ID":"d735d132-571e-4619-9547-ede5670f6859","Type":"ContainerDied","Data":"4adae8783407fb33b5449fb3827bc51fa2679334ca819ae1e3f87a3c9ebeb9c1"} Jan 31 07:51:04 crc kubenswrapper[4826]: I0131 07:51:04.817091 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a12baae-b5d3-443a-8f2a-60ab3c2c3d73" path="/var/lib/kubelet/pods/9a12baae-b5d3-443a-8f2a-60ab3c2c3d73/volumes" Jan 31 07:51:05 crc kubenswrapper[4826]: I0131 07:51:05.006186 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p6rfl" Jan 31 07:51:05 crc kubenswrapper[4826]: I0131 07:51:05.161348 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d735d132-571e-4619-9547-ede5670f6859-catalog-content\") pod \"d735d132-571e-4619-9547-ede5670f6859\" (UID: \"d735d132-571e-4619-9547-ede5670f6859\") " Jan 31 07:51:05 crc kubenswrapper[4826]: I0131 07:51:05.165185 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d735d132-571e-4619-9547-ede5670f6859-utilities\") pod \"d735d132-571e-4619-9547-ede5670f6859\" (UID: \"d735d132-571e-4619-9547-ede5670f6859\") " Jan 31 07:51:05 crc kubenswrapper[4826]: I0131 07:51:05.165244 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v24zs\" (UniqueName: \"kubernetes.io/projected/d735d132-571e-4619-9547-ede5670f6859-kube-api-access-v24zs\") pod \"d735d132-571e-4619-9547-ede5670f6859\" (UID: \"d735d132-571e-4619-9547-ede5670f6859\") " Jan 31 07:51:05 crc kubenswrapper[4826]: I0131 07:51:05.166331 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d735d132-571e-4619-9547-ede5670f6859-utilities" (OuterVolumeSpecName: "utilities") pod "d735d132-571e-4619-9547-ede5670f6859" (UID: "d735d132-571e-4619-9547-ede5670f6859"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:51:05 crc kubenswrapper[4826]: I0131 07:51:05.174825 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d735d132-571e-4619-9547-ede5670f6859-kube-api-access-v24zs" (OuterVolumeSpecName: "kube-api-access-v24zs") pod "d735d132-571e-4619-9547-ede5670f6859" (UID: "d735d132-571e-4619-9547-ede5670f6859"). InnerVolumeSpecName "kube-api-access-v24zs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:51:05 crc kubenswrapper[4826]: I0131 07:51:05.267765 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d735d132-571e-4619-9547-ede5670f6859-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 07:51:05 crc kubenswrapper[4826]: I0131 07:51:05.267802 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v24zs\" (UniqueName: \"kubernetes.io/projected/d735d132-571e-4619-9547-ede5670f6859-kube-api-access-v24zs\") on node \"crc\" DevicePath \"\"" Jan 31 07:51:05 crc kubenswrapper[4826]: I0131 07:51:05.310332 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d735d132-571e-4619-9547-ede5670f6859-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d735d132-571e-4619-9547-ede5670f6859" (UID: "d735d132-571e-4619-9547-ede5670f6859"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:51:05 crc kubenswrapper[4826]: I0131 07:51:05.385115 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d735d132-571e-4619-9547-ede5670f6859-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 07:51:05 crc kubenswrapper[4826]: I0131 07:51:05.789527 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p6rfl" event={"ID":"d735d132-571e-4619-9547-ede5670f6859","Type":"ContainerDied","Data":"8cff5b266d14384e25e2bffb464c4d43c4975daba9ed0a8f778016541b924d8d"} Jan 31 07:51:05 crc kubenswrapper[4826]: I0131 07:51:05.789611 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p6rfl" Jan 31 07:51:05 crc kubenswrapper[4826]: I0131 07:51:05.789619 4826 scope.go:117] "RemoveContainer" containerID="4adae8783407fb33b5449fb3827bc51fa2679334ca819ae1e3f87a3c9ebeb9c1" Jan 31 07:51:05 crc kubenswrapper[4826]: I0131 07:51:05.808525 4826 scope.go:117] "RemoveContainer" containerID="0c38a165619885fa9d9f5af841266c4cf074bd37ac497e8cf2b7398c94f0928c" Jan 31 07:51:05 crc kubenswrapper[4826]: I0131 07:51:05.854145 4826 scope.go:117] "RemoveContainer" containerID="2ef0a26ce861d6f85f6dfa57fc0439af5a1a03288de7ce27c571fc23c397a2fc" Jan 31 07:51:05 crc kubenswrapper[4826]: I0131 07:51:05.867924 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p6rfl"] Jan 31 07:51:05 crc kubenswrapper[4826]: I0131 07:51:05.877831 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-p6rfl"] Jan 31 07:51:06 crc kubenswrapper[4826]: I0131 07:51:06.828305 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d735d132-571e-4619-9547-ede5670f6859" path="/var/lib/kubelet/pods/d735d132-571e-4619-9547-ede5670f6859/volumes" Jan 31 07:51:16 crc kubenswrapper[4826]: I0131 07:51:16.276267 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-7ln7p"] Jan 31 07:51:16 crc kubenswrapper[4826]: E0131 07:51:16.277160 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86432747-dd49-4a23-b241-df73fa3dc3b7" containerName="extract-content" Jan 31 07:51:16 crc kubenswrapper[4826]: I0131 07:51:16.277181 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="86432747-dd49-4a23-b241-df73fa3dc3b7" containerName="extract-content" Jan 31 07:51:16 crc kubenswrapper[4826]: 
E0131 07:51:16.277212 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a12baae-b5d3-443a-8f2a-60ab3c2c3d73" containerName="extract-utilities" Jan 31 07:51:16 crc kubenswrapper[4826]: I0131 07:51:16.277224 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a12baae-b5d3-443a-8f2a-60ab3c2c3d73" containerName="extract-utilities" Jan 31 07:51:16 crc kubenswrapper[4826]: E0131 07:51:16.277240 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d735d132-571e-4619-9547-ede5670f6859" containerName="registry-server" Jan 31 07:51:16 crc kubenswrapper[4826]: I0131 07:51:16.277248 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="d735d132-571e-4619-9547-ede5670f6859" containerName="registry-server" Jan 31 07:51:16 crc kubenswrapper[4826]: E0131 07:51:16.277259 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a12baae-b5d3-443a-8f2a-60ab3c2c3d73" containerName="registry-server" Jan 31 07:51:16 crc kubenswrapper[4826]: I0131 07:51:16.277265 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a12baae-b5d3-443a-8f2a-60ab3c2c3d73" containerName="registry-server" Jan 31 07:51:16 crc kubenswrapper[4826]: E0131 07:51:16.277277 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d735d132-571e-4619-9547-ede5670f6859" containerName="extract-content" Jan 31 07:51:16 crc kubenswrapper[4826]: I0131 07:51:16.277283 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="d735d132-571e-4619-9547-ede5670f6859" containerName="extract-content" Jan 31 07:51:16 crc kubenswrapper[4826]: E0131 07:51:16.277293 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a12baae-b5d3-443a-8f2a-60ab3c2c3d73" containerName="extract-content" Jan 31 07:51:16 crc kubenswrapper[4826]: I0131 07:51:16.277299 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a12baae-b5d3-443a-8f2a-60ab3c2c3d73" containerName="extract-content" Jan 31 07:51:16 crc kubenswrapper[4826]: E0131 07:51:16.277314 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e25f6c60-5436-48b6-9f75-06182dbf2916" containerName="extract-content" Jan 31 07:51:16 crc kubenswrapper[4826]: I0131 07:51:16.277325 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="e25f6c60-5436-48b6-9f75-06182dbf2916" containerName="extract-content" Jan 31 07:51:16 crc kubenswrapper[4826]: E0131 07:51:16.277338 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86432747-dd49-4a23-b241-df73fa3dc3b7" containerName="extract-utilities" Jan 31 07:51:16 crc kubenswrapper[4826]: I0131 07:51:16.277346 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="86432747-dd49-4a23-b241-df73fa3dc3b7" containerName="extract-utilities" Jan 31 07:51:16 crc kubenswrapper[4826]: E0131 07:51:16.277360 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e25f6c60-5436-48b6-9f75-06182dbf2916" containerName="extract-utilities" Jan 31 07:51:16 crc kubenswrapper[4826]: I0131 07:51:16.277368 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="e25f6c60-5436-48b6-9f75-06182dbf2916" containerName="extract-utilities" Jan 31 07:51:16 crc kubenswrapper[4826]: E0131 07:51:16.277376 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d735d132-571e-4619-9547-ede5670f6859" containerName="extract-utilities" Jan 31 07:51:16 crc kubenswrapper[4826]: I0131 07:51:16.277384 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="d735d132-571e-4619-9547-ede5670f6859" containerName="extract-utilities" Jan 31 
07:51:16 crc kubenswrapper[4826]: E0131 07:51:16.277397 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86432747-dd49-4a23-b241-df73fa3dc3b7" containerName="registry-server" Jan 31 07:51:16 crc kubenswrapper[4826]: I0131 07:51:16.277403 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="86432747-dd49-4a23-b241-df73fa3dc3b7" containerName="registry-server" Jan 31 07:51:16 crc kubenswrapper[4826]: E0131 07:51:16.277413 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e25f6c60-5436-48b6-9f75-06182dbf2916" containerName="registry-server" Jan 31 07:51:16 crc kubenswrapper[4826]: I0131 07:51:16.277420 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="e25f6c60-5436-48b6-9f75-06182dbf2916" containerName="registry-server" Jan 31 07:51:16 crc kubenswrapper[4826]: I0131 07:51:16.277599 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="d735d132-571e-4619-9547-ede5670f6859" containerName="registry-server" Jan 31 07:51:16 crc kubenswrapper[4826]: I0131 07:51:16.277613 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a12baae-b5d3-443a-8f2a-60ab3c2c3d73" containerName="registry-server" Jan 31 07:51:16 crc kubenswrapper[4826]: I0131 07:51:16.277624 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="86432747-dd49-4a23-b241-df73fa3dc3b7" containerName="registry-server" Jan 31 07:51:16 crc kubenswrapper[4826]: I0131 07:51:16.277634 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="e25f6c60-5436-48b6-9f75-06182dbf2916" containerName="registry-server" Jan 31 07:51:16 crc kubenswrapper[4826]: I0131 07:51:16.278451 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-7ln7p" Jan 31 07:51:16 crc kubenswrapper[4826]: I0131 07:51:16.280545 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-7tlnd" Jan 31 07:51:16 crc kubenswrapper[4826]: I0131 07:51:16.280778 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 31 07:51:16 crc kubenswrapper[4826]: I0131 07:51:16.282847 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 31 07:51:16 crc kubenswrapper[4826]: I0131 07:51:16.283490 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 31 07:51:16 crc kubenswrapper[4826]: I0131 07:51:16.295782 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-7ln7p"] Jan 31 07:51:16 crc kubenswrapper[4826]: I0131 07:51:16.385582 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-kltf5"] Jan 31 07:51:16 crc kubenswrapper[4826]: I0131 07:51:16.386857 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-kltf5" Jan 31 07:51:16 crc kubenswrapper[4826]: I0131 07:51:16.389009 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 31 07:51:16 crc kubenswrapper[4826]: I0131 07:51:16.400712 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-kltf5"] Jan 31 07:51:16 crc kubenswrapper[4826]: I0131 07:51:16.455877 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6200d00e-7b7d-488a-9838-ba4a0d0572a8-config\") pod \"dnsmasq-dns-675f4bcbfc-7ln7p\" (UID: \"6200d00e-7b7d-488a-9838-ba4a0d0572a8\") " pod="openstack/dnsmasq-dns-675f4bcbfc-7ln7p" Jan 31 07:51:16 crc kubenswrapper[4826]: I0131 07:51:16.456119 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6p2h\" (UniqueName: \"kubernetes.io/projected/6200d00e-7b7d-488a-9838-ba4a0d0572a8-kube-api-access-s6p2h\") pod \"dnsmasq-dns-675f4bcbfc-7ln7p\" (UID: \"6200d00e-7b7d-488a-9838-ba4a0d0572a8\") " pod="openstack/dnsmasq-dns-675f4bcbfc-7ln7p" Jan 31 07:51:16 crc kubenswrapper[4826]: I0131 07:51:16.557358 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7068488c-c069-41d9-b48d-a5305147962a-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-kltf5\" (UID: \"7068488c-c069-41d9-b48d-a5305147962a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-kltf5" Jan 31 07:51:16 crc kubenswrapper[4826]: I0131 07:51:16.557514 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlt6x\" (UniqueName: \"kubernetes.io/projected/7068488c-c069-41d9-b48d-a5305147962a-kube-api-access-vlt6x\") pod \"dnsmasq-dns-78dd6ddcc-kltf5\" (UID: \"7068488c-c069-41d9-b48d-a5305147962a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-kltf5" Jan 31 07:51:16 crc kubenswrapper[4826]: I0131 07:51:16.557563 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6200d00e-7b7d-488a-9838-ba4a0d0572a8-config\") pod \"dnsmasq-dns-675f4bcbfc-7ln7p\" (UID: \"6200d00e-7b7d-488a-9838-ba4a0d0572a8\") " pod="openstack/dnsmasq-dns-675f4bcbfc-7ln7p" Jan 31 07:51:16 crc kubenswrapper[4826]: I0131 07:51:16.557595 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6p2h\" (UniqueName: \"kubernetes.io/projected/6200d00e-7b7d-488a-9838-ba4a0d0572a8-kube-api-access-s6p2h\") pod \"dnsmasq-dns-675f4bcbfc-7ln7p\" (UID: \"6200d00e-7b7d-488a-9838-ba4a0d0572a8\") " pod="openstack/dnsmasq-dns-675f4bcbfc-7ln7p" Jan 31 07:51:16 crc kubenswrapper[4826]: I0131 07:51:16.557667 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7068488c-c069-41d9-b48d-a5305147962a-config\") pod \"dnsmasq-dns-78dd6ddcc-kltf5\" (UID: \"7068488c-c069-41d9-b48d-a5305147962a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-kltf5" Jan 31 07:51:16 crc kubenswrapper[4826]: I0131 07:51:16.558616 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6200d00e-7b7d-488a-9838-ba4a0d0572a8-config\") pod \"dnsmasq-dns-675f4bcbfc-7ln7p\" (UID: \"6200d00e-7b7d-488a-9838-ba4a0d0572a8\") " pod="openstack/dnsmasq-dns-675f4bcbfc-7ln7p" Jan 31 
07:51:16 crc kubenswrapper[4826]: I0131 07:51:16.593408 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6p2h\" (UniqueName: \"kubernetes.io/projected/6200d00e-7b7d-488a-9838-ba4a0d0572a8-kube-api-access-s6p2h\") pod \"dnsmasq-dns-675f4bcbfc-7ln7p\" (UID: \"6200d00e-7b7d-488a-9838-ba4a0d0572a8\") " pod="openstack/dnsmasq-dns-675f4bcbfc-7ln7p" Jan 31 07:51:16 crc kubenswrapper[4826]: I0131 07:51:16.596937 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-7ln7p" Jan 31 07:51:16 crc kubenswrapper[4826]: I0131 07:51:16.659268 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlt6x\" (UniqueName: \"kubernetes.io/projected/7068488c-c069-41d9-b48d-a5305147962a-kube-api-access-vlt6x\") pod \"dnsmasq-dns-78dd6ddcc-kltf5\" (UID: \"7068488c-c069-41d9-b48d-a5305147962a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-kltf5" Jan 31 07:51:16 crc kubenswrapper[4826]: I0131 07:51:16.659374 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7068488c-c069-41d9-b48d-a5305147962a-config\") pod \"dnsmasq-dns-78dd6ddcc-kltf5\" (UID: \"7068488c-c069-41d9-b48d-a5305147962a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-kltf5" Jan 31 07:51:16 crc kubenswrapper[4826]: I0131 07:51:16.659418 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7068488c-c069-41d9-b48d-a5305147962a-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-kltf5\" (UID: \"7068488c-c069-41d9-b48d-a5305147962a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-kltf5" Jan 31 07:51:16 crc kubenswrapper[4826]: I0131 07:51:16.660763 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7068488c-c069-41d9-b48d-a5305147962a-config\") pod \"dnsmasq-dns-78dd6ddcc-kltf5\" (UID: \"7068488c-c069-41d9-b48d-a5305147962a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-kltf5" Jan 31 07:51:16 crc kubenswrapper[4826]: I0131 07:51:16.661071 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7068488c-c069-41d9-b48d-a5305147962a-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-kltf5\" (UID: \"7068488c-c069-41d9-b48d-a5305147962a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-kltf5" Jan 31 07:51:16 crc kubenswrapper[4826]: I0131 07:51:16.682244 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlt6x\" (UniqueName: \"kubernetes.io/projected/7068488c-c069-41d9-b48d-a5305147962a-kube-api-access-vlt6x\") pod \"dnsmasq-dns-78dd6ddcc-kltf5\" (UID: \"7068488c-c069-41d9-b48d-a5305147962a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-kltf5" Jan 31 07:51:16 crc kubenswrapper[4826]: I0131 07:51:16.702839 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-kltf5" Jan 31 07:51:17 crc kubenswrapper[4826]: I0131 07:51:17.022625 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-7ln7p"] Jan 31 07:51:17 crc kubenswrapper[4826]: I0131 07:51:17.141621 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-kltf5"] Jan 31 07:51:17 crc kubenswrapper[4826]: W0131 07:51:17.143943 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7068488c_c069_41d9_b48d_a5305147962a.slice/crio-9f152f9ea8a640c50449869d01233262a824c84165907d84a2a6c5ccc1047922 WatchSource:0}: Error finding container 9f152f9ea8a640c50449869d01233262a824c84165907d84a2a6c5ccc1047922: Status 404 returned error can't find the container with id 9f152f9ea8a640c50449869d01233262a824c84165907d84a2a6c5ccc1047922 Jan 31 07:51:17 crc kubenswrapper[4826]: I0131 07:51:17.873421 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-kltf5" event={"ID":"7068488c-c069-41d9-b48d-a5305147962a","Type":"ContainerStarted","Data":"9f152f9ea8a640c50449869d01233262a824c84165907d84a2a6c5ccc1047922"} Jan 31 07:51:17 crc kubenswrapper[4826]: I0131 07:51:17.876773 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-7ln7p" event={"ID":"6200d00e-7b7d-488a-9838-ba4a0d0572a8","Type":"ContainerStarted","Data":"82534316e955eab97032fb2f9869cd6010e057440cb5b8fcb1435febf60faaaf"} Jan 31 07:51:19 crc kubenswrapper[4826]: I0131 07:51:19.072094 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-7ln7p"] Jan 31 07:51:19 crc kubenswrapper[4826]: I0131 07:51:19.097580 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-nwcdv"] Jan 31 07:51:19 crc kubenswrapper[4826]: I0131 07:51:19.100895 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-nwcdv" Jan 31 07:51:19 crc kubenswrapper[4826]: I0131 07:51:19.103472 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mvvd\" (UniqueName: \"kubernetes.io/projected/41df3d47-ec36-4229-a1d7-2e9259527105-kube-api-access-8mvvd\") pod \"dnsmasq-dns-666b6646f7-nwcdv\" (UID: \"41df3d47-ec36-4229-a1d7-2e9259527105\") " pod="openstack/dnsmasq-dns-666b6646f7-nwcdv" Jan 31 07:51:19 crc kubenswrapper[4826]: I0131 07:51:19.103546 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41df3d47-ec36-4229-a1d7-2e9259527105-config\") pod \"dnsmasq-dns-666b6646f7-nwcdv\" (UID: \"41df3d47-ec36-4229-a1d7-2e9259527105\") " pod="openstack/dnsmasq-dns-666b6646f7-nwcdv" Jan 31 07:51:19 crc kubenswrapper[4826]: I0131 07:51:19.103590 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41df3d47-ec36-4229-a1d7-2e9259527105-dns-svc\") pod \"dnsmasq-dns-666b6646f7-nwcdv\" (UID: \"41df3d47-ec36-4229-a1d7-2e9259527105\") " pod="openstack/dnsmasq-dns-666b6646f7-nwcdv" Jan 31 07:51:19 crc kubenswrapper[4826]: I0131 07:51:19.121565 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-nwcdv"] Jan 31 07:51:19 crc kubenswrapper[4826]: I0131 07:51:19.204561 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mvvd\" (UniqueName: \"kubernetes.io/projected/41df3d47-ec36-4229-a1d7-2e9259527105-kube-api-access-8mvvd\") pod \"dnsmasq-dns-666b6646f7-nwcdv\" (UID: \"41df3d47-ec36-4229-a1d7-2e9259527105\") " pod="openstack/dnsmasq-dns-666b6646f7-nwcdv" Jan 31 07:51:19 crc kubenswrapper[4826]: I0131 07:51:19.204884 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41df3d47-ec36-4229-a1d7-2e9259527105-config\") pod \"dnsmasq-dns-666b6646f7-nwcdv\" (UID: \"41df3d47-ec36-4229-a1d7-2e9259527105\") " pod="openstack/dnsmasq-dns-666b6646f7-nwcdv" Jan 31 07:51:19 crc kubenswrapper[4826]: I0131 07:51:19.204925 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41df3d47-ec36-4229-a1d7-2e9259527105-dns-svc\") pod \"dnsmasq-dns-666b6646f7-nwcdv\" (UID: \"41df3d47-ec36-4229-a1d7-2e9259527105\") " pod="openstack/dnsmasq-dns-666b6646f7-nwcdv" Jan 31 07:51:19 crc kubenswrapper[4826]: I0131 07:51:19.205763 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41df3d47-ec36-4229-a1d7-2e9259527105-dns-svc\") pod \"dnsmasq-dns-666b6646f7-nwcdv\" (UID: \"41df3d47-ec36-4229-a1d7-2e9259527105\") " pod="openstack/dnsmasq-dns-666b6646f7-nwcdv" Jan 31 07:51:19 crc kubenswrapper[4826]: I0131 07:51:19.206470 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41df3d47-ec36-4229-a1d7-2e9259527105-config\") pod \"dnsmasq-dns-666b6646f7-nwcdv\" (UID: \"41df3d47-ec36-4229-a1d7-2e9259527105\") " pod="openstack/dnsmasq-dns-666b6646f7-nwcdv" Jan 31 07:51:19 crc kubenswrapper[4826]: I0131 07:51:19.236760 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mvvd\" (UniqueName: 
\"kubernetes.io/projected/41df3d47-ec36-4229-a1d7-2e9259527105-kube-api-access-8mvvd\") pod \"dnsmasq-dns-666b6646f7-nwcdv\" (UID: \"41df3d47-ec36-4229-a1d7-2e9259527105\") " pod="openstack/dnsmasq-dns-666b6646f7-nwcdv" Jan 31 07:51:19 crc kubenswrapper[4826]: I0131 07:51:19.387379 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-kltf5"] Jan 31 07:51:19 crc kubenswrapper[4826]: I0131 07:51:19.405409 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-b86lm"] Jan 31 07:51:19 crc kubenswrapper[4826]: I0131 07:51:19.408654 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-b86lm" Jan 31 07:51:19 crc kubenswrapper[4826]: I0131 07:51:19.431491 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-b86lm"] Jan 31 07:51:19 crc kubenswrapper[4826]: I0131 07:51:19.432693 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-nwcdv" Jan 31 07:51:19 crc kubenswrapper[4826]: I0131 07:51:19.508861 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/273b67d0-ba01-4033-93b4-2ca614b573d5-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-b86lm\" (UID: \"273b67d0-ba01-4033-93b4-2ca614b573d5\") " pod="openstack/dnsmasq-dns-57d769cc4f-b86lm" Jan 31 07:51:19 crc kubenswrapper[4826]: I0131 07:51:19.509011 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/273b67d0-ba01-4033-93b4-2ca614b573d5-config\") pod \"dnsmasq-dns-57d769cc4f-b86lm\" (UID: \"273b67d0-ba01-4033-93b4-2ca614b573d5\") " pod="openstack/dnsmasq-dns-57d769cc4f-b86lm" Jan 31 07:51:19 crc kubenswrapper[4826]: I0131 07:51:19.509051 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx665\" (UniqueName: \"kubernetes.io/projected/273b67d0-ba01-4033-93b4-2ca614b573d5-kube-api-access-hx665\") pod \"dnsmasq-dns-57d769cc4f-b86lm\" (UID: \"273b67d0-ba01-4033-93b4-2ca614b573d5\") " pod="openstack/dnsmasq-dns-57d769cc4f-b86lm" Jan 31 07:51:19 crc kubenswrapper[4826]: I0131 07:51:19.609861 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/273b67d0-ba01-4033-93b4-2ca614b573d5-config\") pod \"dnsmasq-dns-57d769cc4f-b86lm\" (UID: \"273b67d0-ba01-4033-93b4-2ca614b573d5\") " pod="openstack/dnsmasq-dns-57d769cc4f-b86lm" Jan 31 07:51:19 crc kubenswrapper[4826]: I0131 07:51:19.609938 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hx665\" (UniqueName: \"kubernetes.io/projected/273b67d0-ba01-4033-93b4-2ca614b573d5-kube-api-access-hx665\") pod \"dnsmasq-dns-57d769cc4f-b86lm\" (UID: \"273b67d0-ba01-4033-93b4-2ca614b573d5\") " pod="openstack/dnsmasq-dns-57d769cc4f-b86lm" Jan 31 07:51:19 crc kubenswrapper[4826]: I0131 07:51:19.610031 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/273b67d0-ba01-4033-93b4-2ca614b573d5-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-b86lm\" (UID: \"273b67d0-ba01-4033-93b4-2ca614b573d5\") " pod="openstack/dnsmasq-dns-57d769cc4f-b86lm" Jan 31 07:51:19 crc kubenswrapper[4826]: I0131 07:51:19.611077 4826 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/273b67d0-ba01-4033-93b4-2ca614b573d5-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-b86lm\" (UID: \"273b67d0-ba01-4033-93b4-2ca614b573d5\") " pod="openstack/dnsmasq-dns-57d769cc4f-b86lm" Jan 31 07:51:19 crc kubenswrapper[4826]: I0131 07:51:19.611310 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/273b67d0-ba01-4033-93b4-2ca614b573d5-config\") pod \"dnsmasq-dns-57d769cc4f-b86lm\" (UID: \"273b67d0-ba01-4033-93b4-2ca614b573d5\") " pod="openstack/dnsmasq-dns-57d769cc4f-b86lm" Jan 31 07:51:19 crc kubenswrapper[4826]: I0131 07:51:19.650839 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hx665\" (UniqueName: \"kubernetes.io/projected/273b67d0-ba01-4033-93b4-2ca614b573d5-kube-api-access-hx665\") pod \"dnsmasq-dns-57d769cc4f-b86lm\" (UID: \"273b67d0-ba01-4033-93b4-2ca614b573d5\") " pod="openstack/dnsmasq-dns-57d769cc4f-b86lm" Jan 31 07:51:19 crc kubenswrapper[4826]: I0131 07:51:19.745403 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-b86lm" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.150194 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-nwcdv"] Jan 31 07:51:20 crc kubenswrapper[4826]: W0131 07:51:20.157916 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41df3d47_ec36_4229_a1d7_2e9259527105.slice/crio-6cb303bb59d721581e2c2c6340c192e0fb9366d7d85873bf1644ee6e2d47e3f4 WatchSource:0}: Error finding container 6cb303bb59d721581e2c2c6340c192e0fb9366d7d85873bf1644ee6e2d47e3f4: Status 404 returned error can't find the container with id 6cb303bb59d721581e2c2c6340c192e0fb9366d7d85873bf1644ee6e2d47e3f4 Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.266023 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.268640 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.272291 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.272889 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.273134 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.273296 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.273676 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-x5w4g" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.273863 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.276433 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.277046 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.330420 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-b86lm"] Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.451327 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgr64\" (UniqueName: \"kubernetes.io/projected/548eef53-f0eb-46fd-a66d-12825c7c8f67-kube-api-access-tgr64\") pod \"rabbitmq-server-0\" (UID: \"548eef53-f0eb-46fd-a66d-12825c7c8f67\") " pod="openstack/rabbitmq-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.451383 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/548eef53-f0eb-46fd-a66d-12825c7c8f67-pod-info\") pod \"rabbitmq-server-0\" (UID: \"548eef53-f0eb-46fd-a66d-12825c7c8f67\") " pod="openstack/rabbitmq-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.451796 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/548eef53-f0eb-46fd-a66d-12825c7c8f67-server-conf\") pod \"rabbitmq-server-0\" (UID: \"548eef53-f0eb-46fd-a66d-12825c7c8f67\") " pod="openstack/rabbitmq-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.451847 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/548eef53-f0eb-46fd-a66d-12825c7c8f67-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"548eef53-f0eb-46fd-a66d-12825c7c8f67\") " pod="openstack/rabbitmq-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.451868 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"548eef53-f0eb-46fd-a66d-12825c7c8f67\") " pod="openstack/rabbitmq-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.451953 4826 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/548eef53-f0eb-46fd-a66d-12825c7c8f67-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"548eef53-f0eb-46fd-a66d-12825c7c8f67\") " pod="openstack/rabbitmq-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.452008 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/548eef53-f0eb-46fd-a66d-12825c7c8f67-config-data\") pod \"rabbitmq-server-0\" (UID: \"548eef53-f0eb-46fd-a66d-12825c7c8f67\") " pod="openstack/rabbitmq-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.452072 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/548eef53-f0eb-46fd-a66d-12825c7c8f67-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"548eef53-f0eb-46fd-a66d-12825c7c8f67\") " pod="openstack/rabbitmq-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.452147 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/548eef53-f0eb-46fd-a66d-12825c7c8f67-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"548eef53-f0eb-46fd-a66d-12825c7c8f67\") " pod="openstack/rabbitmq-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.452174 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/548eef53-f0eb-46fd-a66d-12825c7c8f67-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"548eef53-f0eb-46fd-a66d-12825c7c8f67\") " pod="openstack/rabbitmq-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.452193 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/548eef53-f0eb-46fd-a66d-12825c7c8f67-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"548eef53-f0eb-46fd-a66d-12825c7c8f67\") " pod="openstack/rabbitmq-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.554186 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/548eef53-f0eb-46fd-a66d-12825c7c8f67-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"548eef53-f0eb-46fd-a66d-12825c7c8f67\") " pod="openstack/rabbitmq-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.555427 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/548eef53-f0eb-46fd-a66d-12825c7c8f67-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"548eef53-f0eb-46fd-a66d-12825c7c8f67\") " pod="openstack/rabbitmq-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.555539 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/548eef53-f0eb-46fd-a66d-12825c7c8f67-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"548eef53-f0eb-46fd-a66d-12825c7c8f67\") " pod="openstack/rabbitmq-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.555619 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/548eef53-f0eb-46fd-a66d-12825c7c8f67-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"548eef53-f0eb-46fd-a66d-12825c7c8f67\") " pod="openstack/rabbitmq-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.555671 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/548eef53-f0eb-46fd-a66d-12825c7c8f67-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"548eef53-f0eb-46fd-a66d-12825c7c8f67\") " pod="openstack/rabbitmq-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.555712 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgr64\" (UniqueName: \"kubernetes.io/projected/548eef53-f0eb-46fd-a66d-12825c7c8f67-kube-api-access-tgr64\") pod \"rabbitmq-server-0\" (UID: \"548eef53-f0eb-46fd-a66d-12825c7c8f67\") " pod="openstack/rabbitmq-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.555782 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/548eef53-f0eb-46fd-a66d-12825c7c8f67-pod-info\") pod \"rabbitmq-server-0\" (UID: \"548eef53-f0eb-46fd-a66d-12825c7c8f67\") " pod="openstack/rabbitmq-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.555848 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/548eef53-f0eb-46fd-a66d-12825c7c8f67-server-conf\") pod \"rabbitmq-server-0\" (UID: \"548eef53-f0eb-46fd-a66d-12825c7c8f67\") " pod="openstack/rabbitmq-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.555880 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/548eef53-f0eb-46fd-a66d-12825c7c8f67-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"548eef53-f0eb-46fd-a66d-12825c7c8f67\") " pod="openstack/rabbitmq-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.555945 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"548eef53-f0eb-46fd-a66d-12825c7c8f67\") " pod="openstack/rabbitmq-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.556020 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/548eef53-f0eb-46fd-a66d-12825c7c8f67-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"548eef53-f0eb-46fd-a66d-12825c7c8f67\") " pod="openstack/rabbitmq-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.556051 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/548eef53-f0eb-46fd-a66d-12825c7c8f67-config-data\") pod \"rabbitmq-server-0\" (UID: \"548eef53-f0eb-46fd-a66d-12825c7c8f67\") " pod="openstack/rabbitmq-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.556871 4826 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"548eef53-f0eb-46fd-a66d-12825c7c8f67\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 
07:51:20.557088 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/548eef53-f0eb-46fd-a66d-12825c7c8f67-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"548eef53-f0eb-46fd-a66d-12825c7c8f67\") " pod="openstack/rabbitmq-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.557465 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/548eef53-f0eb-46fd-a66d-12825c7c8f67-config-data\") pod \"rabbitmq-server-0\" (UID: \"548eef53-f0eb-46fd-a66d-12825c7c8f67\") " pod="openstack/rabbitmq-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.557694 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/548eef53-f0eb-46fd-a66d-12825c7c8f67-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"548eef53-f0eb-46fd-a66d-12825c7c8f67\") " pod="openstack/rabbitmq-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.562116 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/548eef53-f0eb-46fd-a66d-12825c7c8f67-server-conf\") pod \"rabbitmq-server-0\" (UID: \"548eef53-f0eb-46fd-a66d-12825c7c8f67\") " pod="openstack/rabbitmq-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.563861 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/548eef53-f0eb-46fd-a66d-12825c7c8f67-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"548eef53-f0eb-46fd-a66d-12825c7c8f67\") " pod="openstack/rabbitmq-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.577117 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgr64\" (UniqueName: \"kubernetes.io/projected/548eef53-f0eb-46fd-a66d-12825c7c8f67-kube-api-access-tgr64\") pod \"rabbitmq-server-0\" (UID: \"548eef53-f0eb-46fd-a66d-12825c7c8f67\") " pod="openstack/rabbitmq-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.584202 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/548eef53-f0eb-46fd-a66d-12825c7c8f67-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"548eef53-f0eb-46fd-a66d-12825c7c8f67\") " pod="openstack/rabbitmq-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.585354 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/548eef53-f0eb-46fd-a66d-12825c7c8f67-pod-info\") pod \"rabbitmq-server-0\" (UID: \"548eef53-f0eb-46fd-a66d-12825c7c8f67\") " pod="openstack/rabbitmq-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.585984 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/548eef53-f0eb-46fd-a66d-12825c7c8f67-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"548eef53-f0eb-46fd-a66d-12825c7c8f67\") " pod="openstack/rabbitmq-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.608898 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"548eef53-f0eb-46fd-a66d-12825c7c8f67\") " pod="openstack/rabbitmq-server-0" Jan 31 07:51:20 
crc kubenswrapper[4826]: I0131 07:51:20.616864 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.658889 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.669207 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.669509 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.669577 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.669654 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-4xrbw" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.670589 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.670719 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.670842 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.686338 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.767478 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bf30cab9-089e-40db-ab76-5416de684a26-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf30cab9-089e-40db-ab76-5416de684a26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.767520 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bf30cab9-089e-40db-ab76-5416de684a26-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf30cab9-089e-40db-ab76-5416de684a26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.767556 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf30cab9-089e-40db-ab76-5416de684a26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.767598 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prdqk\" (UniqueName: \"kubernetes.io/projected/bf30cab9-089e-40db-ab76-5416de684a26-kube-api-access-prdqk\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf30cab9-089e-40db-ab76-5416de684a26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.767616 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bf30cab9-089e-40db-ab76-5416de684a26-plugins-conf\") 
pod \"rabbitmq-cell1-server-0\" (UID: \"bf30cab9-089e-40db-ab76-5416de684a26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.768528 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bf30cab9-089e-40db-ab76-5416de684a26-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf30cab9-089e-40db-ab76-5416de684a26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.768578 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bf30cab9-089e-40db-ab76-5416de684a26-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf30cab9-089e-40db-ab76-5416de684a26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.768629 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bf30cab9-089e-40db-ab76-5416de684a26-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf30cab9-089e-40db-ab76-5416de684a26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.768729 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bf30cab9-089e-40db-ab76-5416de684a26-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf30cab9-089e-40db-ab76-5416de684a26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.768958 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bf30cab9-089e-40db-ab76-5416de684a26-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf30cab9-089e-40db-ab76-5416de684a26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.769054 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bf30cab9-089e-40db-ab76-5416de684a26-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf30cab9-089e-40db-ab76-5416de684a26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.871027 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bf30cab9-089e-40db-ab76-5416de684a26-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf30cab9-089e-40db-ab76-5416de684a26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.870486 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bf30cab9-089e-40db-ab76-5416de684a26-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf30cab9-089e-40db-ab76-5416de684a26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.871158 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bf30cab9-089e-40db-ab76-5416de684a26-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"bf30cab9-089e-40db-ab76-5416de684a26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.871213 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bf30cab9-089e-40db-ab76-5416de684a26-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf30cab9-089e-40db-ab76-5416de684a26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.872594 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bf30cab9-089e-40db-ab76-5416de684a26-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf30cab9-089e-40db-ab76-5416de684a26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.872642 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bf30cab9-089e-40db-ab76-5416de684a26-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf30cab9-089e-40db-ab76-5416de684a26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.872730 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf30cab9-089e-40db-ab76-5416de684a26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.872829 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prdqk\" (UniqueName: \"kubernetes.io/projected/bf30cab9-089e-40db-ab76-5416de684a26-kube-api-access-prdqk\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf30cab9-089e-40db-ab76-5416de684a26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.872857 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bf30cab9-089e-40db-ab76-5416de684a26-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf30cab9-089e-40db-ab76-5416de684a26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.872959 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bf30cab9-089e-40db-ab76-5416de684a26-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf30cab9-089e-40db-ab76-5416de684a26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.873057 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bf30cab9-089e-40db-ab76-5416de684a26-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf30cab9-089e-40db-ab76-5416de684a26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.873172 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bf30cab9-089e-40db-ab76-5416de684a26-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf30cab9-089e-40db-ab76-5416de684a26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.873192 
4826 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf30cab9-089e-40db-ab76-5416de684a26\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.874001 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bf30cab9-089e-40db-ab76-5416de684a26-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf30cab9-089e-40db-ab76-5416de684a26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.874579 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bf30cab9-089e-40db-ab76-5416de684a26-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf30cab9-089e-40db-ab76-5416de684a26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.874892 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bf30cab9-089e-40db-ab76-5416de684a26-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf30cab9-089e-40db-ab76-5416de684a26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.877389 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bf30cab9-089e-40db-ab76-5416de684a26-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf30cab9-089e-40db-ab76-5416de684a26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.881768 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bf30cab9-089e-40db-ab76-5416de684a26-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf30cab9-089e-40db-ab76-5416de684a26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.882377 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bf30cab9-089e-40db-ab76-5416de684a26-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf30cab9-089e-40db-ab76-5416de684a26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.884392 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bf30cab9-089e-40db-ab76-5416de684a26-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf30cab9-089e-40db-ab76-5416de684a26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.895458 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prdqk\" (UniqueName: \"kubernetes.io/projected/bf30cab9-089e-40db-ab76-5416de684a26-kube-api-access-prdqk\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf30cab9-089e-40db-ab76-5416de684a26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.903477 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/bf30cab9-089e-40db-ab76-5416de684a26-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf30cab9-089e-40db-ab76-5416de684a26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.903595 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf30cab9-089e-40db-ab76-5416de684a26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.903652 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.931255 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-nwcdv" event={"ID":"41df3d47-ec36-4229-a1d7-2e9259527105","Type":"ContainerStarted","Data":"6cb303bb59d721581e2c2c6340c192e0fb9366d7d85873bf1644ee6e2d47e3f4"} Jan 31 07:51:20 crc kubenswrapper[4826]: I0131 07:51:20.932632 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-b86lm" event={"ID":"273b67d0-ba01-4033-93b4-2ca614b573d5","Type":"ContainerStarted","Data":"1aeec336b26559d722f5fa36e268ccc19951d3f43af89ce58214c287ad0adffe"} Jan 31 07:51:21 crc kubenswrapper[4826]: I0131 07:51:21.025102 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:51:21 crc kubenswrapper[4826]: I0131 07:51:21.423434 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 07:51:21 crc kubenswrapper[4826]: I0131 07:51:21.595215 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 07:51:21 crc kubenswrapper[4826]: I0131 07:51:21.800218 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 31 07:51:21 crc kubenswrapper[4826]: I0131 07:51:21.801612 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 31 07:51:21 crc kubenswrapper[4826]: I0131 07:51:21.804556 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 31 07:51:21 crc kubenswrapper[4826]: I0131 07:51:21.804615 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 31 07:51:21 crc kubenswrapper[4826]: I0131 07:51:21.804729 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 31 07:51:21 crc kubenswrapper[4826]: I0131 07:51:21.808239 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-q4bzb" Jan 31 07:51:21 crc kubenswrapper[4826]: I0131 07:51:21.812798 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 31 07:51:21 crc kubenswrapper[4826]: I0131 07:51:21.813000 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 31 07:51:21 crc kubenswrapper[4826]: I0131 07:51:21.889682 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9c715176-281c-43a4-8e08-7b86520d08da-config-data-default\") pod \"openstack-galera-0\" (UID: \"9c715176-281c-43a4-8e08-7b86520d08da\") " pod="openstack/openstack-galera-0" Jan 31 07:51:21 crc kubenswrapper[4826]: I0131 07:51:21.889743 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"9c715176-281c-43a4-8e08-7b86520d08da\") " pod="openstack/openstack-galera-0" Jan 31 07:51:21 crc kubenswrapper[4826]: I0131 07:51:21.889786 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv2hc\" (UniqueName: \"kubernetes.io/projected/9c715176-281c-43a4-8e08-7b86520d08da-kube-api-access-hv2hc\") pod \"openstack-galera-0\" (UID: \"9c715176-281c-43a4-8e08-7b86520d08da\") " pod="openstack/openstack-galera-0" Jan 31 07:51:21 crc kubenswrapper[4826]: I0131 07:51:21.889857 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c715176-281c-43a4-8e08-7b86520d08da-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9c715176-281c-43a4-8e08-7b86520d08da\") " pod="openstack/openstack-galera-0" Jan 31 07:51:21 crc kubenswrapper[4826]: I0131 07:51:21.889887 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c715176-281c-43a4-8e08-7b86520d08da-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9c715176-281c-43a4-8e08-7b86520d08da\") " pod="openstack/openstack-galera-0" Jan 31 07:51:21 crc kubenswrapper[4826]: I0131 07:51:21.889931 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9c715176-281c-43a4-8e08-7b86520d08da-kolla-config\") pod \"openstack-galera-0\" (UID: \"9c715176-281c-43a4-8e08-7b86520d08da\") " pod="openstack/openstack-galera-0" Jan 31 07:51:21 crc kubenswrapper[4826]: I0131 07:51:21.890023 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9c715176-281c-43a4-8e08-7b86520d08da-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9c715176-281c-43a4-8e08-7b86520d08da\") " pod="openstack/openstack-galera-0" Jan 31 07:51:21 crc kubenswrapper[4826]: I0131 07:51:21.891689 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c715176-281c-43a4-8e08-7b86520d08da-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9c715176-281c-43a4-8e08-7b86520d08da\") " pod="openstack/openstack-galera-0" Jan 31 07:51:21 crc kubenswrapper[4826]: I0131 07:51:21.960939 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bf30cab9-089e-40db-ab76-5416de684a26","Type":"ContainerStarted","Data":"98f6b371aa9e7501c91c7f0e6dba0609da167ea5e8b0a94a31a27a1393571482"} Jan 31 07:51:21 crc kubenswrapper[4826]: I0131 07:51:21.962296 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"548eef53-f0eb-46fd-a66d-12825c7c8f67","Type":"ContainerStarted","Data":"0df0bda87563d49d99eb4a2ffc3d81007a6134833d30307c57f65c64357f1dbc"} Jan 31 07:51:21 crc kubenswrapper[4826]: I0131 07:51:21.997482 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c715176-281c-43a4-8e08-7b86520d08da-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9c715176-281c-43a4-8e08-7b86520d08da\") " pod="openstack/openstack-galera-0" Jan 31 07:51:21 crc kubenswrapper[4826]: I0131 07:51:21.997530 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9c715176-281c-43a4-8e08-7b86520d08da-config-data-default\") pod \"openstack-galera-0\" (UID: \"9c715176-281c-43a4-8e08-7b86520d08da\") " pod="openstack/openstack-galera-0" Jan 31 07:51:21 crc kubenswrapper[4826]: I0131 07:51:21.997550 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"9c715176-281c-43a4-8e08-7b86520d08da\") " pod="openstack/openstack-galera-0" Jan 31 07:51:21 crc kubenswrapper[4826]: I0131 07:51:21.997583 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hv2hc\" (UniqueName: \"kubernetes.io/projected/9c715176-281c-43a4-8e08-7b86520d08da-kube-api-access-hv2hc\") pod \"openstack-galera-0\" (UID: \"9c715176-281c-43a4-8e08-7b86520d08da\") " pod="openstack/openstack-galera-0" Jan 31 07:51:21 crc kubenswrapper[4826]: I0131 07:51:21.997624 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c715176-281c-43a4-8e08-7b86520d08da-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9c715176-281c-43a4-8e08-7b86520d08da\") " pod="openstack/openstack-galera-0" Jan 31 07:51:21 crc kubenswrapper[4826]: I0131 07:51:21.997646 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c715176-281c-43a4-8e08-7b86520d08da-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9c715176-281c-43a4-8e08-7b86520d08da\") " pod="openstack/openstack-galera-0" Jan 31 07:51:21 crc kubenswrapper[4826]: I0131 07:51:21.997685 4826 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9c715176-281c-43a4-8e08-7b86520d08da-kolla-config\") pod \"openstack-galera-0\" (UID: \"9c715176-281c-43a4-8e08-7b86520d08da\") " pod="openstack/openstack-galera-0" Jan 31 07:51:21 crc kubenswrapper[4826]: I0131 07:51:21.997716 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9c715176-281c-43a4-8e08-7b86520d08da-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9c715176-281c-43a4-8e08-7b86520d08da\") " pod="openstack/openstack-galera-0" Jan 31 07:51:21 crc kubenswrapper[4826]: I0131 07:51:21.998067 4826 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"9c715176-281c-43a4-8e08-7b86520d08da\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-galera-0" Jan 31 07:51:21 crc kubenswrapper[4826]: I0131 07:51:21.999219 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9c715176-281c-43a4-8e08-7b86520d08da-kolla-config\") pod \"openstack-galera-0\" (UID: \"9c715176-281c-43a4-8e08-7b86520d08da\") " pod="openstack/openstack-galera-0" Jan 31 07:51:22 crc kubenswrapper[4826]: I0131 07:51:21.999996 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c715176-281c-43a4-8e08-7b86520d08da-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9c715176-281c-43a4-8e08-7b86520d08da\") " pod="openstack/openstack-galera-0" Jan 31 07:51:22 crc kubenswrapper[4826]: I0131 07:51:22.000534 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9c715176-281c-43a4-8e08-7b86520d08da-config-data-default\") pod \"openstack-galera-0\" (UID: \"9c715176-281c-43a4-8e08-7b86520d08da\") " pod="openstack/openstack-galera-0" Jan 31 07:51:22 crc kubenswrapper[4826]: I0131 07:51:22.003534 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9c715176-281c-43a4-8e08-7b86520d08da-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9c715176-281c-43a4-8e08-7b86520d08da\") " pod="openstack/openstack-galera-0" Jan 31 07:51:22 crc kubenswrapper[4826]: I0131 07:51:22.009078 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c715176-281c-43a4-8e08-7b86520d08da-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9c715176-281c-43a4-8e08-7b86520d08da\") " pod="openstack/openstack-galera-0" Jan 31 07:51:22 crc kubenswrapper[4826]: I0131 07:51:22.009186 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c715176-281c-43a4-8e08-7b86520d08da-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9c715176-281c-43a4-8e08-7b86520d08da\") " pod="openstack/openstack-galera-0" Jan 31 07:51:22 crc kubenswrapper[4826]: I0131 07:51:22.019245 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hv2hc\" (UniqueName: \"kubernetes.io/projected/9c715176-281c-43a4-8e08-7b86520d08da-kube-api-access-hv2hc\") pod \"openstack-galera-0\" 
(UID: \"9c715176-281c-43a4-8e08-7b86520d08da\") " pod="openstack/openstack-galera-0" Jan 31 07:51:22 crc kubenswrapper[4826]: I0131 07:51:22.023142 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"9c715176-281c-43a4-8e08-7b86520d08da\") " pod="openstack/openstack-galera-0" Jan 31 07:51:22 crc kubenswrapper[4826]: I0131 07:51:22.131027 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 31 07:51:22 crc kubenswrapper[4826]: I0131 07:51:22.649988 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 31 07:51:23 crc kubenswrapper[4826]: I0131 07:51:23.181873 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 31 07:51:23 crc kubenswrapper[4826]: I0131 07:51:23.184238 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 31 07:51:23 crc kubenswrapper[4826]: I0131 07:51:23.193211 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 31 07:51:23 crc kubenswrapper[4826]: I0131 07:51:23.193545 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 31 07:51:23 crc kubenswrapper[4826]: I0131 07:51:23.193826 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 31 07:51:23 crc kubenswrapper[4826]: I0131 07:51:23.195033 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-tp7t8" Jan 31 07:51:23 crc kubenswrapper[4826]: I0131 07:51:23.198004 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 31 07:51:23 crc kubenswrapper[4826]: I0131 07:51:23.214500 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"c533aa15-c4f0-4198-91fc-fd6e4536091d\") " pod="openstack/openstack-cell1-galera-0" Jan 31 07:51:23 crc kubenswrapper[4826]: I0131 07:51:23.214743 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c533aa15-c4f0-4198-91fc-fd6e4536091d-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c533aa15-c4f0-4198-91fc-fd6e4536091d\") " pod="openstack/openstack-cell1-galera-0" Jan 31 07:51:23 crc kubenswrapper[4826]: I0131 07:51:23.215005 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c533aa15-c4f0-4198-91fc-fd6e4536091d-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c533aa15-c4f0-4198-91fc-fd6e4536091d\") " pod="openstack/openstack-cell1-galera-0" Jan 31 07:51:23 crc kubenswrapper[4826]: I0131 07:51:23.215099 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c533aa15-c4f0-4198-91fc-fd6e4536091d-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c533aa15-c4f0-4198-91fc-fd6e4536091d\") " pod="openstack/openstack-cell1-galera-0" Jan 31 
07:51:23 crc kubenswrapper[4826]: I0131 07:51:23.215169 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c533aa15-c4f0-4198-91fc-fd6e4536091d-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c533aa15-c4f0-4198-91fc-fd6e4536091d\") " pod="openstack/openstack-cell1-galera-0" Jan 31 07:51:23 crc kubenswrapper[4826]: I0131 07:51:23.215212 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c533aa15-c4f0-4198-91fc-fd6e4536091d-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c533aa15-c4f0-4198-91fc-fd6e4536091d\") " pod="openstack/openstack-cell1-galera-0" Jan 31 07:51:23 crc kubenswrapper[4826]: I0131 07:51:23.215307 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c533aa15-c4f0-4198-91fc-fd6e4536091d-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c533aa15-c4f0-4198-91fc-fd6e4536091d\") " pod="openstack/openstack-cell1-galera-0" Jan 31 07:51:23 crc kubenswrapper[4826]: I0131 07:51:23.215396 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqtrx\" (UniqueName: \"kubernetes.io/projected/c533aa15-c4f0-4198-91fc-fd6e4536091d-kube-api-access-vqtrx\") pod \"openstack-cell1-galera-0\" (UID: \"c533aa15-c4f0-4198-91fc-fd6e4536091d\") " pod="openstack/openstack-cell1-galera-0" Jan 31 07:51:23 crc kubenswrapper[4826]: I0131 07:51:23.318051 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"c533aa15-c4f0-4198-91fc-fd6e4536091d\") " pod="openstack/openstack-cell1-galera-0" Jan 31 07:51:23 crc kubenswrapper[4826]: I0131 07:51:23.318466 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c533aa15-c4f0-4198-91fc-fd6e4536091d-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c533aa15-c4f0-4198-91fc-fd6e4536091d\") " pod="openstack/openstack-cell1-galera-0" Jan 31 07:51:23 crc kubenswrapper[4826]: I0131 07:51:23.318508 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c533aa15-c4f0-4198-91fc-fd6e4536091d-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c533aa15-c4f0-4198-91fc-fd6e4536091d\") " pod="openstack/openstack-cell1-galera-0" Jan 31 07:51:23 crc kubenswrapper[4826]: I0131 07:51:23.318536 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c533aa15-c4f0-4198-91fc-fd6e4536091d-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c533aa15-c4f0-4198-91fc-fd6e4536091d\") " pod="openstack/openstack-cell1-galera-0" Jan 31 07:51:23 crc kubenswrapper[4826]: I0131 07:51:23.318573 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c533aa15-c4f0-4198-91fc-fd6e4536091d-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c533aa15-c4f0-4198-91fc-fd6e4536091d\") " pod="openstack/openstack-cell1-galera-0" Jan 31 07:51:23 crc kubenswrapper[4826]: 
I0131 07:51:23.318601 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c533aa15-c4f0-4198-91fc-fd6e4536091d-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c533aa15-c4f0-4198-91fc-fd6e4536091d\") " pod="openstack/openstack-cell1-galera-0" Jan 31 07:51:23 crc kubenswrapper[4826]: I0131 07:51:23.318626 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c533aa15-c4f0-4198-91fc-fd6e4536091d-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c533aa15-c4f0-4198-91fc-fd6e4536091d\") " pod="openstack/openstack-cell1-galera-0" Jan 31 07:51:23 crc kubenswrapper[4826]: I0131 07:51:23.318668 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqtrx\" (UniqueName: \"kubernetes.io/projected/c533aa15-c4f0-4198-91fc-fd6e4536091d-kube-api-access-vqtrx\") pod \"openstack-cell1-galera-0\" (UID: \"c533aa15-c4f0-4198-91fc-fd6e4536091d\") " pod="openstack/openstack-cell1-galera-0" Jan 31 07:51:23 crc kubenswrapper[4826]: I0131 07:51:23.319518 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c533aa15-c4f0-4198-91fc-fd6e4536091d-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c533aa15-c4f0-4198-91fc-fd6e4536091d\") " pod="openstack/openstack-cell1-galera-0" Jan 31 07:51:23 crc kubenswrapper[4826]: I0131 07:51:23.319710 4826 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"c533aa15-c4f0-4198-91fc-fd6e4536091d\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/openstack-cell1-galera-0" Jan 31 07:51:23 crc kubenswrapper[4826]: I0131 07:51:23.320284 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c533aa15-c4f0-4198-91fc-fd6e4536091d-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c533aa15-c4f0-4198-91fc-fd6e4536091d\") " pod="openstack/openstack-cell1-galera-0" Jan 31 07:51:23 crc kubenswrapper[4826]: I0131 07:51:23.321106 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c533aa15-c4f0-4198-91fc-fd6e4536091d-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c533aa15-c4f0-4198-91fc-fd6e4536091d\") " pod="openstack/openstack-cell1-galera-0" Jan 31 07:51:23 crc kubenswrapper[4826]: I0131 07:51:23.325466 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c533aa15-c4f0-4198-91fc-fd6e4536091d-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c533aa15-c4f0-4198-91fc-fd6e4536091d\") " pod="openstack/openstack-cell1-galera-0" Jan 31 07:51:23 crc kubenswrapper[4826]: I0131 07:51:23.333616 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c533aa15-c4f0-4198-91fc-fd6e4536091d-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c533aa15-c4f0-4198-91fc-fd6e4536091d\") " pod="openstack/openstack-cell1-galera-0" Jan 31 07:51:23 crc kubenswrapper[4826]: I0131 07:51:23.334090 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c533aa15-c4f0-4198-91fc-fd6e4536091d-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c533aa15-c4f0-4198-91fc-fd6e4536091d\") " pod="openstack/openstack-cell1-galera-0" Jan 31 07:51:23 crc kubenswrapper[4826]: I0131 07:51:23.342345 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqtrx\" (UniqueName: \"kubernetes.io/projected/c533aa15-c4f0-4198-91fc-fd6e4536091d-kube-api-access-vqtrx\") pod \"openstack-cell1-galera-0\" (UID: \"c533aa15-c4f0-4198-91fc-fd6e4536091d\") " pod="openstack/openstack-cell1-galera-0" Jan 31 07:51:23 crc kubenswrapper[4826]: I0131 07:51:23.342444 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 31 07:51:23 crc kubenswrapper[4826]: I0131 07:51:23.343660 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 31 07:51:23 crc kubenswrapper[4826]: I0131 07:51:23.346341 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 31 07:51:23 crc kubenswrapper[4826]: I0131 07:51:23.346357 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 31 07:51:23 crc kubenswrapper[4826]: I0131 07:51:23.366425 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-728jm" Jan 31 07:51:23 crc kubenswrapper[4826]: I0131 07:51:23.366917 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 31 07:51:23 crc kubenswrapper[4826]: I0131 07:51:23.377407 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"c533aa15-c4f0-4198-91fc-fd6e4536091d\") " pod="openstack/openstack-cell1-galera-0" Jan 31 07:51:23 crc kubenswrapper[4826]: I0131 07:51:23.510317 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 31 07:51:23 crc kubenswrapper[4826]: I0131 07:51:23.526423 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bca5de5c-45fd-4f33-89ab-7c2f3296a8be-config-data\") pod \"memcached-0\" (UID: \"bca5de5c-45fd-4f33-89ab-7c2f3296a8be\") " pod="openstack/memcached-0" Jan 31 07:51:23 crc kubenswrapper[4826]: I0131 07:51:23.526493 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/bca5de5c-45fd-4f33-89ab-7c2f3296a8be-memcached-tls-certs\") pod \"memcached-0\" (UID: \"bca5de5c-45fd-4f33-89ab-7c2f3296a8be\") " pod="openstack/memcached-0" Jan 31 07:51:23 crc kubenswrapper[4826]: I0131 07:51:23.526563 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bca5de5c-45fd-4f33-89ab-7c2f3296a8be-kolla-config\") pod \"memcached-0\" (UID: \"bca5de5c-45fd-4f33-89ab-7c2f3296a8be\") " pod="openstack/memcached-0" Jan 31 07:51:23 crc kubenswrapper[4826]: I0131 07:51:23.526607 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bca5de5c-45fd-4f33-89ab-7c2f3296a8be-combined-ca-bundle\") pod \"memcached-0\" (UID: \"bca5de5c-45fd-4f33-89ab-7c2f3296a8be\") " pod="openstack/memcached-0" Jan 31 07:51:23 crc kubenswrapper[4826]: I0131 07:51:23.526649 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfqwm\" (UniqueName: \"kubernetes.io/projected/bca5de5c-45fd-4f33-89ab-7c2f3296a8be-kube-api-access-lfqwm\") pod \"memcached-0\" (UID: \"bca5de5c-45fd-4f33-89ab-7c2f3296a8be\") " pod="openstack/memcached-0" Jan 31 07:51:23 crc kubenswrapper[4826]: I0131 07:51:23.630031 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bca5de5c-45fd-4f33-89ab-7c2f3296a8be-combined-ca-bundle\") pod \"memcached-0\" (UID: \"bca5de5c-45fd-4f33-89ab-7c2f3296a8be\") " pod="openstack/memcached-0" Jan 31 07:51:23 crc kubenswrapper[4826]: I0131 07:51:23.630090 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfqwm\" (UniqueName: \"kubernetes.io/projected/bca5de5c-45fd-4f33-89ab-7c2f3296a8be-kube-api-access-lfqwm\") pod \"memcached-0\" (UID: \"bca5de5c-45fd-4f33-89ab-7c2f3296a8be\") " pod="openstack/memcached-0" Jan 31 07:51:23 crc kubenswrapper[4826]: I0131 07:51:23.630132 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bca5de5c-45fd-4f33-89ab-7c2f3296a8be-config-data\") pod \"memcached-0\" (UID: \"bca5de5c-45fd-4f33-89ab-7c2f3296a8be\") " pod="openstack/memcached-0" Jan 31 07:51:23 crc kubenswrapper[4826]: I0131 07:51:23.630160 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/bca5de5c-45fd-4f33-89ab-7c2f3296a8be-memcached-tls-certs\") pod \"memcached-0\" (UID: \"bca5de5c-45fd-4f33-89ab-7c2f3296a8be\") " pod="openstack/memcached-0" Jan 31 07:51:23 crc kubenswrapper[4826]: I0131 07:51:23.630211 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" 
(UniqueName: \"kubernetes.io/configmap/bca5de5c-45fd-4f33-89ab-7c2f3296a8be-kolla-config\") pod \"memcached-0\" (UID: \"bca5de5c-45fd-4f33-89ab-7c2f3296a8be\") " pod="openstack/memcached-0" Jan 31 07:51:23 crc kubenswrapper[4826]: I0131 07:51:23.631324 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bca5de5c-45fd-4f33-89ab-7c2f3296a8be-kolla-config\") pod \"memcached-0\" (UID: \"bca5de5c-45fd-4f33-89ab-7c2f3296a8be\") " pod="openstack/memcached-0" Jan 31 07:51:23 crc kubenswrapper[4826]: I0131 07:51:23.632510 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bca5de5c-45fd-4f33-89ab-7c2f3296a8be-config-data\") pod \"memcached-0\" (UID: \"bca5de5c-45fd-4f33-89ab-7c2f3296a8be\") " pod="openstack/memcached-0" Jan 31 07:51:23 crc kubenswrapper[4826]: I0131 07:51:23.634413 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bca5de5c-45fd-4f33-89ab-7c2f3296a8be-combined-ca-bundle\") pod \"memcached-0\" (UID: \"bca5de5c-45fd-4f33-89ab-7c2f3296a8be\") " pod="openstack/memcached-0" Jan 31 07:51:23 crc kubenswrapper[4826]: I0131 07:51:23.636459 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/bca5de5c-45fd-4f33-89ab-7c2f3296a8be-memcached-tls-certs\") pod \"memcached-0\" (UID: \"bca5de5c-45fd-4f33-89ab-7c2f3296a8be\") " pod="openstack/memcached-0" Jan 31 07:51:23 crc kubenswrapper[4826]: I0131 07:51:23.646589 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfqwm\" (UniqueName: \"kubernetes.io/projected/bca5de5c-45fd-4f33-89ab-7c2f3296a8be-kube-api-access-lfqwm\") pod \"memcached-0\" (UID: \"bca5de5c-45fd-4f33-89ab-7c2f3296a8be\") " pod="openstack/memcached-0" Jan 31 07:51:23 crc kubenswrapper[4826]: I0131 07:51:23.746940 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 31 07:51:25 crc kubenswrapper[4826]: I0131 07:51:25.427246 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 07:51:25 crc kubenswrapper[4826]: I0131 07:51:25.428458 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 31 07:51:25 crc kubenswrapper[4826]: I0131 07:51:25.434220 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 07:51:25 crc kubenswrapper[4826]: I0131 07:51:25.434580 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-6v676" Jan 31 07:51:25 crc kubenswrapper[4826]: I0131 07:51:25.466673 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdkms\" (UniqueName: \"kubernetes.io/projected/d75dafb0-9882-4d3a-8c85-7be1dc197924-kube-api-access-sdkms\") pod \"kube-state-metrics-0\" (UID: \"d75dafb0-9882-4d3a-8c85-7be1dc197924\") " pod="openstack/kube-state-metrics-0" Jan 31 07:51:25 crc kubenswrapper[4826]: I0131 07:51:25.568780 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdkms\" (UniqueName: \"kubernetes.io/projected/d75dafb0-9882-4d3a-8c85-7be1dc197924-kube-api-access-sdkms\") pod \"kube-state-metrics-0\" (UID: \"d75dafb0-9882-4d3a-8c85-7be1dc197924\") " pod="openstack/kube-state-metrics-0" Jan 31 07:51:25 crc kubenswrapper[4826]: I0131 07:51:25.586801 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdkms\" (UniqueName: \"kubernetes.io/projected/d75dafb0-9882-4d3a-8c85-7be1dc197924-kube-api-access-sdkms\") pod \"kube-state-metrics-0\" (UID: \"d75dafb0-9882-4d3a-8c85-7be1dc197924\") " pod="openstack/kube-state-metrics-0" Jan 31 07:51:25 crc kubenswrapper[4826]: I0131 07:51:25.753598 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 31 07:51:26 crc kubenswrapper[4826]: W0131 07:51:26.377119 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c715176_281c_43a4_8e08_7b86520d08da.slice/crio-e7436729deec04883bcd647d7085d028d20b55a22496581073baeec9f7bd073d WatchSource:0}: Error finding container e7436729deec04883bcd647d7085d028d20b55a22496581073baeec9f7bd073d: Status 404 returned error can't find the container with id e7436729deec04883bcd647d7085d028d20b55a22496581073baeec9f7bd073d Jan 31 07:51:27 crc kubenswrapper[4826]: I0131 07:51:27.013699 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9c715176-281c-43a4-8e08-7b86520d08da","Type":"ContainerStarted","Data":"e7436729deec04883bcd647d7085d028d20b55a22496581073baeec9f7bd073d"} Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.452766 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.455233 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.459404 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-gnclt" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.465952 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.466876 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.467170 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.467755 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.474879 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.487155 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-p6hcp"] Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.488435 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-p6hcp" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.496919 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.497405 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.497926 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-jxtq6" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.530754 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-p6hcp"] Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.537388 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c589c873-9e78-4905-ab37-d49329e9c84f-combined-ca-bundle\") pod \"ovn-controller-p6hcp\" (UID: \"c589c873-9e78-4905-ab37-d49329e9c84f\") " pod="openstack/ovn-controller-p6hcp" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.537474 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddvjt\" (UniqueName: \"kubernetes.io/projected/c589c873-9e78-4905-ab37-d49329e9c84f-kube-api-access-ddvjt\") pod \"ovn-controller-p6hcp\" (UID: \"c589c873-9e78-4905-ab37-d49329e9c84f\") " pod="openstack/ovn-controller-p6hcp" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.537523 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c589c873-9e78-4905-ab37-d49329e9c84f-var-run\") pod \"ovn-controller-p6hcp\" (UID: \"c589c873-9e78-4905-ab37-d49329e9c84f\") " pod="openstack/ovn-controller-p6hcp" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.537736 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c589c873-9e78-4905-ab37-d49329e9c84f-ovn-controller-tls-certs\") pod \"ovn-controller-p6hcp\" (UID: \"c589c873-9e78-4905-ab37-d49329e9c84f\") " pod="openstack/ovn-controller-p6hcp" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.537905 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c589c873-9e78-4905-ab37-d49329e9c84f-scripts\") pod \"ovn-controller-p6hcp\" (UID: \"c589c873-9e78-4905-ab37-d49329e9c84f\") " pod="openstack/ovn-controller-p6hcp" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.537940 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c589c873-9e78-4905-ab37-d49329e9c84f-var-log-ovn\") pod \"ovn-controller-p6hcp\" (UID: \"c589c873-9e78-4905-ab37-d49329e9c84f\") " pod="openstack/ovn-controller-p6hcp" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.538103 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c589c873-9e78-4905-ab37-d49329e9c84f-var-run-ovn\") pod \"ovn-controller-p6hcp\" (UID: \"c589c873-9e78-4905-ab37-d49329e9c84f\") " pod="openstack/ovn-controller-p6hcp" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.569647 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-5wdmn"] Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.584715 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-5wdmn" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.590944 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-5wdmn"] Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.640470 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/89bea4e4-2b41-40b2-b9f3-52a8f2f1fdb6-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"89bea4e4-2b41-40b2-b9f3-52a8f2f1fdb6\") " pod="openstack/ovsdbserver-nb-0" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.640578 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c589c873-9e78-4905-ab37-d49329e9c84f-combined-ca-bundle\") pod \"ovn-controller-p6hcp\" (UID: \"c589c873-9e78-4905-ab37-d49329e9c84f\") " pod="openstack/ovn-controller-p6hcp" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.641719 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddvjt\" (UniqueName: \"kubernetes.io/projected/c589c873-9e78-4905-ab37-d49329e9c84f-kube-api-access-ddvjt\") pod \"ovn-controller-p6hcp\" (UID: \"c589c873-9e78-4905-ab37-d49329e9c84f\") " pod="openstack/ovn-controller-p6hcp" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.641748 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c589c873-9e78-4905-ab37-d49329e9c84f-var-run\") pod \"ovn-controller-p6hcp\" (UID: \"c589c873-9e78-4905-ab37-d49329e9c84f\") " pod="openstack/ovn-controller-p6hcp" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.641790 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/89bea4e4-2b41-40b2-b9f3-52a8f2f1fdb6-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"89bea4e4-2b41-40b2-b9f3-52a8f2f1fdb6\") " pod="openstack/ovsdbserver-nb-0" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.641812 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72pxc\" (UniqueName: \"kubernetes.io/projected/89bea4e4-2b41-40b2-b9f3-52a8f2f1fdb6-kube-api-access-72pxc\") pod \"ovsdbserver-nb-0\" (UID: \"89bea4e4-2b41-40b2-b9f3-52a8f2f1fdb6\") " pod="openstack/ovsdbserver-nb-0" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.641845 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89bea4e4-2b41-40b2-b9f3-52a8f2f1fdb6-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"89bea4e4-2b41-40b2-b9f3-52a8f2f1fdb6\") " pod="openstack/ovsdbserver-nb-0" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.642061 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c589c873-9e78-4905-ab37-d49329e9c84f-ovn-controller-tls-certs\") pod \"ovn-controller-p6hcp\" (UID: \"c589c873-9e78-4905-ab37-d49329e9c84f\") " pod="openstack/ovn-controller-p6hcp" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.642137 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c589c873-9e78-4905-ab37-d49329e9c84f-scripts\") pod \"ovn-controller-p6hcp\" (UID: \"c589c873-9e78-4905-ab37-d49329e9c84f\") " pod="openstack/ovn-controller-p6hcp" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.642163 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c589c873-9e78-4905-ab37-d49329e9c84f-var-log-ovn\") pod \"ovn-controller-p6hcp\" (UID: \"c589c873-9e78-4905-ab37-d49329e9c84f\") " pod="openstack/ovn-controller-p6hcp" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.642241 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c589c873-9e78-4905-ab37-d49329e9c84f-var-run-ovn\") pod \"ovn-controller-p6hcp\" (UID: \"c589c873-9e78-4905-ab37-d49329e9c84f\") " pod="openstack/ovn-controller-p6hcp" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.642267 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c589c873-9e78-4905-ab37-d49329e9c84f-var-run\") pod \"ovn-controller-p6hcp\" (UID: \"c589c873-9e78-4905-ab37-d49329e9c84f\") " pod="openstack/ovn-controller-p6hcp" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.642285 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/89bea4e4-2b41-40b2-b9f3-52a8f2f1fdb6-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"89bea4e4-2b41-40b2-b9f3-52a8f2f1fdb6\") " pod="openstack/ovsdbserver-nb-0" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.642336 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/89bea4e4-2b41-40b2-b9f3-52a8f2f1fdb6-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: 
\"89bea4e4-2b41-40b2-b9f3-52a8f2f1fdb6\") " pod="openstack/ovsdbserver-nb-0" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.642352 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89bea4e4-2b41-40b2-b9f3-52a8f2f1fdb6-config\") pod \"ovsdbserver-nb-0\" (UID: \"89bea4e4-2b41-40b2-b9f3-52a8f2f1fdb6\") " pod="openstack/ovsdbserver-nb-0" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.642386 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"89bea4e4-2b41-40b2-b9f3-52a8f2f1fdb6\") " pod="openstack/ovsdbserver-nb-0" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.642445 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c589c873-9e78-4905-ab37-d49329e9c84f-var-log-ovn\") pod \"ovn-controller-p6hcp\" (UID: \"c589c873-9e78-4905-ab37-d49329e9c84f\") " pod="openstack/ovn-controller-p6hcp" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.642547 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c589c873-9e78-4905-ab37-d49329e9c84f-var-run-ovn\") pod \"ovn-controller-p6hcp\" (UID: \"c589c873-9e78-4905-ab37-d49329e9c84f\") " pod="openstack/ovn-controller-p6hcp" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.644921 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c589c873-9e78-4905-ab37-d49329e9c84f-scripts\") pod \"ovn-controller-p6hcp\" (UID: \"c589c873-9e78-4905-ab37-d49329e9c84f\") " pod="openstack/ovn-controller-p6hcp" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.646102 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c589c873-9e78-4905-ab37-d49329e9c84f-ovn-controller-tls-certs\") pod \"ovn-controller-p6hcp\" (UID: \"c589c873-9e78-4905-ab37-d49329e9c84f\") " pod="openstack/ovn-controller-p6hcp" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.651336 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c589c873-9e78-4905-ab37-d49329e9c84f-combined-ca-bundle\") pod \"ovn-controller-p6hcp\" (UID: \"c589c873-9e78-4905-ab37-d49329e9c84f\") " pod="openstack/ovn-controller-p6hcp" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.656140 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddvjt\" (UniqueName: \"kubernetes.io/projected/c589c873-9e78-4905-ab37-d49329e9c84f-kube-api-access-ddvjt\") pod \"ovn-controller-p6hcp\" (UID: \"c589c873-9e78-4905-ab37-d49329e9c84f\") " pod="openstack/ovn-controller-p6hcp" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.743460 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f568bc2f-bf3c-463a-9af8-d98de17ac7b6-scripts\") pod \"ovn-controller-ovs-5wdmn\" (UID: \"f568bc2f-bf3c-463a-9af8-d98de17ac7b6\") " pod="openstack/ovn-controller-ovs-5wdmn" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.743510 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-run\" (UniqueName: \"kubernetes.io/host-path/f568bc2f-bf3c-463a-9af8-d98de17ac7b6-var-run\") pod \"ovn-controller-ovs-5wdmn\" (UID: \"f568bc2f-bf3c-463a-9af8-d98de17ac7b6\") " pod="openstack/ovn-controller-ovs-5wdmn" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.743536 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/89bea4e4-2b41-40b2-b9f3-52a8f2f1fdb6-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"89bea4e4-2b41-40b2-b9f3-52a8f2f1fdb6\") " pod="openstack/ovsdbserver-nb-0" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.743555 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89bea4e4-2b41-40b2-b9f3-52a8f2f1fdb6-config\") pod \"ovsdbserver-nb-0\" (UID: \"89bea4e4-2b41-40b2-b9f3-52a8f2f1fdb6\") " pod="openstack/ovsdbserver-nb-0" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.743697 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"89bea4e4-2b41-40b2-b9f3-52a8f2f1fdb6\") " pod="openstack/ovsdbserver-nb-0" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.743767 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l2lm\" (UniqueName: \"kubernetes.io/projected/f568bc2f-bf3c-463a-9af8-d98de17ac7b6-kube-api-access-7l2lm\") pod \"ovn-controller-ovs-5wdmn\" (UID: \"f568bc2f-bf3c-463a-9af8-d98de17ac7b6\") " pod="openstack/ovn-controller-ovs-5wdmn" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.743843 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/89bea4e4-2b41-40b2-b9f3-52a8f2f1fdb6-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"89bea4e4-2b41-40b2-b9f3-52a8f2f1fdb6\") " pod="openstack/ovsdbserver-nb-0" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.743930 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/89bea4e4-2b41-40b2-b9f3-52a8f2f1fdb6-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"89bea4e4-2b41-40b2-b9f3-52a8f2f1fdb6\") " pod="openstack/ovsdbserver-nb-0" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.743956 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72pxc\" (UniqueName: \"kubernetes.io/projected/89bea4e4-2b41-40b2-b9f3-52a8f2f1fdb6-kube-api-access-72pxc\") pod \"ovsdbserver-nb-0\" (UID: \"89bea4e4-2b41-40b2-b9f3-52a8f2f1fdb6\") " pod="openstack/ovsdbserver-nb-0" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.744042 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89bea4e4-2b41-40b2-b9f3-52a8f2f1fdb6-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"89bea4e4-2b41-40b2-b9f3-52a8f2f1fdb6\") " pod="openstack/ovsdbserver-nb-0" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.744068 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f568bc2f-bf3c-463a-9af8-d98de17ac7b6-etc-ovs\") pod \"ovn-controller-ovs-5wdmn\" (UID: \"f568bc2f-bf3c-463a-9af8-d98de17ac7b6\") " 
pod="openstack/ovn-controller-ovs-5wdmn" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.744091 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f568bc2f-bf3c-463a-9af8-d98de17ac7b6-var-log\") pod \"ovn-controller-ovs-5wdmn\" (UID: \"f568bc2f-bf3c-463a-9af8-d98de17ac7b6\") " pod="openstack/ovn-controller-ovs-5wdmn" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.744199 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f568bc2f-bf3c-463a-9af8-d98de17ac7b6-var-lib\") pod \"ovn-controller-ovs-5wdmn\" (UID: \"f568bc2f-bf3c-463a-9af8-d98de17ac7b6\") " pod="openstack/ovn-controller-ovs-5wdmn" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.744391 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/89bea4e4-2b41-40b2-b9f3-52a8f2f1fdb6-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"89bea4e4-2b41-40b2-b9f3-52a8f2f1fdb6\") " pod="openstack/ovsdbserver-nb-0" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.744465 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89bea4e4-2b41-40b2-b9f3-52a8f2f1fdb6-config\") pod \"ovsdbserver-nb-0\" (UID: \"89bea4e4-2b41-40b2-b9f3-52a8f2f1fdb6\") " pod="openstack/ovsdbserver-nb-0" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.744808 4826 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"89bea4e4-2b41-40b2-b9f3-52a8f2f1fdb6\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ovsdbserver-nb-0" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.748400 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/89bea4e4-2b41-40b2-b9f3-52a8f2f1fdb6-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"89bea4e4-2b41-40b2-b9f3-52a8f2f1fdb6\") " pod="openstack/ovsdbserver-nb-0" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.748558 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/89bea4e4-2b41-40b2-b9f3-52a8f2f1fdb6-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"89bea4e4-2b41-40b2-b9f3-52a8f2f1fdb6\") " pod="openstack/ovsdbserver-nb-0" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.749079 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/89bea4e4-2b41-40b2-b9f3-52a8f2f1fdb6-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"89bea4e4-2b41-40b2-b9f3-52a8f2f1fdb6\") " pod="openstack/ovsdbserver-nb-0" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.750554 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89bea4e4-2b41-40b2-b9f3-52a8f2f1fdb6-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"89bea4e4-2b41-40b2-b9f3-52a8f2f1fdb6\") " pod="openstack/ovsdbserver-nb-0" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.751009 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/89bea4e4-2b41-40b2-b9f3-52a8f2f1fdb6-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"89bea4e4-2b41-40b2-b9f3-52a8f2f1fdb6\") " pod="openstack/ovsdbserver-nb-0" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.765772 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72pxc\" (UniqueName: \"kubernetes.io/projected/89bea4e4-2b41-40b2-b9f3-52a8f2f1fdb6-kube-api-access-72pxc\") pod \"ovsdbserver-nb-0\" (UID: \"89bea4e4-2b41-40b2-b9f3-52a8f2f1fdb6\") " pod="openstack/ovsdbserver-nb-0" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.766423 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"89bea4e4-2b41-40b2-b9f3-52a8f2f1fdb6\") " pod="openstack/ovsdbserver-nb-0" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.806378 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.846095 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f568bc2f-bf3c-463a-9af8-d98de17ac7b6-var-lib\") pod \"ovn-controller-ovs-5wdmn\" (UID: \"f568bc2f-bf3c-463a-9af8-d98de17ac7b6\") " pod="openstack/ovn-controller-ovs-5wdmn" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.846206 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f568bc2f-bf3c-463a-9af8-d98de17ac7b6-scripts\") pod \"ovn-controller-ovs-5wdmn\" (UID: \"f568bc2f-bf3c-463a-9af8-d98de17ac7b6\") " pod="openstack/ovn-controller-ovs-5wdmn" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.846231 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f568bc2f-bf3c-463a-9af8-d98de17ac7b6-var-run\") pod \"ovn-controller-ovs-5wdmn\" (UID: \"f568bc2f-bf3c-463a-9af8-d98de17ac7b6\") " pod="openstack/ovn-controller-ovs-5wdmn" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.846276 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7l2lm\" (UniqueName: \"kubernetes.io/projected/f568bc2f-bf3c-463a-9af8-d98de17ac7b6-kube-api-access-7l2lm\") pod \"ovn-controller-ovs-5wdmn\" (UID: \"f568bc2f-bf3c-463a-9af8-d98de17ac7b6\") " pod="openstack/ovn-controller-ovs-5wdmn" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.846343 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f568bc2f-bf3c-463a-9af8-d98de17ac7b6-etc-ovs\") pod \"ovn-controller-ovs-5wdmn\" (UID: \"f568bc2f-bf3c-463a-9af8-d98de17ac7b6\") " pod="openstack/ovn-controller-ovs-5wdmn" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.846448 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f568bc2f-bf3c-463a-9af8-d98de17ac7b6-var-run\") pod \"ovn-controller-ovs-5wdmn\" (UID: \"f568bc2f-bf3c-463a-9af8-d98de17ac7b6\") " pod="openstack/ovn-controller-ovs-5wdmn" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.846505 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f568bc2f-bf3c-463a-9af8-d98de17ac7b6-var-lib\") pod 
\"ovn-controller-ovs-5wdmn\" (UID: \"f568bc2f-bf3c-463a-9af8-d98de17ac7b6\") " pod="openstack/ovn-controller-ovs-5wdmn" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.846618 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f568bc2f-bf3c-463a-9af8-d98de17ac7b6-etc-ovs\") pod \"ovn-controller-ovs-5wdmn\" (UID: \"f568bc2f-bf3c-463a-9af8-d98de17ac7b6\") " pod="openstack/ovn-controller-ovs-5wdmn" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.846691 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f568bc2f-bf3c-463a-9af8-d98de17ac7b6-var-log\") pod \"ovn-controller-ovs-5wdmn\" (UID: \"f568bc2f-bf3c-463a-9af8-d98de17ac7b6\") " pod="openstack/ovn-controller-ovs-5wdmn" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.846723 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-p6hcp" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.846960 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f568bc2f-bf3c-463a-9af8-d98de17ac7b6-var-log\") pod \"ovn-controller-ovs-5wdmn\" (UID: \"f568bc2f-bf3c-463a-9af8-d98de17ac7b6\") " pod="openstack/ovn-controller-ovs-5wdmn" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.848989 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f568bc2f-bf3c-463a-9af8-d98de17ac7b6-scripts\") pod \"ovn-controller-ovs-5wdmn\" (UID: \"f568bc2f-bf3c-463a-9af8-d98de17ac7b6\") " pod="openstack/ovn-controller-ovs-5wdmn" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.866517 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7l2lm\" (UniqueName: \"kubernetes.io/projected/f568bc2f-bf3c-463a-9af8-d98de17ac7b6-kube-api-access-7l2lm\") pod \"ovn-controller-ovs-5wdmn\" (UID: \"f568bc2f-bf3c-463a-9af8-d98de17ac7b6\") " pod="openstack/ovn-controller-ovs-5wdmn" Jan 31 07:51:29 crc kubenswrapper[4826]: I0131 07:51:29.916902 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-5wdmn" Jan 31 07:51:31 crc kubenswrapper[4826]: I0131 07:51:31.897906 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 31 07:51:31 crc kubenswrapper[4826]: I0131 07:51:31.899310 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 31 07:51:31 crc kubenswrapper[4826]: W0131 07:51:31.901763 4826 reflector.go:561] object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-p8zzx": failed to list *v1.Secret: secrets "ovncluster-ovndbcluster-sb-dockercfg-p8zzx" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 31 07:51:31 crc kubenswrapper[4826]: E0131 07:51:31.901820 4826 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovncluster-ovndbcluster-sb-dockercfg-p8zzx\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovncluster-ovndbcluster-sb-dockercfg-p8zzx\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 07:51:31 crc kubenswrapper[4826]: W0131 07:51:31.902158 4826 reflector.go:561] object-"openstack"/"ovndbcluster-sb-scripts": failed to list *v1.ConfigMap: configmaps "ovndbcluster-sb-scripts" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 31 07:51:31 crc kubenswrapper[4826]: E0131 07:51:31.902195 4826 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovndbcluster-sb-scripts\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"ovndbcluster-sb-scripts\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 07:51:31 crc kubenswrapper[4826]: W0131 07:51:31.902163 4826 reflector.go:561] object-"openstack"/"cert-ovndbcluster-sb-ovndbs": failed to list *v1.Secret: secrets "cert-ovndbcluster-sb-ovndbs" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 31 07:51:31 crc kubenswrapper[4826]: E0131 07:51:31.902229 4826 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-ovndbcluster-sb-ovndbs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cert-ovndbcluster-sb-ovndbs\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 07:51:31 crc kubenswrapper[4826]: I0131 07:51:31.902307 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 31 07:51:31 crc kubenswrapper[4826]: I0131 07:51:31.934586 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 31 07:51:32 crc kubenswrapper[4826]: I0131 07:51:32.086838 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/532f5068-1ff9-449a-b8ed-80986499afb5-config\") pod \"ovsdbserver-sb-0\" (UID: \"532f5068-1ff9-449a-b8ed-80986499afb5\") " pod="openstack/ovsdbserver-sb-0" Jan 31 07:51:32 crc kubenswrapper[4826]: I0131 07:51:32.086899 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xplzv\" (UniqueName: 
\"kubernetes.io/projected/532f5068-1ff9-449a-b8ed-80986499afb5-kube-api-access-xplzv\") pod \"ovsdbserver-sb-0\" (UID: \"532f5068-1ff9-449a-b8ed-80986499afb5\") " pod="openstack/ovsdbserver-sb-0" Jan 31 07:51:32 crc kubenswrapper[4826]: I0131 07:51:32.086979 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"532f5068-1ff9-449a-b8ed-80986499afb5\") " pod="openstack/ovsdbserver-sb-0" Jan 31 07:51:32 crc kubenswrapper[4826]: I0131 07:51:32.087027 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/532f5068-1ff9-449a-b8ed-80986499afb5-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"532f5068-1ff9-449a-b8ed-80986499afb5\") " pod="openstack/ovsdbserver-sb-0" Jan 31 07:51:32 crc kubenswrapper[4826]: I0131 07:51:32.087056 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/532f5068-1ff9-449a-b8ed-80986499afb5-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"532f5068-1ff9-449a-b8ed-80986499afb5\") " pod="openstack/ovsdbserver-sb-0" Jan 31 07:51:32 crc kubenswrapper[4826]: I0131 07:51:32.087097 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/532f5068-1ff9-449a-b8ed-80986499afb5-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"532f5068-1ff9-449a-b8ed-80986499afb5\") " pod="openstack/ovsdbserver-sb-0" Jan 31 07:51:32 crc kubenswrapper[4826]: I0131 07:51:32.087126 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/532f5068-1ff9-449a-b8ed-80986499afb5-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"532f5068-1ff9-449a-b8ed-80986499afb5\") " pod="openstack/ovsdbserver-sb-0" Jan 31 07:51:32 crc kubenswrapper[4826]: I0131 07:51:32.087152 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/532f5068-1ff9-449a-b8ed-80986499afb5-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"532f5068-1ff9-449a-b8ed-80986499afb5\") " pod="openstack/ovsdbserver-sb-0" Jan 31 07:51:32 crc kubenswrapper[4826]: I0131 07:51:32.188634 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"532f5068-1ff9-449a-b8ed-80986499afb5\") " pod="openstack/ovsdbserver-sb-0" Jan 31 07:51:32 crc kubenswrapper[4826]: I0131 07:51:32.188731 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/532f5068-1ff9-449a-b8ed-80986499afb5-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"532f5068-1ff9-449a-b8ed-80986499afb5\") " pod="openstack/ovsdbserver-sb-0" Jan 31 07:51:32 crc kubenswrapper[4826]: I0131 07:51:32.188787 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/532f5068-1ff9-449a-b8ed-80986499afb5-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"532f5068-1ff9-449a-b8ed-80986499afb5\") " 
pod="openstack/ovsdbserver-sb-0" Jan 31 07:51:32 crc kubenswrapper[4826]: I0131 07:51:32.188833 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/532f5068-1ff9-449a-b8ed-80986499afb5-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"532f5068-1ff9-449a-b8ed-80986499afb5\") " pod="openstack/ovsdbserver-sb-0" Jan 31 07:51:32 crc kubenswrapper[4826]: I0131 07:51:32.188896 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/532f5068-1ff9-449a-b8ed-80986499afb5-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"532f5068-1ff9-449a-b8ed-80986499afb5\") " pod="openstack/ovsdbserver-sb-0" Jan 31 07:51:32 crc kubenswrapper[4826]: I0131 07:51:32.188944 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/532f5068-1ff9-449a-b8ed-80986499afb5-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"532f5068-1ff9-449a-b8ed-80986499afb5\") " pod="openstack/ovsdbserver-sb-0" Jan 31 07:51:32 crc kubenswrapper[4826]: I0131 07:51:32.189027 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/532f5068-1ff9-449a-b8ed-80986499afb5-config\") pod \"ovsdbserver-sb-0\" (UID: \"532f5068-1ff9-449a-b8ed-80986499afb5\") " pod="openstack/ovsdbserver-sb-0" Jan 31 07:51:32 crc kubenswrapper[4826]: I0131 07:51:32.189087 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xplzv\" (UniqueName: \"kubernetes.io/projected/532f5068-1ff9-449a-b8ed-80986499afb5-kube-api-access-xplzv\") pod \"ovsdbserver-sb-0\" (UID: \"532f5068-1ff9-449a-b8ed-80986499afb5\") " pod="openstack/ovsdbserver-sb-0" Jan 31 07:51:32 crc kubenswrapper[4826]: I0131 07:51:32.189465 4826 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"532f5068-1ff9-449a-b8ed-80986499afb5\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/ovsdbserver-sb-0" Jan 31 07:51:32 crc kubenswrapper[4826]: I0131 07:51:32.190902 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/532f5068-1ff9-449a-b8ed-80986499afb5-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"532f5068-1ff9-449a-b8ed-80986499afb5\") " pod="openstack/ovsdbserver-sb-0" Jan 31 07:51:32 crc kubenswrapper[4826]: I0131 07:51:32.194830 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/532f5068-1ff9-449a-b8ed-80986499afb5-config\") pod \"ovsdbserver-sb-0\" (UID: \"532f5068-1ff9-449a-b8ed-80986499afb5\") " pod="openstack/ovsdbserver-sb-0" Jan 31 07:51:32 crc kubenswrapper[4826]: I0131 07:51:32.199360 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/532f5068-1ff9-449a-b8ed-80986499afb5-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"532f5068-1ff9-449a-b8ed-80986499afb5\") " pod="openstack/ovsdbserver-sb-0" Jan 31 07:51:32 crc kubenswrapper[4826]: I0131 07:51:32.200686 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/532f5068-1ff9-449a-b8ed-80986499afb5-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"532f5068-1ff9-449a-b8ed-80986499afb5\") " pod="openstack/ovsdbserver-sb-0" Jan 31 07:51:32 crc kubenswrapper[4826]: I0131 07:51:32.203835 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xplzv\" (UniqueName: \"kubernetes.io/projected/532f5068-1ff9-449a-b8ed-80986499afb5-kube-api-access-xplzv\") pod \"ovsdbserver-sb-0\" (UID: \"532f5068-1ff9-449a-b8ed-80986499afb5\") " pod="openstack/ovsdbserver-sb-0" Jan 31 07:51:32 crc kubenswrapper[4826]: I0131 07:51:32.216097 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"532f5068-1ff9-449a-b8ed-80986499afb5\") " pod="openstack/ovsdbserver-sb-0" Jan 31 07:51:32 crc kubenswrapper[4826]: I0131 07:51:32.861286 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 31 07:51:32 crc kubenswrapper[4826]: I0131 07:51:32.870636 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-p8zzx" Jan 31 07:51:32 crc kubenswrapper[4826]: I0131 07:51:32.876370 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/532f5068-1ff9-449a-b8ed-80986499afb5-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"532f5068-1ff9-449a-b8ed-80986499afb5\") " pod="openstack/ovsdbserver-sb-0" Jan 31 07:51:33 crc kubenswrapper[4826]: E0131 07:51:33.190346 4826 configmap.go:193] Couldn't get configMap openstack/ovndbcluster-sb-scripts: failed to sync configmap cache: timed out waiting for the condition Jan 31 07:51:33 crc kubenswrapper[4826]: E0131 07:51:33.190454 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/532f5068-1ff9-449a-b8ed-80986499afb5-scripts podName:532f5068-1ff9-449a-b8ed-80986499afb5 nodeName:}" failed. No retries permitted until 2026-01-31 07:51:33.690433821 +0000 UTC m=+925.544320180 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/configmap/532f5068-1ff9-449a-b8ed-80986499afb5-scripts") pod "ovsdbserver-sb-0" (UID: "532f5068-1ff9-449a-b8ed-80986499afb5") : failed to sync configmap cache: timed out waiting for the condition Jan 31 07:51:33 crc kubenswrapper[4826]: I0131 07:51:33.320190 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 31 07:51:33 crc kubenswrapper[4826]: I0131 07:51:33.715784 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/532f5068-1ff9-449a-b8ed-80986499afb5-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"532f5068-1ff9-449a-b8ed-80986499afb5\") " pod="openstack/ovsdbserver-sb-0" Jan 31 07:51:33 crc kubenswrapper[4826]: I0131 07:51:33.717437 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/532f5068-1ff9-449a-b8ed-80986499afb5-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"532f5068-1ff9-449a-b8ed-80986499afb5\") " pod="openstack/ovsdbserver-sb-0" Jan 31 07:51:34 crc kubenswrapper[4826]: I0131 07:51:34.015875 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 31 07:51:38 crc kubenswrapper[4826]: E0131 07:51:38.103623 4826 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 31 07:51:38 crc kubenswrapper[4826]: E0131 07:51:38.104524 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s6p2h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-7ln7p_openstack(6200d00e-7b7d-488a-9838-ba4a0d0572a8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 07:51:38 crc kubenswrapper[4826]: E0131 07:51:38.106053 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-7ln7p" podUID="6200d00e-7b7d-488a-9838-ba4a0d0572a8" Jan 31 07:51:38 crc kubenswrapper[4826]: E0131 07:51:38.119754 4826 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 31 07:51:38 crc kubenswrapper[4826]: E0131 07:51:38.119923 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d 
--hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hx665,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-b86lm_openstack(273b67d0-ba01-4033-93b4-2ca614b573d5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 07:51:38 crc kubenswrapper[4826]: E0131 07:51:38.121936 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-b86lm" podUID="273b67d0-ba01-4033-93b4-2ca614b573d5" Jan 31 07:51:38 crc kubenswrapper[4826]: E0131 07:51:38.134142 4826 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 31 07:51:38 crc kubenswrapper[4826]: E0131 07:51:38.134320 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8mvvd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-nwcdv_openstack(41df3d47-ec36-4229-a1d7-2e9259527105): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 07:51:38 crc kubenswrapper[4826]: E0131 07:51:38.135493 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-nwcdv" podUID="41df3d47-ec36-4229-a1d7-2e9259527105" Jan 31 07:51:38 crc kubenswrapper[4826]: E0131 07:51:38.143230 4826 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 31 07:51:38 crc kubenswrapper[4826]: E0131 07:51:38.143360 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vlt6x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-kltf5_openstack(7068488c-c069-41d9-b48d-a5305147962a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 07:51:38 crc kubenswrapper[4826]: E0131 07:51:38.144655 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-kltf5" podUID="7068488c-c069-41d9-b48d-a5305147962a" Jan 31 07:51:38 crc kubenswrapper[4826]: I0131 07:51:38.536935 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 31 07:51:39 crc kubenswrapper[4826]: E0131 07:51:39.108308 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-b86lm" podUID="273b67d0-ba01-4033-93b4-2ca614b573d5" Jan 31 07:51:39 crc kubenswrapper[4826]: E0131 07:51:39.108306 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-nwcdv" podUID="41df3d47-ec36-4229-a1d7-2e9259527105" Jan 31 07:51:40 crc kubenswrapper[4826]: W0131 07:51:40.560185 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc533aa15_c4f0_4198_91fc_fd6e4536091d.slice/crio-3396a7eece5cf5c2a48f444544ceb6ace85e459dd8ff790775d6f8fb28912452 WatchSource:0}: Error finding container 
3396a7eece5cf5c2a48f444544ceb6ace85e459dd8ff790775d6f8fb28912452: Status 404 returned error can't find the container with id 3396a7eece5cf5c2a48f444544ceb6ace85e459dd8ff790775d6f8fb28912452 Jan 31 07:51:40 crc kubenswrapper[4826]: I0131 07:51:40.746951 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-kltf5" Jan 31 07:51:40 crc kubenswrapper[4826]: I0131 07:51:40.827465 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-7ln7p" Jan 31 07:51:40 crc kubenswrapper[4826]: I0131 07:51:40.860902 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7068488c-c069-41d9-b48d-a5305147962a-config\") pod \"7068488c-c069-41d9-b48d-a5305147962a\" (UID: \"7068488c-c069-41d9-b48d-a5305147962a\") " Jan 31 07:51:40 crc kubenswrapper[4826]: I0131 07:51:40.861019 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7068488c-c069-41d9-b48d-a5305147962a-dns-svc\") pod \"7068488c-c069-41d9-b48d-a5305147962a\" (UID: \"7068488c-c069-41d9-b48d-a5305147962a\") " Jan 31 07:51:40 crc kubenswrapper[4826]: I0131 07:51:40.861073 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlt6x\" (UniqueName: \"kubernetes.io/projected/7068488c-c069-41d9-b48d-a5305147962a-kube-api-access-vlt6x\") pod \"7068488c-c069-41d9-b48d-a5305147962a\" (UID: \"7068488c-c069-41d9-b48d-a5305147962a\") " Jan 31 07:51:40 crc kubenswrapper[4826]: I0131 07:51:40.861589 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7068488c-c069-41d9-b48d-a5305147962a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7068488c-c069-41d9-b48d-a5305147962a" (UID: "7068488c-c069-41d9-b48d-a5305147962a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:51:40 crc kubenswrapper[4826]: I0131 07:51:40.861609 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7068488c-c069-41d9-b48d-a5305147962a-config" (OuterVolumeSpecName: "config") pod "7068488c-c069-41d9-b48d-a5305147962a" (UID: "7068488c-c069-41d9-b48d-a5305147962a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:51:40 crc kubenswrapper[4826]: I0131 07:51:40.864466 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7068488c-c069-41d9-b48d-a5305147962a-kube-api-access-vlt6x" (OuterVolumeSpecName: "kube-api-access-vlt6x") pod "7068488c-c069-41d9-b48d-a5305147962a" (UID: "7068488c-c069-41d9-b48d-a5305147962a"). InnerVolumeSpecName "kube-api-access-vlt6x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:51:40 crc kubenswrapper[4826]: I0131 07:51:40.962285 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6200d00e-7b7d-488a-9838-ba4a0d0572a8-config\") pod \"6200d00e-7b7d-488a-9838-ba4a0d0572a8\" (UID: \"6200d00e-7b7d-488a-9838-ba4a0d0572a8\") " Jan 31 07:51:40 crc kubenswrapper[4826]: I0131 07:51:40.963010 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6200d00e-7b7d-488a-9838-ba4a0d0572a8-config" (OuterVolumeSpecName: "config") pod "6200d00e-7b7d-488a-9838-ba4a0d0572a8" (UID: "6200d00e-7b7d-488a-9838-ba4a0d0572a8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:51:40 crc kubenswrapper[4826]: I0131 07:51:40.963028 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6p2h\" (UniqueName: \"kubernetes.io/projected/6200d00e-7b7d-488a-9838-ba4a0d0572a8-kube-api-access-s6p2h\") pod \"6200d00e-7b7d-488a-9838-ba4a0d0572a8\" (UID: \"6200d00e-7b7d-488a-9838-ba4a0d0572a8\") " Jan 31 07:51:40 crc kubenswrapper[4826]: I0131 07:51:40.966378 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7068488c-c069-41d9-b48d-a5305147962a-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:51:40 crc kubenswrapper[4826]: I0131 07:51:40.966427 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6200d00e-7b7d-488a-9838-ba4a0d0572a8-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:51:40 crc kubenswrapper[4826]: I0131 07:51:40.966450 4826 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7068488c-c069-41d9-b48d-a5305147962a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 07:51:40 crc kubenswrapper[4826]: I0131 07:51:40.966470 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlt6x\" (UniqueName: \"kubernetes.io/projected/7068488c-c069-41d9-b48d-a5305147962a-kube-api-access-vlt6x\") on node \"crc\" DevicePath \"\"" Jan 31 07:51:40 crc kubenswrapper[4826]: I0131 07:51:40.966919 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6200d00e-7b7d-488a-9838-ba4a0d0572a8-kube-api-access-s6p2h" (OuterVolumeSpecName: "kube-api-access-s6p2h") pod "6200d00e-7b7d-488a-9838-ba4a0d0572a8" (UID: "6200d00e-7b7d-488a-9838-ba4a0d0572a8"). InnerVolumeSpecName "kube-api-access-s6p2h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:51:41 crc kubenswrapper[4826]: I0131 07:51:41.068240 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6p2h\" (UniqueName: \"kubernetes.io/projected/6200d00e-7b7d-488a-9838-ba4a0d0572a8-kube-api-access-s6p2h\") on node \"crc\" DevicePath \"\"" Jan 31 07:51:41 crc kubenswrapper[4826]: I0131 07:51:41.101629 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 31 07:51:41 crc kubenswrapper[4826]: W0131 07:51:41.110726 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbca5de5c_45fd_4f33_89ab_7c2f3296a8be.slice/crio-27a00d33665b5312c57bc063ce8c7992d5f0d5b25535bdfa6176994fac903f45 WatchSource:0}: Error finding container 27a00d33665b5312c57bc063ce8c7992d5f0d5b25535bdfa6176994fac903f45: Status 404 returned error can't find the container with id 27a00d33665b5312c57bc063ce8c7992d5f0d5b25535bdfa6176994fac903f45 Jan 31 07:51:41 crc kubenswrapper[4826]: I0131 07:51:41.111984 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-p6hcp"] Jan 31 07:51:41 crc kubenswrapper[4826]: W0131 07:51:41.136237 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc589c873_9e78_4905_ab37_d49329e9c84f.slice/crio-cb7bc20f31fe05789bd2675b25e4d56886b20ed67571cc48396365193622f966 WatchSource:0}: Error finding container cb7bc20f31fe05789bd2675b25e4d56886b20ed67571cc48396365193622f966: Status 404 returned error can't find the container with id cb7bc20f31fe05789bd2675b25e4d56886b20ed67571cc48396365193622f966 Jan 31 07:51:41 crc kubenswrapper[4826]: I0131 07:51:41.150582 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c533aa15-c4f0-4198-91fc-fd6e4536091d","Type":"ContainerStarted","Data":"3396a7eece5cf5c2a48f444544ceb6ace85e459dd8ff790775d6f8fb28912452"} Jan 31 07:51:41 crc kubenswrapper[4826]: I0131 07:51:41.154911 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9c715176-281c-43a4-8e08-7b86520d08da","Type":"ContainerStarted","Data":"7f194195bc9ae043130fd4e292080399d318c1582009c723c4ec36d932a40f45"} Jan 31 07:51:41 crc kubenswrapper[4826]: I0131 07:51:41.159161 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"bca5de5c-45fd-4f33-89ab-7c2f3296a8be","Type":"ContainerStarted","Data":"27a00d33665b5312c57bc063ce8c7992d5f0d5b25535bdfa6176994fac903f45"} Jan 31 07:51:41 crc kubenswrapper[4826]: I0131 07:51:41.160921 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-kltf5" event={"ID":"7068488c-c069-41d9-b48d-a5305147962a","Type":"ContainerDied","Data":"9f152f9ea8a640c50449869d01233262a824c84165907d84a2a6c5ccc1047922"} Jan 31 07:51:41 crc kubenswrapper[4826]: I0131 07:51:41.161023 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-kltf5" Jan 31 07:51:41 crc kubenswrapper[4826]: I0131 07:51:41.166434 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-7ln7p" event={"ID":"6200d00e-7b7d-488a-9838-ba4a0d0572a8","Type":"ContainerDied","Data":"82534316e955eab97032fb2f9869cd6010e057440cb5b8fcb1435febf60faaaf"} Jan 31 07:51:41 crc kubenswrapper[4826]: I0131 07:51:41.166502 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-7ln7p" Jan 31 07:51:41 crc kubenswrapper[4826]: I0131 07:51:41.230741 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 31 07:51:41 crc kubenswrapper[4826]: W0131 07:51:41.257675 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod532f5068_1ff9_449a_b8ed_80986499afb5.slice/crio-e32359ac2b3ff2a8da32ce095d773614c7a07d0a18abbcda998599a98897dedc WatchSource:0}: Error finding container e32359ac2b3ff2a8da32ce095d773614c7a07d0a18abbcda998599a98897dedc: Status 404 returned error can't find the container with id e32359ac2b3ff2a8da32ce095d773614c7a07d0a18abbcda998599a98897dedc Jan 31 07:51:41 crc kubenswrapper[4826]: I0131 07:51:41.263307 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-kltf5"] Jan 31 07:51:41 crc kubenswrapper[4826]: I0131 07:51:41.278398 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-kltf5"] Jan 31 07:51:41 crc kubenswrapper[4826]: I0131 07:51:41.302843 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-7ln7p"] Jan 31 07:51:41 crc kubenswrapper[4826]: I0131 07:51:41.324631 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-7ln7p"] Jan 31 07:51:41 crc kubenswrapper[4826]: I0131 07:51:41.335495 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 07:51:41 crc kubenswrapper[4826]: E0131 07:51:41.393634 4826 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7068488c_c069_41d9_b48d_a5305147962a.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6200d00e_7b7d_488a_9838_ba4a0d0572a8.slice/crio-82534316e955eab97032fb2f9869cd6010e057440cb5b8fcb1435febf60faaaf\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6200d00e_7b7d_488a_9838_ba4a0d0572a8.slice\": RecentStats: unable to find data in memory cache]" Jan 31 07:51:41 crc kubenswrapper[4826]: I0131 07:51:41.416552 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-5wdmn"] Jan 31 07:51:42 crc kubenswrapper[4826]: I0131 07:51:42.176430 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-p6hcp" event={"ID":"c589c873-9e78-4905-ab37-d49329e9c84f","Type":"ContainerStarted","Data":"cb7bc20f31fe05789bd2675b25e4d56886b20ed67571cc48396365193622f966"} Jan 31 07:51:42 crc kubenswrapper[4826]: I0131 07:51:42.180028 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"bf30cab9-089e-40db-ab76-5416de684a26","Type":"ContainerStarted","Data":"dcfb3d648c3567e21a9f889b5b49f845d02222028e044f8df7cc9166df845e27"} Jan 31 07:51:42 crc kubenswrapper[4826]: I0131 07:51:42.185261 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"532f5068-1ff9-449a-b8ed-80986499afb5","Type":"ContainerStarted","Data":"e32359ac2b3ff2a8da32ce095d773614c7a07d0a18abbcda998599a98897dedc"} Jan 31 07:51:42 crc kubenswrapper[4826]: I0131 07:51:42.188621 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"548eef53-f0eb-46fd-a66d-12825c7c8f67","Type":"ContainerStarted","Data":"1ca2f763127399ba4ebb27973c09d9963e06163396cf1fc9d28336443058642e"} Jan 31 07:51:42 crc kubenswrapper[4826]: I0131 07:51:42.191035 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5wdmn" event={"ID":"f568bc2f-bf3c-463a-9af8-d98de17ac7b6","Type":"ContainerStarted","Data":"4e3382997a59419f427b3a31a2ba319f6732c70530a8bdd9a2b57a7ea80afd85"} Jan 31 07:51:42 crc kubenswrapper[4826]: I0131 07:51:42.193999 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c533aa15-c4f0-4198-91fc-fd6e4536091d","Type":"ContainerStarted","Data":"6511fd9abcab1c5c661d981afe25f6bd00d8c59b59f756790bd254ca75f2face"} Jan 31 07:51:42 crc kubenswrapper[4826]: I0131 07:51:42.196188 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d75dafb0-9882-4d3a-8c85-7be1dc197924","Type":"ContainerStarted","Data":"7ef7481e187185c3f2009120d4ed4a16080a04c95f5b4f3aa7dd68d0fbdcd33b"} Jan 31 07:51:42 crc kubenswrapper[4826]: I0131 07:51:42.414161 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 31 07:51:42 crc kubenswrapper[4826]: I0131 07:51:42.819326 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6200d00e-7b7d-488a-9838-ba4a0d0572a8" path="/var/lib/kubelet/pods/6200d00e-7b7d-488a-9838-ba4a0d0572a8/volumes" Jan 31 07:51:42 crc kubenswrapper[4826]: I0131 07:51:42.819943 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7068488c-c069-41d9-b48d-a5305147962a" path="/var/lib/kubelet/pods/7068488c-c069-41d9-b48d-a5305147962a/volumes" Jan 31 07:51:43 crc kubenswrapper[4826]: W0131 07:51:43.460157 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89bea4e4_2b41_40b2_b9f3_52a8f2f1fdb6.slice/crio-cc193f31aebe247fd384dfe8ea70a62a7028075e71b225ba74b4fd90bb7201a7 WatchSource:0}: Error finding container cc193f31aebe247fd384dfe8ea70a62a7028075e71b225ba74b4fd90bb7201a7: Status 404 returned error can't find the container with id cc193f31aebe247fd384dfe8ea70a62a7028075e71b225ba74b4fd90bb7201a7 Jan 31 07:51:44 crc kubenswrapper[4826]: I0131 07:51:44.217325 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"89bea4e4-2b41-40b2-b9f3-52a8f2f1fdb6","Type":"ContainerStarted","Data":"cc193f31aebe247fd384dfe8ea70a62a7028075e71b225ba74b4fd90bb7201a7"} Jan 31 07:51:45 crc kubenswrapper[4826]: I0131 07:51:45.229525 4826 generic.go:334] "Generic (PLEG): container finished" podID="c533aa15-c4f0-4198-91fc-fd6e4536091d" containerID="6511fd9abcab1c5c661d981afe25f6bd00d8c59b59f756790bd254ca75f2face" exitCode=0 Jan 31 07:51:45 crc kubenswrapper[4826]: I0131 07:51:45.229634 4826 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c533aa15-c4f0-4198-91fc-fd6e4536091d","Type":"ContainerDied","Data":"6511fd9abcab1c5c661d981afe25f6bd00d8c59b59f756790bd254ca75f2face"} Jan 31 07:51:45 crc kubenswrapper[4826]: I0131 07:51:45.234093 4826 generic.go:334] "Generic (PLEG): container finished" podID="9c715176-281c-43a4-8e08-7b86520d08da" containerID="7f194195bc9ae043130fd4e292080399d318c1582009c723c4ec36d932a40f45" exitCode=0 Jan 31 07:51:45 crc kubenswrapper[4826]: I0131 07:51:45.234141 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9c715176-281c-43a4-8e08-7b86520d08da","Type":"ContainerDied","Data":"7f194195bc9ae043130fd4e292080399d318c1582009c723c4ec36d932a40f45"} Jan 31 07:51:47 crc kubenswrapper[4826]: I0131 07:51:47.251566 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-p6hcp" event={"ID":"c589c873-9e78-4905-ab37-d49329e9c84f","Type":"ContainerStarted","Data":"ba2e04b5b1c60aebbe8b6f6f58088accb443200ba35ae83ae47847081f2760f8"} Jan 31 07:51:47 crc kubenswrapper[4826]: I0131 07:51:47.252526 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-p6hcp" Jan 31 07:51:47 crc kubenswrapper[4826]: I0131 07:51:47.258166 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"89bea4e4-2b41-40b2-b9f3-52a8f2f1fdb6","Type":"ContainerStarted","Data":"a117ecb6199266319a8b3990bc48738203792ac0489f6f63eae1aa6f8047d1b6"} Jan 31 07:51:47 crc kubenswrapper[4826]: I0131 07:51:47.261271 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"532f5068-1ff9-449a-b8ed-80986499afb5","Type":"ContainerStarted","Data":"634aadc1796e749b26c168e35bf08b453592ed66f087291a4eb12edfdbd84001"} Jan 31 07:51:47 crc kubenswrapper[4826]: I0131 07:51:47.263778 4826 generic.go:334] "Generic (PLEG): container finished" podID="f568bc2f-bf3c-463a-9af8-d98de17ac7b6" containerID="32ee348ceedd0f30ab18c2f0fa67ab8390fdcb07074eb3181a0f10a7d24b61ea" exitCode=0 Jan 31 07:51:47 crc kubenswrapper[4826]: I0131 07:51:47.263874 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5wdmn" event={"ID":"f568bc2f-bf3c-463a-9af8-d98de17ac7b6","Type":"ContainerDied","Data":"32ee348ceedd0f30ab18c2f0fa67ab8390fdcb07074eb3181a0f10a7d24b61ea"} Jan 31 07:51:47 crc kubenswrapper[4826]: I0131 07:51:47.266823 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c533aa15-c4f0-4198-91fc-fd6e4536091d","Type":"ContainerStarted","Data":"d16a4563d5b6f31ad4826c81394e42cf7a592811e483ff8dcc235f9b2b9800c8"} Jan 31 07:51:47 crc kubenswrapper[4826]: I0131 07:51:47.269914 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9c715176-281c-43a4-8e08-7b86520d08da","Type":"ContainerStarted","Data":"9c37b88cd490d867d20c391b5170a7378884476270f5b432c3fa2daea85f08f5"} Jan 31 07:51:47 crc kubenswrapper[4826]: I0131 07:51:47.280417 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-p6hcp" podStartSLOduration=13.483193815 podStartE2EDuration="18.280397124s" podCreationTimestamp="2026-01-31 07:51:29 +0000 UTC" firstStartedPulling="2026-01-31 07:51:41.150490332 +0000 UTC m=+933.004376681" lastFinishedPulling="2026-01-31 07:51:45.947693631 +0000 UTC m=+937.801579990" observedRunningTime="2026-01-31 07:51:47.275537451 +0000 UTC m=+939.129423810" 
watchObservedRunningTime="2026-01-31 07:51:47.280397124 +0000 UTC m=+939.134283473" Jan 31 07:51:47 crc kubenswrapper[4826]: I0131 07:51:47.283868 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d75dafb0-9882-4d3a-8c85-7be1dc197924","Type":"ContainerStarted","Data":"83997fadddd58e1f5fb3489c5e255b924b8f97588561d10ff23c77e2338b8f42"} Jan 31 07:51:47 crc kubenswrapper[4826]: I0131 07:51:47.285060 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 31 07:51:47 crc kubenswrapper[4826]: I0131 07:51:47.288003 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"bca5de5c-45fd-4f33-89ab-7c2f3296a8be","Type":"ContainerStarted","Data":"22908177aa784a83b72d3e11a7e2c89198d49a93875152717290f981ccbc7dfc"} Jan 31 07:51:47 crc kubenswrapper[4826]: I0131 07:51:47.288784 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 31 07:51:47 crc kubenswrapper[4826]: I0131 07:51:47.333628 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=12.98638705 podStartE2EDuration="27.333612935s" podCreationTimestamp="2026-01-31 07:51:20 +0000 UTC" firstStartedPulling="2026-01-31 07:51:26.380571464 +0000 UTC m=+918.234457823" lastFinishedPulling="2026-01-31 07:51:40.727797339 +0000 UTC m=+932.581683708" observedRunningTime="2026-01-31 07:51:47.3276015 +0000 UTC m=+939.181487859" watchObservedRunningTime="2026-01-31 07:51:47.333612935 +0000 UTC m=+939.187499294" Jan 31 07:51:47 crc kubenswrapper[4826]: I0131 07:51:47.355799 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=24.934262333 podStartE2EDuration="25.355778224s" podCreationTimestamp="2026-01-31 07:51:22 +0000 UTC" firstStartedPulling="2026-01-31 07:51:40.601828351 +0000 UTC m=+932.455714710" lastFinishedPulling="2026-01-31 07:51:41.023344242 +0000 UTC m=+932.877230601" observedRunningTime="2026-01-31 07:51:47.35564532 +0000 UTC m=+939.209531679" watchObservedRunningTime="2026-01-31 07:51:47.355778224 +0000 UTC m=+939.209664593" Jan 31 07:51:47 crc kubenswrapper[4826]: I0131 07:51:47.407184 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=20.011022435 podStartE2EDuration="24.407160504s" podCreationTimestamp="2026-01-31 07:51:23 +0000 UTC" firstStartedPulling="2026-01-31 07:51:41.114761211 +0000 UTC m=+932.968647580" lastFinishedPulling="2026-01-31 07:51:45.51089926 +0000 UTC m=+937.364785649" observedRunningTime="2026-01-31 07:51:47.384336047 +0000 UTC m=+939.238222406" watchObservedRunningTime="2026-01-31 07:51:47.407160504 +0000 UTC m=+939.261046863" Jan 31 07:51:47 crc kubenswrapper[4826]: I0131 07:51:47.408188 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=17.281781436 podStartE2EDuration="22.408179192s" podCreationTimestamp="2026-01-31 07:51:25 +0000 UTC" firstStartedPulling="2026-01-31 07:51:41.335509501 +0000 UTC m=+933.189395860" lastFinishedPulling="2026-01-31 07:51:46.461907257 +0000 UTC m=+938.315793616" observedRunningTime="2026-01-31 07:51:47.399801772 +0000 UTC m=+939.253688141" watchObservedRunningTime="2026-01-31 07:51:47.408179192 +0000 UTC m=+939.262065551" Jan 31 07:51:48 crc kubenswrapper[4826]: E0131 07:51:48.066876 4826 
upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.13:55208->38.102.83.13:46067: write tcp 38.102.83.13:55208->38.102.83.13:46067: write: broken pipe Jan 31 07:51:48 crc kubenswrapper[4826]: I0131 07:51:48.306034 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5wdmn" event={"ID":"f568bc2f-bf3c-463a-9af8-d98de17ac7b6","Type":"ContainerStarted","Data":"1a5095ab163bdd574a28e4dbe9411e797cf777da2b5c7ef2b73b507214139681"} Jan 31 07:51:49 crc kubenswrapper[4826]: I0131 07:51:49.315315 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5wdmn" event={"ID":"f568bc2f-bf3c-463a-9af8-d98de17ac7b6","Type":"ContainerStarted","Data":"28c7e2a825895ade45e31360339c6fb051f9e49c131a240dd3f3168a7adbf442"} Jan 31 07:51:49 crc kubenswrapper[4826]: I0131 07:51:49.315998 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-5wdmn" Jan 31 07:51:49 crc kubenswrapper[4826]: I0131 07:51:49.316026 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-5wdmn" Jan 31 07:51:49 crc kubenswrapper[4826]: I0131 07:51:49.317143 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"89bea4e4-2b41-40b2-b9f3-52a8f2f1fdb6","Type":"ContainerStarted","Data":"e4926fb45fa8f86cd9a5c8f5ee5d559c7a763a9d24af568a42819c28eca11655"} Jan 31 07:51:49 crc kubenswrapper[4826]: I0131 07:51:49.326039 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"532f5068-1ff9-449a-b8ed-80986499afb5","Type":"ContainerStarted","Data":"c880fb2f92a2724c7a47e3e8c1d361cb289e2bf9794c3cb736735225bd54713b"} Jan 31 07:51:49 crc kubenswrapper[4826]: I0131 07:51:49.350950 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-5wdmn" podStartSLOduration=16.186901597 podStartE2EDuration="20.350921233s" podCreationTimestamp="2026-01-31 07:51:29 +0000 UTC" firstStartedPulling="2026-01-31 07:51:41.454093327 +0000 UTC m=+933.307979706" lastFinishedPulling="2026-01-31 07:51:45.618112943 +0000 UTC m=+937.471999342" observedRunningTime="2026-01-31 07:51:49.338049469 +0000 UTC m=+941.191935838" watchObservedRunningTime="2026-01-31 07:51:49.350921233 +0000 UTC m=+941.204807602" Jan 31 07:51:49 crc kubenswrapper[4826]: I0131 07:51:49.367907 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=16.403166992 podStartE2EDuration="21.367882538s" podCreationTimestamp="2026-01-31 07:51:28 +0000 UTC" firstStartedPulling="2026-01-31 07:51:43.462803328 +0000 UTC m=+935.316689687" lastFinishedPulling="2026-01-31 07:51:48.427518874 +0000 UTC m=+940.281405233" observedRunningTime="2026-01-31 07:51:49.360591938 +0000 UTC m=+941.214478297" watchObservedRunningTime="2026-01-31 07:51:49.367882538 +0000 UTC m=+941.221768927" Jan 31 07:51:49 crc kubenswrapper[4826]: I0131 07:51:49.382422 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=12.208601278 podStartE2EDuration="19.382402157s" podCreationTimestamp="2026-01-31 07:51:30 +0000 UTC" firstStartedPulling="2026-01-31 07:51:41.260983475 +0000 UTC m=+933.114869834" lastFinishedPulling="2026-01-31 07:51:48.434784344 +0000 UTC m=+940.288670713" observedRunningTime="2026-01-31 07:51:49.379662602 +0000 UTC m=+941.233548971" 
watchObservedRunningTime="2026-01-31 07:51:49.382402157 +0000 UTC m=+941.236288516" Jan 31 07:51:49 crc kubenswrapper[4826]: I0131 07:51:49.807156 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 31 07:51:50 crc kubenswrapper[4826]: I0131 07:51:50.807062 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 31 07:51:50 crc kubenswrapper[4826]: I0131 07:51:50.847955 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 31 07:51:51 crc kubenswrapper[4826]: I0131 07:51:51.398060 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 31 07:51:51 crc kubenswrapper[4826]: I0131 07:51:51.710524 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-b86lm"] Jan 31 07:51:51 crc kubenswrapper[4826]: I0131 07:51:51.720843 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-9rw8k"] Jan 31 07:51:51 crc kubenswrapper[4826]: I0131 07:51:51.721986 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-9rw8k" Jan 31 07:51:51 crc kubenswrapper[4826]: I0131 07:51:51.724022 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 31 07:51:51 crc kubenswrapper[4826]: I0131 07:51:51.737387 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-9rw8k"] Jan 31 07:51:51 crc kubenswrapper[4826]: I0131 07:51:51.768606 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-7msrb"] Jan 31 07:51:51 crc kubenswrapper[4826]: I0131 07:51:51.770211 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-7msrb" Jan 31 07:51:51 crc kubenswrapper[4826]: I0131 07:51:51.772507 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 31 07:51:51 crc kubenswrapper[4826]: I0131 07:51:51.786085 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-7msrb"] Jan 31 07:51:51 crc kubenswrapper[4826]: I0131 07:51:51.843685 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d141a506-fc38-4e03-9923-31895d8c3f34-ovn-rundir\") pod \"ovn-controller-metrics-9rw8k\" (UID: \"d141a506-fc38-4e03-9923-31895d8c3f34\") " pod="openstack/ovn-controller-metrics-9rw8k" Jan 31 07:51:51 crc kubenswrapper[4826]: I0131 07:51:51.843742 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd8eba63-06f5-46aa-abfa-d0ec1b11725f-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-7msrb\" (UID: \"dd8eba63-06f5-46aa-abfa-d0ec1b11725f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-7msrb" Jan 31 07:51:51 crc kubenswrapper[4826]: I0131 07:51:51.843770 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksqf5\" (UniqueName: \"kubernetes.io/projected/dd8eba63-06f5-46aa-abfa-d0ec1b11725f-kube-api-access-ksqf5\") pod \"dnsmasq-dns-5bf47b49b7-7msrb\" (UID: \"dd8eba63-06f5-46aa-abfa-d0ec1b11725f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-7msrb" Jan 31 07:51:51 crc kubenswrapper[4826]: I0131 07:51:51.843791 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd8eba63-06f5-46aa-abfa-d0ec1b11725f-config\") pod \"dnsmasq-dns-5bf47b49b7-7msrb\" (UID: \"dd8eba63-06f5-46aa-abfa-d0ec1b11725f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-7msrb" Jan 31 07:51:51 crc kubenswrapper[4826]: I0131 07:51:51.843805 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dd8eba63-06f5-46aa-abfa-d0ec1b11725f-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-7msrb\" (UID: \"dd8eba63-06f5-46aa-abfa-d0ec1b11725f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-7msrb" Jan 31 07:51:51 crc kubenswrapper[4826]: I0131 07:51:51.843824 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxlnr\" (UniqueName: \"kubernetes.io/projected/d141a506-fc38-4e03-9923-31895d8c3f34-kube-api-access-zxlnr\") pod \"ovn-controller-metrics-9rw8k\" (UID: \"d141a506-fc38-4e03-9923-31895d8c3f34\") " pod="openstack/ovn-controller-metrics-9rw8k" Jan 31 07:51:51 crc kubenswrapper[4826]: I0131 07:51:51.843842 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d141a506-fc38-4e03-9923-31895d8c3f34-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-9rw8k\" (UID: \"d141a506-fc38-4e03-9923-31895d8c3f34\") " pod="openstack/ovn-controller-metrics-9rw8k" Jan 31 07:51:51 crc kubenswrapper[4826]: I0131 07:51:51.843864 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d141a506-fc38-4e03-9923-31895d8c3f34-config\") pod \"ovn-controller-metrics-9rw8k\" 
(UID: \"d141a506-fc38-4e03-9923-31895d8c3f34\") " pod="openstack/ovn-controller-metrics-9rw8k" Jan 31 07:51:51 crc kubenswrapper[4826]: I0131 07:51:51.843890 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d141a506-fc38-4e03-9923-31895d8c3f34-combined-ca-bundle\") pod \"ovn-controller-metrics-9rw8k\" (UID: \"d141a506-fc38-4e03-9923-31895d8c3f34\") " pod="openstack/ovn-controller-metrics-9rw8k" Jan 31 07:51:51 crc kubenswrapper[4826]: I0131 07:51:51.843935 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d141a506-fc38-4e03-9923-31895d8c3f34-ovs-rundir\") pod \"ovn-controller-metrics-9rw8k\" (UID: \"d141a506-fc38-4e03-9923-31895d8c3f34\") " pod="openstack/ovn-controller-metrics-9rw8k" Jan 31 07:51:51 crc kubenswrapper[4826]: I0131 07:51:51.944276 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-nwcdv"] Jan 31 07:51:51 crc kubenswrapper[4826]: I0131 07:51:51.945034 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d141a506-fc38-4e03-9923-31895d8c3f34-ovn-rundir\") pod \"ovn-controller-metrics-9rw8k\" (UID: \"d141a506-fc38-4e03-9923-31895d8c3f34\") " pod="openstack/ovn-controller-metrics-9rw8k" Jan 31 07:51:51 crc kubenswrapper[4826]: I0131 07:51:51.945074 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd8eba63-06f5-46aa-abfa-d0ec1b11725f-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-7msrb\" (UID: \"dd8eba63-06f5-46aa-abfa-d0ec1b11725f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-7msrb" Jan 31 07:51:51 crc kubenswrapper[4826]: I0131 07:51:51.945103 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksqf5\" (UniqueName: \"kubernetes.io/projected/dd8eba63-06f5-46aa-abfa-d0ec1b11725f-kube-api-access-ksqf5\") pod \"dnsmasq-dns-5bf47b49b7-7msrb\" (UID: \"dd8eba63-06f5-46aa-abfa-d0ec1b11725f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-7msrb" Jan 31 07:51:51 crc kubenswrapper[4826]: I0131 07:51:51.945121 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd8eba63-06f5-46aa-abfa-d0ec1b11725f-config\") pod \"dnsmasq-dns-5bf47b49b7-7msrb\" (UID: \"dd8eba63-06f5-46aa-abfa-d0ec1b11725f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-7msrb" Jan 31 07:51:51 crc kubenswrapper[4826]: I0131 07:51:51.945136 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dd8eba63-06f5-46aa-abfa-d0ec1b11725f-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-7msrb\" (UID: \"dd8eba63-06f5-46aa-abfa-d0ec1b11725f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-7msrb" Jan 31 07:51:51 crc kubenswrapper[4826]: I0131 07:51:51.945153 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxlnr\" (UniqueName: \"kubernetes.io/projected/d141a506-fc38-4e03-9923-31895d8c3f34-kube-api-access-zxlnr\") pod \"ovn-controller-metrics-9rw8k\" (UID: \"d141a506-fc38-4e03-9923-31895d8c3f34\") " pod="openstack/ovn-controller-metrics-9rw8k" Jan 31 07:51:51 crc kubenswrapper[4826]: I0131 07:51:51.945170 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d141a506-fc38-4e03-9923-31895d8c3f34-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-9rw8k\" (UID: \"d141a506-fc38-4e03-9923-31895d8c3f34\") " pod="openstack/ovn-controller-metrics-9rw8k" Jan 31 07:51:51 crc kubenswrapper[4826]: I0131 07:51:51.945205 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d141a506-fc38-4e03-9923-31895d8c3f34-config\") pod \"ovn-controller-metrics-9rw8k\" (UID: \"d141a506-fc38-4e03-9923-31895d8c3f34\") " pod="openstack/ovn-controller-metrics-9rw8k" Jan 31 07:51:51 crc kubenswrapper[4826]: I0131 07:51:51.945231 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d141a506-fc38-4e03-9923-31895d8c3f34-combined-ca-bundle\") pod \"ovn-controller-metrics-9rw8k\" (UID: \"d141a506-fc38-4e03-9923-31895d8c3f34\") " pod="openstack/ovn-controller-metrics-9rw8k" Jan 31 07:51:51 crc kubenswrapper[4826]: I0131 07:51:51.945273 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d141a506-fc38-4e03-9923-31895d8c3f34-ovs-rundir\") pod \"ovn-controller-metrics-9rw8k\" (UID: \"d141a506-fc38-4e03-9923-31895d8c3f34\") " pod="openstack/ovn-controller-metrics-9rw8k" Jan 31 07:51:51 crc kubenswrapper[4826]: I0131 07:51:51.945324 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d141a506-fc38-4e03-9923-31895d8c3f34-ovn-rundir\") pod \"ovn-controller-metrics-9rw8k\" (UID: \"d141a506-fc38-4e03-9923-31895d8c3f34\") " pod="openstack/ovn-controller-metrics-9rw8k" Jan 31 07:51:51 crc kubenswrapper[4826]: I0131 07:51:51.945357 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d141a506-fc38-4e03-9923-31895d8c3f34-ovs-rundir\") pod \"ovn-controller-metrics-9rw8k\" (UID: \"d141a506-fc38-4e03-9923-31895d8c3f34\") " pod="openstack/ovn-controller-metrics-9rw8k" Jan 31 07:51:51 crc kubenswrapper[4826]: I0131 07:51:51.947060 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dd8eba63-06f5-46aa-abfa-d0ec1b11725f-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-7msrb\" (UID: \"dd8eba63-06f5-46aa-abfa-d0ec1b11725f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-7msrb" Jan 31 07:51:51 crc kubenswrapper[4826]: I0131 07:51:51.947139 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d141a506-fc38-4e03-9923-31895d8c3f34-config\") pod \"ovn-controller-metrics-9rw8k\" (UID: \"d141a506-fc38-4e03-9923-31895d8c3f34\") " pod="openstack/ovn-controller-metrics-9rw8k" Jan 31 07:51:51 crc kubenswrapper[4826]: I0131 07:51:51.947686 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd8eba63-06f5-46aa-abfa-d0ec1b11725f-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-7msrb\" (UID: \"dd8eba63-06f5-46aa-abfa-d0ec1b11725f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-7msrb" Jan 31 07:51:51 crc kubenswrapper[4826]: I0131 07:51:51.948102 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd8eba63-06f5-46aa-abfa-d0ec1b11725f-config\") pod \"dnsmasq-dns-5bf47b49b7-7msrb\" (UID: 
\"dd8eba63-06f5-46aa-abfa-d0ec1b11725f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-7msrb" Jan 31 07:51:51 crc kubenswrapper[4826]: I0131 07:51:51.950923 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d141a506-fc38-4e03-9923-31895d8c3f34-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-9rw8k\" (UID: \"d141a506-fc38-4e03-9923-31895d8c3f34\") " pod="openstack/ovn-controller-metrics-9rw8k" Jan 31 07:51:51 crc kubenswrapper[4826]: I0131 07:51:51.962232 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d141a506-fc38-4e03-9923-31895d8c3f34-combined-ca-bundle\") pod \"ovn-controller-metrics-9rw8k\" (UID: \"d141a506-fc38-4e03-9923-31895d8c3f34\") " pod="openstack/ovn-controller-metrics-9rw8k" Jan 31 07:51:51 crc kubenswrapper[4826]: I0131 07:51:51.964436 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxlnr\" (UniqueName: \"kubernetes.io/projected/d141a506-fc38-4e03-9923-31895d8c3f34-kube-api-access-zxlnr\") pod \"ovn-controller-metrics-9rw8k\" (UID: \"d141a506-fc38-4e03-9923-31895d8c3f34\") " pod="openstack/ovn-controller-metrics-9rw8k" Jan 31 07:51:51 crc kubenswrapper[4826]: I0131 07:51:51.983524 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-48nx2"] Jan 31 07:51:51 crc kubenswrapper[4826]: I0131 07:51:51.984953 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-48nx2" Jan 31 07:51:51 crc kubenswrapper[4826]: I0131 07:51:51.993804 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksqf5\" (UniqueName: \"kubernetes.io/projected/dd8eba63-06f5-46aa-abfa-d0ec1b11725f-kube-api-access-ksqf5\") pod \"dnsmasq-dns-5bf47b49b7-7msrb\" (UID: \"dd8eba63-06f5-46aa-abfa-d0ec1b11725f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-7msrb" Jan 31 07:51:51 crc kubenswrapper[4826]: I0131 07:51:51.993987 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.007784 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-48nx2"] Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.016939 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.043718 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-9rw8k" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.045905 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mww4x\" (UniqueName: \"kubernetes.io/projected/f4a66b21-7551-4e7f-87dc-81bbb08f3c3e-kube-api-access-mww4x\") pod \"dnsmasq-dns-8554648995-48nx2\" (UID: \"f4a66b21-7551-4e7f-87dc-81bbb08f3c3e\") " pod="openstack/dnsmasq-dns-8554648995-48nx2" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.045943 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4a66b21-7551-4e7f-87dc-81bbb08f3c3e-config\") pod \"dnsmasq-dns-8554648995-48nx2\" (UID: \"f4a66b21-7551-4e7f-87dc-81bbb08f3c3e\") " pod="openstack/dnsmasq-dns-8554648995-48nx2" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.046004 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4a66b21-7551-4e7f-87dc-81bbb08f3c3e-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-48nx2\" (UID: \"f4a66b21-7551-4e7f-87dc-81bbb08f3c3e\") " pod="openstack/dnsmasq-dns-8554648995-48nx2" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.046051 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4a66b21-7551-4e7f-87dc-81bbb08f3c3e-dns-svc\") pod \"dnsmasq-dns-8554648995-48nx2\" (UID: \"f4a66b21-7551-4e7f-87dc-81bbb08f3c3e\") " pod="openstack/dnsmasq-dns-8554648995-48nx2" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.046105 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4a66b21-7551-4e7f-87dc-81bbb08f3c3e-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-48nx2\" (UID: \"f4a66b21-7551-4e7f-87dc-81bbb08f3c3e\") " pod="openstack/dnsmasq-dns-8554648995-48nx2" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.115803 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-7msrb" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.132386 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.132425 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.132639 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.147411 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mww4x\" (UniqueName: \"kubernetes.io/projected/f4a66b21-7551-4e7f-87dc-81bbb08f3c3e-kube-api-access-mww4x\") pod \"dnsmasq-dns-8554648995-48nx2\" (UID: \"f4a66b21-7551-4e7f-87dc-81bbb08f3c3e\") " pod="openstack/dnsmasq-dns-8554648995-48nx2" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.147468 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4a66b21-7551-4e7f-87dc-81bbb08f3c3e-config\") pod \"dnsmasq-dns-8554648995-48nx2\" (UID: \"f4a66b21-7551-4e7f-87dc-81bbb08f3c3e\") " pod="openstack/dnsmasq-dns-8554648995-48nx2" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.147537 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4a66b21-7551-4e7f-87dc-81bbb08f3c3e-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-48nx2\" (UID: \"f4a66b21-7551-4e7f-87dc-81bbb08f3c3e\") " pod="openstack/dnsmasq-dns-8554648995-48nx2" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.147598 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4a66b21-7551-4e7f-87dc-81bbb08f3c3e-dns-svc\") pod \"dnsmasq-dns-8554648995-48nx2\" (UID: \"f4a66b21-7551-4e7f-87dc-81bbb08f3c3e\") " pod="openstack/dnsmasq-dns-8554648995-48nx2" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.147682 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4a66b21-7551-4e7f-87dc-81bbb08f3c3e-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-48nx2\" (UID: \"f4a66b21-7551-4e7f-87dc-81bbb08f3c3e\") " pod="openstack/dnsmasq-dns-8554648995-48nx2" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.150659 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4a66b21-7551-4e7f-87dc-81bbb08f3c3e-config\") pod \"dnsmasq-dns-8554648995-48nx2\" (UID: \"f4a66b21-7551-4e7f-87dc-81bbb08f3c3e\") " pod="openstack/dnsmasq-dns-8554648995-48nx2" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.151467 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4a66b21-7551-4e7f-87dc-81bbb08f3c3e-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-48nx2\" (UID: \"f4a66b21-7551-4e7f-87dc-81bbb08f3c3e\") " pod="openstack/dnsmasq-dns-8554648995-48nx2" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.152477 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4a66b21-7551-4e7f-87dc-81bbb08f3c3e-dns-svc\") pod \"dnsmasq-dns-8554648995-48nx2\" (UID: 
\"f4a66b21-7551-4e7f-87dc-81bbb08f3c3e\") " pod="openstack/dnsmasq-dns-8554648995-48nx2" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.153579 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4a66b21-7551-4e7f-87dc-81bbb08f3c3e-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-48nx2\" (UID: \"f4a66b21-7551-4e7f-87dc-81bbb08f3c3e\") " pod="openstack/dnsmasq-dns-8554648995-48nx2" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.196183 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mww4x\" (UniqueName: \"kubernetes.io/projected/f4a66b21-7551-4e7f-87dc-81bbb08f3c3e-kube-api-access-mww4x\") pod \"dnsmasq-dns-8554648995-48nx2\" (UID: \"f4a66b21-7551-4e7f-87dc-81bbb08f3c3e\") " pod="openstack/dnsmasq-dns-8554648995-48nx2" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.273151 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-b86lm" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.350683 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/273b67d0-ba01-4033-93b4-2ca614b573d5-dns-svc\") pod \"273b67d0-ba01-4033-93b4-2ca614b573d5\" (UID: \"273b67d0-ba01-4033-93b4-2ca614b573d5\") " Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.350730 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hx665\" (UniqueName: \"kubernetes.io/projected/273b67d0-ba01-4033-93b4-2ca614b573d5-kube-api-access-hx665\") pod \"273b67d0-ba01-4033-93b4-2ca614b573d5\" (UID: \"273b67d0-ba01-4033-93b4-2ca614b573d5\") " Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.350873 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/273b67d0-ba01-4033-93b4-2ca614b573d5-config\") pod \"273b67d0-ba01-4033-93b4-2ca614b573d5\" (UID: \"273b67d0-ba01-4033-93b4-2ca614b573d5\") " Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.351882 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/273b67d0-ba01-4033-93b4-2ca614b573d5-config" (OuterVolumeSpecName: "config") pod "273b67d0-ba01-4033-93b4-2ca614b573d5" (UID: "273b67d0-ba01-4033-93b4-2ca614b573d5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.353063 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/273b67d0-ba01-4033-93b4-2ca614b573d5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "273b67d0-ba01-4033-93b4-2ca614b573d5" (UID: "273b67d0-ba01-4033-93b4-2ca614b573d5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.371867 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/273b67d0-ba01-4033-93b4-2ca614b573d5-kube-api-access-hx665" (OuterVolumeSpecName: "kube-api-access-hx665") pod "273b67d0-ba01-4033-93b4-2ca614b573d5" (UID: "273b67d0-ba01-4033-93b4-2ca614b573d5"). InnerVolumeSpecName "kube-api-access-hx665". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.403412 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-48nx2" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.422695 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-b86lm" event={"ID":"273b67d0-ba01-4033-93b4-2ca614b573d5","Type":"ContainerDied","Data":"1aeec336b26559d722f5fa36e268ccc19951d3f43af89ce58214c287ad0adffe"} Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.422914 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-b86lm" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.424097 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.463742 4826 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/273b67d0-ba01-4033-93b4-2ca614b573d5-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.463775 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hx665\" (UniqueName: \"kubernetes.io/projected/273b67d0-ba01-4033-93b4-2ca614b573d5-kube-api-access-hx665\") on node \"crc\" DevicePath \"\"" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.463789 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/273b67d0-ba01-4033-93b4-2ca614b573d5-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.476900 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.480198 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.508219 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-b86lm"] Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.509440 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-nwcdv" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.513613 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-b86lm"] Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.607709 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.669140 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mvvd\" (UniqueName: \"kubernetes.io/projected/41df3d47-ec36-4229-a1d7-2e9259527105-kube-api-access-8mvvd\") pod \"41df3d47-ec36-4229-a1d7-2e9259527105\" (UID: \"41df3d47-ec36-4229-a1d7-2e9259527105\") " Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.669927 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41df3d47-ec36-4229-a1d7-2e9259527105-dns-svc\") pod \"41df3d47-ec36-4229-a1d7-2e9259527105\" (UID: \"41df3d47-ec36-4229-a1d7-2e9259527105\") " Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.670115 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41df3d47-ec36-4229-a1d7-2e9259527105-config\") pod \"41df3d47-ec36-4229-a1d7-2e9259527105\" (UID: \"41df3d47-ec36-4229-a1d7-2e9259527105\") " Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.670481 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41df3d47-ec36-4229-a1d7-2e9259527105-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "41df3d47-ec36-4229-a1d7-2e9259527105" (UID: "41df3d47-ec36-4229-a1d7-2e9259527105"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.670543 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41df3d47-ec36-4229-a1d7-2e9259527105-config" (OuterVolumeSpecName: "config") pod "41df3d47-ec36-4229-a1d7-2e9259527105" (UID: "41df3d47-ec36-4229-a1d7-2e9259527105"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.672874 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41df3d47-ec36-4229-a1d7-2e9259527105-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.673142 4826 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41df3d47-ec36-4229-a1d7-2e9259527105-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.675383 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41df3d47-ec36-4229-a1d7-2e9259527105-kube-api-access-8mvvd" (OuterVolumeSpecName: "kube-api-access-8mvvd") pod "41df3d47-ec36-4229-a1d7-2e9259527105" (UID: "41df3d47-ec36-4229-a1d7-2e9259527105"). InnerVolumeSpecName "kube-api-access-8mvvd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.774998 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mvvd\" (UniqueName: \"kubernetes.io/projected/41df3d47-ec36-4229-a1d7-2e9259527105-kube-api-access-8mvvd\") on node \"crc\" DevicePath \"\"" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.799777 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-9rw8k"] Jan 31 07:51:52 crc kubenswrapper[4826]: W0131 07:51:52.803289 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd141a506_fc38_4e03_9923_31895d8c3f34.slice/crio-b4a0cf7fb012b192692710c19822b5ab452b0542171398301835ea671faff80a WatchSource:0}: Error finding container b4a0cf7fb012b192692710c19822b5ab452b0542171398301835ea671faff80a: Status 404 returned error can't find the container with id b4a0cf7fb012b192692710c19822b5ab452b0542171398301835ea671faff80a Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.830384 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="273b67d0-ba01-4033-93b4-2ca614b573d5" path="/var/lib/kubelet/pods/273b67d0-ba01-4033-93b4-2ca614b573d5/volumes" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.854376 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.864296 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.880813 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17b9480e-d5e0-4478-9f5c-85caf6bb8f0a-config\") pod \"ovn-northd-0\" (UID: \"17b9480e-d5e0-4478-9f5c-85caf6bb8f0a\") " pod="openstack/ovn-northd-0" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.880882 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/17b9480e-d5e0-4478-9f5c-85caf6bb8f0a-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"17b9480e-d5e0-4478-9f5c-85caf6bb8f0a\") " pod="openstack/ovn-northd-0" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.880907 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2j4b\" (UniqueName: \"kubernetes.io/projected/17b9480e-d5e0-4478-9f5c-85caf6bb8f0a-kube-api-access-j2j4b\") pod \"ovn-northd-0\" (UID: \"17b9480e-d5e0-4478-9f5c-85caf6bb8f0a\") " pod="openstack/ovn-northd-0" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.880934 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/17b9480e-d5e0-4478-9f5c-85caf6bb8f0a-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"17b9480e-d5e0-4478-9f5c-85caf6bb8f0a\") " pod="openstack/ovn-northd-0" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.880980 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/17b9480e-d5e0-4478-9f5c-85caf6bb8f0a-scripts\") pod \"ovn-northd-0\" (UID: \"17b9480e-d5e0-4478-9f5c-85caf6bb8f0a\") " pod="openstack/ovn-northd-0" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.881024 4826 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/17b9480e-d5e0-4478-9f5c-85caf6bb8f0a-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"17b9480e-d5e0-4478-9f5c-85caf6bb8f0a\") " pod="openstack/ovn-northd-0" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.881072 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17b9480e-d5e0-4478-9f5c-85caf6bb8f0a-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"17b9480e-d5e0-4478-9f5c-85caf6bb8f0a\") " pod="openstack/ovn-northd-0" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.915415 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.921868 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-zwnqr" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.922358 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.922547 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.922813 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.956666 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-7msrb"] Jan 31 07:51:52 crc kubenswrapper[4826]: W0131 07:51:52.965074 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd8eba63_06f5_46aa_abfa_d0ec1b11725f.slice/crio-692f7115e0c9c74fa67669574e035655add89f9467e67dc8283e289730f17ba3 WatchSource:0}: Error finding container 692f7115e0c9c74fa67669574e035655add89f9467e67dc8283e289730f17ba3: Status 404 returned error can't find the container with id 692f7115e0c9c74fa67669574e035655add89f9467e67dc8283e289730f17ba3 Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.982270 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17b9480e-d5e0-4478-9f5c-85caf6bb8f0a-config\") pod \"ovn-northd-0\" (UID: \"17b9480e-d5e0-4478-9f5c-85caf6bb8f0a\") " pod="openstack/ovn-northd-0" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.982698 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/17b9480e-d5e0-4478-9f5c-85caf6bb8f0a-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"17b9480e-d5e0-4478-9f5c-85caf6bb8f0a\") " pod="openstack/ovn-northd-0" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.982732 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2j4b\" (UniqueName: \"kubernetes.io/projected/17b9480e-d5e0-4478-9f5c-85caf6bb8f0a-kube-api-access-j2j4b\") pod \"ovn-northd-0\" (UID: \"17b9480e-d5e0-4478-9f5c-85caf6bb8f0a\") " pod="openstack/ovn-northd-0" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.982763 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/17b9480e-d5e0-4478-9f5c-85caf6bb8f0a-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"17b9480e-d5e0-4478-9f5c-85caf6bb8f0a\") " pod="openstack/ovn-northd-0" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.982800 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/17b9480e-d5e0-4478-9f5c-85caf6bb8f0a-scripts\") pod \"ovn-northd-0\" (UID: \"17b9480e-d5e0-4478-9f5c-85caf6bb8f0a\") " pod="openstack/ovn-northd-0" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.982840 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/17b9480e-d5e0-4478-9f5c-85caf6bb8f0a-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"17b9480e-d5e0-4478-9f5c-85caf6bb8f0a\") " pod="openstack/ovn-northd-0" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.982891 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17b9480e-d5e0-4478-9f5c-85caf6bb8f0a-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"17b9480e-d5e0-4478-9f5c-85caf6bb8f0a\") " pod="openstack/ovn-northd-0" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.983376 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/17b9480e-d5e0-4478-9f5c-85caf6bb8f0a-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"17b9480e-d5e0-4478-9f5c-85caf6bb8f0a\") " pod="openstack/ovn-northd-0" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.983780 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17b9480e-d5e0-4478-9f5c-85caf6bb8f0a-config\") pod \"ovn-northd-0\" (UID: \"17b9480e-d5e0-4478-9f5c-85caf6bb8f0a\") " pod="openstack/ovn-northd-0" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.984261 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/17b9480e-d5e0-4478-9f5c-85caf6bb8f0a-scripts\") pod \"ovn-northd-0\" (UID: \"17b9480e-d5e0-4478-9f5c-85caf6bb8f0a\") " pod="openstack/ovn-northd-0" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.986367 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/17b9480e-d5e0-4478-9f5c-85caf6bb8f0a-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"17b9480e-d5e0-4478-9f5c-85caf6bb8f0a\") " pod="openstack/ovn-northd-0" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.986905 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/17b9480e-d5e0-4478-9f5c-85caf6bb8f0a-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"17b9480e-d5e0-4478-9f5c-85caf6bb8f0a\") " pod="openstack/ovn-northd-0" Jan 31 07:51:52 crc kubenswrapper[4826]: I0131 07:51:52.986981 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17b9480e-d5e0-4478-9f5c-85caf6bb8f0a-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"17b9480e-d5e0-4478-9f5c-85caf6bb8f0a\") " pod="openstack/ovn-northd-0" Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.005649 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2j4b\" (UniqueName: 
\"kubernetes.io/projected/17b9480e-d5e0-4478-9f5c-85caf6bb8f0a-kube-api-access-j2j4b\") pod \"ovn-northd-0\" (UID: \"17b9480e-d5e0-4478-9f5c-85caf6bb8f0a\") " pod="openstack/ovn-northd-0" Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.043744 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-48nx2"] Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.194995 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.370615 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-2a1e-account-create-update-s98bk"] Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.375539 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2a1e-account-create-update-s98bk" Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.378512 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.379050 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-9lhm4"] Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.380029 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-9lhm4" Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.392745 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-9lhm4"] Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.401831 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-2a1e-account-create-update-s98bk"] Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.453841 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-7msrb" event={"ID":"dd8eba63-06f5-46aa-abfa-d0ec1b11725f","Type":"ContainerStarted","Data":"692f7115e0c9c74fa67669574e035655add89f9467e67dc8283e289730f17ba3"} Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.456241 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-9rw8k" event={"ID":"d141a506-fc38-4e03-9923-31895d8c3f34","Type":"ContainerStarted","Data":"f94e31d69cf45b608ba76b8dd61ba2ecfbae45ea611501a4647a39a9598f159a"} Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.456292 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-9rw8k" event={"ID":"d141a506-fc38-4e03-9923-31895d8c3f34","Type":"ContainerStarted","Data":"b4a0cf7fb012b192692710c19822b5ab452b0542171398301835ea671faff80a"} Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.458037 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-48nx2" event={"ID":"f4a66b21-7551-4e7f-87dc-81bbb08f3c3e","Type":"ContainerStarted","Data":"d382a56a9ebdc633f1e8b7689aec4d71c16daed13c3b79e10b797fdc0aeefbc8"} Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.459526 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-nwcdv" event={"ID":"41df3d47-ec36-4229-a1d7-2e9259527105","Type":"ContainerDied","Data":"6cb303bb59d721581e2c2c6340c192e0fb9366d7d85873bf1644ee6e2d47e3f4"} Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.459624 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-nwcdv" Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.482589 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-9rw8k" podStartSLOduration=2.48256989 podStartE2EDuration="2.48256989s" podCreationTimestamp="2026-01-31 07:51:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:51:53.472053812 +0000 UTC m=+945.325940161" watchObservedRunningTime="2026-01-31 07:51:53.48256989 +0000 UTC m=+945.336456249" Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.492676 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93e4def3-bd04-4842-8f4d-d49888336a07-operator-scripts\") pod \"keystone-db-create-9lhm4\" (UID: \"93e4def3-bd04-4842-8f4d-d49888336a07\") " pod="openstack/keystone-db-create-9lhm4" Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.492733 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de770cde-f91f-4153-bdff-54dd47878bd6-operator-scripts\") pod \"keystone-2a1e-account-create-update-s98bk\" (UID: \"de770cde-f91f-4153-bdff-54dd47878bd6\") " pod="openstack/keystone-2a1e-account-create-update-s98bk" Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.492764 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbbjv\" (UniqueName: \"kubernetes.io/projected/93e4def3-bd04-4842-8f4d-d49888336a07-kube-api-access-xbbjv\") pod \"keystone-db-create-9lhm4\" (UID: \"93e4def3-bd04-4842-8f4d-d49888336a07\") " pod="openstack/keystone-db-create-9lhm4" Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.492929 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m87xs\" (UniqueName: \"kubernetes.io/projected/de770cde-f91f-4153-bdff-54dd47878bd6-kube-api-access-m87xs\") pod \"keystone-2a1e-account-create-update-s98bk\" (UID: \"de770cde-f91f-4153-bdff-54dd47878bd6\") " pod="openstack/keystone-2a1e-account-create-update-s98bk" Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.511511 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.511554 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.516344 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-nwcdv"] Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.522656 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-nwcdv"] Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.561593 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-nxksm"] Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.562517 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-nxksm" Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.581865 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-nxksm"] Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.597887 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m87xs\" (UniqueName: \"kubernetes.io/projected/de770cde-f91f-4153-bdff-54dd47878bd6-kube-api-access-m87xs\") pod \"keystone-2a1e-account-create-update-s98bk\" (UID: \"de770cde-f91f-4153-bdff-54dd47878bd6\") " pod="openstack/keystone-2a1e-account-create-update-s98bk" Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.598200 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93e4def3-bd04-4842-8f4d-d49888336a07-operator-scripts\") pod \"keystone-db-create-9lhm4\" (UID: \"93e4def3-bd04-4842-8f4d-d49888336a07\") " pod="openstack/keystone-db-create-9lhm4" Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.598312 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de770cde-f91f-4153-bdff-54dd47878bd6-operator-scripts\") pod \"keystone-2a1e-account-create-update-s98bk\" (UID: \"de770cde-f91f-4153-bdff-54dd47878bd6\") " pod="openstack/keystone-2a1e-account-create-update-s98bk" Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.598399 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbbjv\" (UniqueName: \"kubernetes.io/projected/93e4def3-bd04-4842-8f4d-d49888336a07-kube-api-access-xbbjv\") pod \"keystone-db-create-9lhm4\" (UID: \"93e4def3-bd04-4842-8f4d-d49888336a07\") " pod="openstack/keystone-db-create-9lhm4" Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.602352 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de770cde-f91f-4153-bdff-54dd47878bd6-operator-scripts\") pod \"keystone-2a1e-account-create-update-s98bk\" (UID: \"de770cde-f91f-4153-bdff-54dd47878bd6\") " pod="openstack/keystone-2a1e-account-create-update-s98bk" Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.602374 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93e4def3-bd04-4842-8f4d-d49888336a07-operator-scripts\") pod \"keystone-db-create-9lhm4\" (UID: \"93e4def3-bd04-4842-8f4d-d49888336a07\") " pod="openstack/keystone-db-create-9lhm4" Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.610165 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.615211 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m87xs\" (UniqueName: \"kubernetes.io/projected/de770cde-f91f-4153-bdff-54dd47878bd6-kube-api-access-m87xs\") pod \"keystone-2a1e-account-create-update-s98bk\" (UID: \"de770cde-f91f-4153-bdff-54dd47878bd6\") " pod="openstack/keystone-2a1e-account-create-update-s98bk" Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.618595 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbbjv\" (UniqueName: \"kubernetes.io/projected/93e4def3-bd04-4842-8f4d-d49888336a07-kube-api-access-xbbjv\") pod \"keystone-db-create-9lhm4\" (UID: 
\"93e4def3-bd04-4842-8f4d-d49888336a07\") " pod="openstack/keystone-db-create-9lhm4" Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.636251 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 31 07:51:53 crc kubenswrapper[4826]: W0131 07:51:53.651461 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17b9480e_d5e0_4478_9f5c_85caf6bb8f0a.slice/crio-356975f454eba9a6c3a514987d8573a15c283eb161cfe2c7f421da7db7acd4ac WatchSource:0}: Error finding container 356975f454eba9a6c3a514987d8573a15c283eb161cfe2c7f421da7db7acd4ac: Status 404 returned error can't find the container with id 356975f454eba9a6c3a514987d8573a15c283eb161cfe2c7f421da7db7acd4ac Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.663446 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-00b3-account-create-update-jp5gs"] Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.664549 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-00b3-account-create-update-jp5gs" Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.666797 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.671008 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-00b3-account-create-update-jp5gs"] Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.696181 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2a1e-account-create-update-s98bk" Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.700618 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/526b4807-5bd0-4aff-837d-31afeb09aef6-operator-scripts\") pod \"placement-db-create-nxksm\" (UID: \"526b4807-5bd0-4aff-837d-31afeb09aef6\") " pod="openstack/placement-db-create-nxksm" Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.700689 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv658\" (UniqueName: \"kubernetes.io/projected/526b4807-5bd0-4aff-837d-31afeb09aef6-kube-api-access-nv658\") pod \"placement-db-create-nxksm\" (UID: \"526b4807-5bd0-4aff-837d-31afeb09aef6\") " pod="openstack/placement-db-create-nxksm" Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.702161 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-9lhm4" Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.749021 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.804762 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6acbf5f6-ffdb-48e3-9ddf-b2667b4f84e3-operator-scripts\") pod \"placement-00b3-account-create-update-jp5gs\" (UID: \"6acbf5f6-ffdb-48e3-9ddf-b2667b4f84e3\") " pod="openstack/placement-00b3-account-create-update-jp5gs" Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.805153 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pqj6\" (UniqueName: \"kubernetes.io/projected/6acbf5f6-ffdb-48e3-9ddf-b2667b4f84e3-kube-api-access-6pqj6\") pod \"placement-00b3-account-create-update-jp5gs\" (UID: \"6acbf5f6-ffdb-48e3-9ddf-b2667b4f84e3\") " pod="openstack/placement-00b3-account-create-update-jp5gs" Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.805227 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/526b4807-5bd0-4aff-837d-31afeb09aef6-operator-scripts\") pod \"placement-db-create-nxksm\" (UID: \"526b4807-5bd0-4aff-837d-31afeb09aef6\") " pod="openstack/placement-db-create-nxksm" Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.805302 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv658\" (UniqueName: \"kubernetes.io/projected/526b4807-5bd0-4aff-837d-31afeb09aef6-kube-api-access-nv658\") pod \"placement-db-create-nxksm\" (UID: \"526b4807-5bd0-4aff-837d-31afeb09aef6\") " pod="openstack/placement-db-create-nxksm" Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.805991 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/526b4807-5bd0-4aff-837d-31afeb09aef6-operator-scripts\") pod \"placement-db-create-nxksm\" (UID: \"526b4807-5bd0-4aff-837d-31afeb09aef6\") " pod="openstack/placement-db-create-nxksm" Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.847019 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nv658\" (UniqueName: \"kubernetes.io/projected/526b4807-5bd0-4aff-837d-31afeb09aef6-kube-api-access-nv658\") pod \"placement-db-create-nxksm\" (UID: \"526b4807-5bd0-4aff-837d-31afeb09aef6\") " pod="openstack/placement-db-create-nxksm" Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.889335 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-nxksm" Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.907451 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6acbf5f6-ffdb-48e3-9ddf-b2667b4f84e3-operator-scripts\") pod \"placement-00b3-account-create-update-jp5gs\" (UID: \"6acbf5f6-ffdb-48e3-9ddf-b2667b4f84e3\") " pod="openstack/placement-00b3-account-create-update-jp5gs" Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.907540 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pqj6\" (UniqueName: \"kubernetes.io/projected/6acbf5f6-ffdb-48e3-9ddf-b2667b4f84e3-kube-api-access-6pqj6\") pod \"placement-00b3-account-create-update-jp5gs\" (UID: \"6acbf5f6-ffdb-48e3-9ddf-b2667b4f84e3\") " pod="openstack/placement-00b3-account-create-update-jp5gs" Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.908904 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6acbf5f6-ffdb-48e3-9ddf-b2667b4f84e3-operator-scripts\") pod \"placement-00b3-account-create-update-jp5gs\" (UID: \"6acbf5f6-ffdb-48e3-9ddf-b2667b4f84e3\") " pod="openstack/placement-00b3-account-create-update-jp5gs" Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.924311 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pqj6\" (UniqueName: \"kubernetes.io/projected/6acbf5f6-ffdb-48e3-9ddf-b2667b4f84e3-kube-api-access-6pqj6\") pod \"placement-00b3-account-create-update-jp5gs\" (UID: \"6acbf5f6-ffdb-48e3-9ddf-b2667b4f84e3\") " pod="openstack/placement-00b3-account-create-update-jp5gs" Jan 31 07:51:53 crc kubenswrapper[4826]: I0131 07:51:53.978360 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-00b3-account-create-update-jp5gs" Jan 31 07:51:54 crc kubenswrapper[4826]: I0131 07:51:53.994540 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-9887k"] Jan 31 07:51:54 crc kubenswrapper[4826]: I0131 07:51:54.002269 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-9887k" Jan 31 07:51:54 crc kubenswrapper[4826]: I0131 07:51:54.009458 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-9887k"] Jan 31 07:51:54 crc kubenswrapper[4826]: I0131 07:51:54.071736 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-dfdc-account-create-update-r5qt4"] Jan 31 07:51:54 crc kubenswrapper[4826]: I0131 07:51:54.075393 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-dfdc-account-create-update-r5qt4" Jan 31 07:51:54 crc kubenswrapper[4826]: I0131 07:51:54.091948 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-9lhm4"] Jan 31 07:51:54 crc kubenswrapper[4826]: I0131 07:51:54.092413 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 31 07:51:54 crc kubenswrapper[4826]: I0131 07:51:54.100891 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-dfdc-account-create-update-r5qt4"] Jan 31 07:51:54 crc kubenswrapper[4826]: I0131 07:51:54.112059 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0832ed0-3245-4c91-875e-23f9d8307faf-operator-scripts\") pod \"glance-db-create-9887k\" (UID: \"e0832ed0-3245-4c91-875e-23f9d8307faf\") " pod="openstack/glance-db-create-9887k" Jan 31 07:51:54 crc kubenswrapper[4826]: I0131 07:51:54.112147 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd6jf\" (UniqueName: \"kubernetes.io/projected/e0832ed0-3245-4c91-875e-23f9d8307faf-kube-api-access-nd6jf\") pod \"glance-db-create-9887k\" (UID: \"e0832ed0-3245-4c91-875e-23f9d8307faf\") " pod="openstack/glance-db-create-9887k" Jan 31 07:51:54 crc kubenswrapper[4826]: W0131 07:51:54.118839 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93e4def3_bd04_4842_8f4d_d49888336a07.slice/crio-7837e99c2a647d0b088a81ec3fb1611d2538f5c0445afaf163221c337b64e7e0 WatchSource:0}: Error finding container 7837e99c2a647d0b088a81ec3fb1611d2538f5c0445afaf163221c337b64e7e0: Status 404 returned error can't find the container with id 7837e99c2a647d0b088a81ec3fb1611d2538f5c0445afaf163221c337b64e7e0 Jan 31 07:51:54 crc kubenswrapper[4826]: I0131 07:51:54.210009 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-2a1e-account-create-update-s98bk"] Jan 31 07:51:54 crc kubenswrapper[4826]: I0131 07:51:54.227899 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0832ed0-3245-4c91-875e-23f9d8307faf-operator-scripts\") pod \"glance-db-create-9887k\" (UID: \"e0832ed0-3245-4c91-875e-23f9d8307faf\") " pod="openstack/glance-db-create-9887k" Jan 31 07:51:54 crc kubenswrapper[4826]: I0131 07:51:54.228116 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/536279fb-86dc-4105-aca9-31abb3917b28-operator-scripts\") pod \"glance-dfdc-account-create-update-r5qt4\" (UID: \"536279fb-86dc-4105-aca9-31abb3917b28\") " pod="openstack/glance-dfdc-account-create-update-r5qt4" Jan 31 07:51:54 crc kubenswrapper[4826]: I0131 07:51:54.228225 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjzr5\" (UniqueName: \"kubernetes.io/projected/536279fb-86dc-4105-aca9-31abb3917b28-kube-api-access-hjzr5\") pod \"glance-dfdc-account-create-update-r5qt4\" (UID: \"536279fb-86dc-4105-aca9-31abb3917b28\") " pod="openstack/glance-dfdc-account-create-update-r5qt4" Jan 31 07:51:54 crc kubenswrapper[4826]: I0131 07:51:54.228328 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nd6jf\" (UniqueName: 
\"kubernetes.io/projected/e0832ed0-3245-4c91-875e-23f9d8307faf-kube-api-access-nd6jf\") pod \"glance-db-create-9887k\" (UID: \"e0832ed0-3245-4c91-875e-23f9d8307faf\") " pod="openstack/glance-db-create-9887k" Jan 31 07:51:54 crc kubenswrapper[4826]: I0131 07:51:54.236857 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0832ed0-3245-4c91-875e-23f9d8307faf-operator-scripts\") pod \"glance-db-create-9887k\" (UID: \"e0832ed0-3245-4c91-875e-23f9d8307faf\") " pod="openstack/glance-db-create-9887k" Jan 31 07:51:54 crc kubenswrapper[4826]: I0131 07:51:54.239092 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-nxksm"] Jan 31 07:51:54 crc kubenswrapper[4826]: I0131 07:51:54.263453 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nd6jf\" (UniqueName: \"kubernetes.io/projected/e0832ed0-3245-4c91-875e-23f9d8307faf-kube-api-access-nd6jf\") pod \"glance-db-create-9887k\" (UID: \"e0832ed0-3245-4c91-875e-23f9d8307faf\") " pod="openstack/glance-db-create-9887k" Jan 31 07:51:54 crc kubenswrapper[4826]: I0131 07:51:54.330445 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/536279fb-86dc-4105-aca9-31abb3917b28-operator-scripts\") pod \"glance-dfdc-account-create-update-r5qt4\" (UID: \"536279fb-86dc-4105-aca9-31abb3917b28\") " pod="openstack/glance-dfdc-account-create-update-r5qt4" Jan 31 07:51:54 crc kubenswrapper[4826]: I0131 07:51:54.330503 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjzr5\" (UniqueName: \"kubernetes.io/projected/536279fb-86dc-4105-aca9-31abb3917b28-kube-api-access-hjzr5\") pod \"glance-dfdc-account-create-update-r5qt4\" (UID: \"536279fb-86dc-4105-aca9-31abb3917b28\") " pod="openstack/glance-dfdc-account-create-update-r5qt4" Jan 31 07:51:54 crc kubenswrapper[4826]: I0131 07:51:54.332033 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/536279fb-86dc-4105-aca9-31abb3917b28-operator-scripts\") pod \"glance-dfdc-account-create-update-r5qt4\" (UID: \"536279fb-86dc-4105-aca9-31abb3917b28\") " pod="openstack/glance-dfdc-account-create-update-r5qt4" Jan 31 07:51:54 crc kubenswrapper[4826]: I0131 07:51:54.348028 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjzr5\" (UniqueName: \"kubernetes.io/projected/536279fb-86dc-4105-aca9-31abb3917b28-kube-api-access-hjzr5\") pod \"glance-dfdc-account-create-update-r5qt4\" (UID: \"536279fb-86dc-4105-aca9-31abb3917b28\") " pod="openstack/glance-dfdc-account-create-update-r5qt4" Jan 31 07:51:54 crc kubenswrapper[4826]: I0131 07:51:54.407311 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-9887k" Jan 31 07:51:54 crc kubenswrapper[4826]: I0131 07:51:54.444063 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-dfdc-account-create-update-r5qt4" Jan 31 07:51:54 crc kubenswrapper[4826]: I0131 07:51:54.466234 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-9lhm4" event={"ID":"93e4def3-bd04-4842-8f4d-d49888336a07","Type":"ContainerStarted","Data":"7837e99c2a647d0b088a81ec3fb1611d2538f5c0445afaf163221c337b64e7e0"} Jan 31 07:51:54 crc kubenswrapper[4826]: I0131 07:51:54.467042 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"17b9480e-d5e0-4478-9f5c-85caf6bb8f0a","Type":"ContainerStarted","Data":"356975f454eba9a6c3a514987d8573a15c283eb161cfe2c7f421da7db7acd4ac"} Jan 31 07:51:54 crc kubenswrapper[4826]: I0131 07:51:54.467750 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2a1e-account-create-update-s98bk" event={"ID":"de770cde-f91f-4153-bdff-54dd47878bd6","Type":"ContainerStarted","Data":"7878d5553cabbf3d86b327ccc9304aaaadb3b6bb50e60fdd06425040c7b8f96f"} Jan 31 07:51:54 crc kubenswrapper[4826]: I0131 07:51:54.469020 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-nxksm" event={"ID":"526b4807-5bd0-4aff-837d-31afeb09aef6","Type":"ContainerStarted","Data":"a4a59b9ecaf8750d4ebc74db49a49116aadbcab4e35ed3ae94542bf60f2cb056"} Jan 31 07:51:54 crc kubenswrapper[4826]: I0131 07:51:54.540626 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-00b3-account-create-update-jp5gs"] Jan 31 07:51:54 crc kubenswrapper[4826]: I0131 07:51:54.590347 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 31 07:51:54 crc kubenswrapper[4826]: I0131 07:51:54.821034 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41df3d47-ec36-4229-a1d7-2e9259527105" path="/var/lib/kubelet/pods/41df3d47-ec36-4229-a1d7-2e9259527105/volumes" Jan 31 07:51:54 crc kubenswrapper[4826]: I0131 07:51:54.870306 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-9887k"] Jan 31 07:51:55 crc kubenswrapper[4826]: W0131 07:51:55.040621 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod536279fb_86dc_4105_aca9_31abb3917b28.slice/crio-7af736efe37df775f77dce9ff897640484dc08f7a8913b65ee6c9d98e80c6fa0 WatchSource:0}: Error finding container 7af736efe37df775f77dce9ff897640484dc08f7a8913b65ee6c9d98e80c6fa0: Status 404 returned error can't find the container with id 7af736efe37df775f77dce9ff897640484dc08f7a8913b65ee6c9d98e80c6fa0 Jan 31 07:51:55 crc kubenswrapper[4826]: I0131 07:51:55.042649 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-dfdc-account-create-update-r5qt4"] Jan 31 07:51:55 crc kubenswrapper[4826]: I0131 07:51:55.476286 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2a1e-account-create-update-s98bk" event={"ID":"de770cde-f91f-4153-bdff-54dd47878bd6","Type":"ContainerStarted","Data":"5bd3f229b8e2e4c9ff15c41a2298f019afb897ab3cfc3ed3b389de797d9adb56"} Jan 31 07:51:55 crc kubenswrapper[4826]: I0131 07:51:55.477247 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-dfdc-account-create-update-r5qt4" event={"ID":"536279fb-86dc-4105-aca9-31abb3917b28","Type":"ContainerStarted","Data":"7af736efe37df775f77dce9ff897640484dc08f7a8913b65ee6c9d98e80c6fa0"} Jan 31 07:51:55 crc kubenswrapper[4826]: I0131 07:51:55.478610 4826 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-9887k" event={"ID":"e0832ed0-3245-4c91-875e-23f9d8307faf","Type":"ContainerStarted","Data":"d5fce0b920aebb899b125a45ba2da92b12dc3ac91131541d613b5641781f6fa7"} Jan 31 07:51:55 crc kubenswrapper[4826]: I0131 07:51:55.488905 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-00b3-account-create-update-jp5gs" event={"ID":"6acbf5f6-ffdb-48e3-9ddf-b2667b4f84e3","Type":"ContainerStarted","Data":"22f5cd160389b55e5db39c51327bd2e26ca3d20e0dbaf8cb04017aa91ee8683d"} Jan 31 07:51:55 crc kubenswrapper[4826]: I0131 07:51:55.758071 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 31 07:51:56 crc kubenswrapper[4826]: I0131 07:51:56.495687 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-nxksm" event={"ID":"526b4807-5bd0-4aff-837d-31afeb09aef6","Type":"ContainerStarted","Data":"8f5fe60153e2312dfd5e3ca4b33183a6b476b1bf5b914d47dfda155e8840e867"} Jan 31 07:51:57 crc kubenswrapper[4826]: I0131 07:51:57.377296 4826 patch_prober.go:28] interesting pod/machine-config-daemon-8v6ng container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 07:51:57 crc kubenswrapper[4826]: I0131 07:51:57.377357 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 07:51:57 crc kubenswrapper[4826]: I0131 07:51:57.505867 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-00b3-account-create-update-jp5gs" event={"ID":"6acbf5f6-ffdb-48e3-9ddf-b2667b4f84e3","Type":"ContainerStarted","Data":"262fc9596d04fed9b91317f97fd2d0d40e79827140150552924c9d2f891d07af"} Jan 31 07:51:57 crc kubenswrapper[4826]: I0131 07:51:57.507531 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-9lhm4" event={"ID":"93e4def3-bd04-4842-8f4d-d49888336a07","Type":"ContainerStarted","Data":"8a49cc9d65af835722f8fe763b866f005db4f6b46dd562dd31848857f030af8f"} Jan 31 07:51:58 crc kubenswrapper[4826]: I0131 07:51:58.558599 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-9887k" event={"ID":"e0832ed0-3245-4c91-875e-23f9d8307faf","Type":"ContainerStarted","Data":"afba20f465478d29979122966484b03f9ba0617a3db22a81e9a1a76909d1bde5"} Jan 31 07:51:58 crc kubenswrapper[4826]: I0131 07:51:58.560352 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-dfdc-account-create-update-r5qt4" event={"ID":"536279fb-86dc-4105-aca9-31abb3917b28","Type":"ContainerStarted","Data":"d459fb0b2295af3265683150de512cd5389436fe306fda22e6f2e61d8fdcaf93"} Jan 31 07:51:58 crc kubenswrapper[4826]: I0131 07:51:58.578604 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-9887k" podStartSLOduration=5.57858515 podStartE2EDuration="5.57858515s" podCreationTimestamp="2026-01-31 07:51:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:51:58.572268807 +0000 UTC m=+950.426155166" 
watchObservedRunningTime="2026-01-31 07:51:58.57858515 +0000 UTC m=+950.432471509" Jan 31 07:51:58 crc kubenswrapper[4826]: I0131 07:51:58.585998 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-9lhm4" podStartSLOduration=5.585978013 podStartE2EDuration="5.585978013s" podCreationTimestamp="2026-01-31 07:51:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:51:58.58549897 +0000 UTC m=+950.439385329" watchObservedRunningTime="2026-01-31 07:51:58.585978013 +0000 UTC m=+950.439864372" Jan 31 07:51:58 crc kubenswrapper[4826]: I0131 07:51:58.623584 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-00b3-account-create-update-jp5gs" podStartSLOduration=5.623566615 podStartE2EDuration="5.623566615s" podCreationTimestamp="2026-01-31 07:51:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:51:58.611135124 +0000 UTC m=+950.465021483" watchObservedRunningTime="2026-01-31 07:51:58.623566615 +0000 UTC m=+950.477452974" Jan 31 07:51:58 crc kubenswrapper[4826]: I0131 07:51:58.627984 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-2a1e-account-create-update-s98bk" podStartSLOduration=5.627957376 podStartE2EDuration="5.627957376s" podCreationTimestamp="2026-01-31 07:51:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:51:58.622991619 +0000 UTC m=+950.476877978" watchObservedRunningTime="2026-01-31 07:51:58.627957376 +0000 UTC m=+950.481843725" Jan 31 07:51:59 crc kubenswrapper[4826]: I0131 07:51:59.583066 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-dfdc-account-create-update-r5qt4" podStartSLOduration=5.583045144 podStartE2EDuration="5.583045144s" podCreationTimestamp="2026-01-31 07:51:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:51:59.579045644 +0000 UTC m=+951.432932013" watchObservedRunningTime="2026-01-31 07:51:59.583045144 +0000 UTC m=+951.436931503" Jan 31 07:52:00 crc kubenswrapper[4826]: I0131 07:52:00.753655 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-l9n8d"] Jan 31 07:52:00 crc kubenswrapper[4826]: I0131 07:52:00.755332 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-l9n8d" Jan 31 07:52:00 crc kubenswrapper[4826]: I0131 07:52:00.758434 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 31 07:52:00 crc kubenswrapper[4826]: I0131 07:52:00.765152 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-l9n8d"] Jan 31 07:52:00 crc kubenswrapper[4826]: I0131 07:52:00.890927 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnb57\" (UniqueName: \"kubernetes.io/projected/6c849167-a9b5-4d73-a339-d1e8c8df922b-kube-api-access-jnb57\") pod \"root-account-create-update-l9n8d\" (UID: \"6c849167-a9b5-4d73-a339-d1e8c8df922b\") " pod="openstack/root-account-create-update-l9n8d" Jan 31 07:52:00 crc kubenswrapper[4826]: I0131 07:52:00.891041 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c849167-a9b5-4d73-a339-d1e8c8df922b-operator-scripts\") pod \"root-account-create-update-l9n8d\" (UID: \"6c849167-a9b5-4d73-a339-d1e8c8df922b\") " pod="openstack/root-account-create-update-l9n8d" Jan 31 07:52:00 crc kubenswrapper[4826]: I0131 07:52:00.992238 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnb57\" (UniqueName: \"kubernetes.io/projected/6c849167-a9b5-4d73-a339-d1e8c8df922b-kube-api-access-jnb57\") pod \"root-account-create-update-l9n8d\" (UID: \"6c849167-a9b5-4d73-a339-d1e8c8df922b\") " pod="openstack/root-account-create-update-l9n8d" Jan 31 07:52:00 crc kubenswrapper[4826]: I0131 07:52:00.992301 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c849167-a9b5-4d73-a339-d1e8c8df922b-operator-scripts\") pod \"root-account-create-update-l9n8d\" (UID: \"6c849167-a9b5-4d73-a339-d1e8c8df922b\") " pod="openstack/root-account-create-update-l9n8d" Jan 31 07:52:00 crc kubenswrapper[4826]: I0131 07:52:00.993049 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c849167-a9b5-4d73-a339-d1e8c8df922b-operator-scripts\") pod \"root-account-create-update-l9n8d\" (UID: \"6c849167-a9b5-4d73-a339-d1e8c8df922b\") " pod="openstack/root-account-create-update-l9n8d" Jan 31 07:52:01 crc kubenswrapper[4826]: I0131 07:52:01.011420 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnb57\" (UniqueName: \"kubernetes.io/projected/6c849167-a9b5-4d73-a339-d1e8c8df922b-kube-api-access-jnb57\") pod \"root-account-create-update-l9n8d\" (UID: \"6c849167-a9b5-4d73-a339-d1e8c8df922b\") " pod="openstack/root-account-create-update-l9n8d" Jan 31 07:52:01 crc kubenswrapper[4826]: I0131 07:52:01.120890 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-l9n8d" Jan 31 07:52:01 crc kubenswrapper[4826]: I0131 07:52:01.637159 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-nxksm" podStartSLOduration=8.63712539 podStartE2EDuration="8.63712539s" podCreationTimestamp="2026-01-31 07:51:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:52:01.623097205 +0000 UTC m=+953.476983604" watchObservedRunningTime="2026-01-31 07:52:01.63712539 +0000 UTC m=+953.491011789" Jan 31 07:52:04 crc kubenswrapper[4826]: I0131 07:52:04.614513 4826 generic.go:334] "Generic (PLEG): container finished" podID="526b4807-5bd0-4aff-837d-31afeb09aef6" containerID="8f5fe60153e2312dfd5e3ca4b33183a6b476b1bf5b914d47dfda155e8840e867" exitCode=0 Jan 31 07:52:04 crc kubenswrapper[4826]: I0131 07:52:04.614628 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-nxksm" event={"ID":"526b4807-5bd0-4aff-837d-31afeb09aef6","Type":"ContainerDied","Data":"8f5fe60153e2312dfd5e3ca4b33183a6b476b1bf5b914d47dfda155e8840e867"} Jan 31 07:52:06 crc kubenswrapper[4826]: I0131 07:52:06.630685 4826 generic.go:334] "Generic (PLEG): container finished" podID="536279fb-86dc-4105-aca9-31abb3917b28" containerID="d459fb0b2295af3265683150de512cd5389436fe306fda22e6f2e61d8fdcaf93" exitCode=0 Jan 31 07:52:06 crc kubenswrapper[4826]: I0131 07:52:06.630773 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-dfdc-account-create-update-r5qt4" event={"ID":"536279fb-86dc-4105-aca9-31abb3917b28","Type":"ContainerDied","Data":"d459fb0b2295af3265683150de512cd5389436fe306fda22e6f2e61d8fdcaf93"} Jan 31 07:52:06 crc kubenswrapper[4826]: I0131 07:52:06.634656 4826 generic.go:334] "Generic (PLEG): container finished" podID="e0832ed0-3245-4c91-875e-23f9d8307faf" containerID="afba20f465478d29979122966484b03f9ba0617a3db22a81e9a1a76909d1bde5" exitCode=0 Jan 31 07:52:06 crc kubenswrapper[4826]: I0131 07:52:06.634711 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-9887k" event={"ID":"e0832ed0-3245-4c91-875e-23f9d8307faf","Type":"ContainerDied","Data":"afba20f465478d29979122966484b03f9ba0617a3db22a81e9a1a76909d1bde5"} Jan 31 07:52:06 crc kubenswrapper[4826]: I0131 07:52:06.637227 4826 generic.go:334] "Generic (PLEG): container finished" podID="6acbf5f6-ffdb-48e3-9ddf-b2667b4f84e3" containerID="262fc9596d04fed9b91317f97fd2d0d40e79827140150552924c9d2f891d07af" exitCode=0 Jan 31 07:52:06 crc kubenswrapper[4826]: I0131 07:52:06.637286 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-00b3-account-create-update-jp5gs" event={"ID":"6acbf5f6-ffdb-48e3-9ddf-b2667b4f84e3","Type":"ContainerDied","Data":"262fc9596d04fed9b91317f97fd2d0d40e79827140150552924c9d2f891d07af"} Jan 31 07:52:06 crc kubenswrapper[4826]: I0131 07:52:06.638943 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-nxksm" event={"ID":"526b4807-5bd0-4aff-837d-31afeb09aef6","Type":"ContainerDied","Data":"a4a59b9ecaf8750d4ebc74db49a49116aadbcab4e35ed3ae94542bf60f2cb056"} Jan 31 07:52:06 crc kubenswrapper[4826]: I0131 07:52:06.638992 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4a59b9ecaf8750d4ebc74db49a49116aadbcab4e35ed3ae94542bf60f2cb056" Jan 31 07:52:06 crc kubenswrapper[4826]: I0131 07:52:06.640915 4826 generic.go:334] "Generic (PLEG): 
container finished" podID="93e4def3-bd04-4842-8f4d-d49888336a07" containerID="8a49cc9d65af835722f8fe763b866f005db4f6b46dd562dd31848857f030af8f" exitCode=0 Jan 31 07:52:06 crc kubenswrapper[4826]: I0131 07:52:06.640984 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-9lhm4" event={"ID":"93e4def3-bd04-4842-8f4d-d49888336a07","Type":"ContainerDied","Data":"8a49cc9d65af835722f8fe763b866f005db4f6b46dd562dd31848857f030af8f"} Jan 31 07:52:06 crc kubenswrapper[4826]: I0131 07:52:06.644877 4826 generic.go:334] "Generic (PLEG): container finished" podID="de770cde-f91f-4153-bdff-54dd47878bd6" containerID="5bd3f229b8e2e4c9ff15c41a2298f019afb897ab3cfc3ed3b389de797d9adb56" exitCode=0 Jan 31 07:52:06 crc kubenswrapper[4826]: I0131 07:52:06.645073 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2a1e-account-create-update-s98bk" event={"ID":"de770cde-f91f-4153-bdff-54dd47878bd6","Type":"ContainerDied","Data":"5bd3f229b8e2e4c9ff15c41a2298f019afb897ab3cfc3ed3b389de797d9adb56"} Jan 31 07:52:06 crc kubenswrapper[4826]: I0131 07:52:06.830745 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-l9n8d"] Jan 31 07:52:06 crc kubenswrapper[4826]: W0131 07:52:06.837912 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c849167_a9b5_4d73_a339_d1e8c8df922b.slice/crio-494bc1c77e3f5a3ce2cce6f0167e34bd42e4494a2a0cb3a44e42e1b9dbe279cd WatchSource:0}: Error finding container 494bc1c77e3f5a3ce2cce6f0167e34bd42e4494a2a0cb3a44e42e1b9dbe279cd: Status 404 returned error can't find the container with id 494bc1c77e3f5a3ce2cce6f0167e34bd42e4494a2a0cb3a44e42e1b9dbe279cd Jan 31 07:52:07 crc kubenswrapper[4826]: I0131 07:52:07.002743 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-nxksm" Jan 31 07:52:07 crc kubenswrapper[4826]: I0131 07:52:07.098283 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/526b4807-5bd0-4aff-837d-31afeb09aef6-operator-scripts\") pod \"526b4807-5bd0-4aff-837d-31afeb09aef6\" (UID: \"526b4807-5bd0-4aff-837d-31afeb09aef6\") " Jan 31 07:52:07 crc kubenswrapper[4826]: I0131 07:52:07.098372 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nv658\" (UniqueName: \"kubernetes.io/projected/526b4807-5bd0-4aff-837d-31afeb09aef6-kube-api-access-nv658\") pod \"526b4807-5bd0-4aff-837d-31afeb09aef6\" (UID: \"526b4807-5bd0-4aff-837d-31afeb09aef6\") " Jan 31 07:52:07 crc kubenswrapper[4826]: I0131 07:52:07.099773 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/526b4807-5bd0-4aff-837d-31afeb09aef6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "526b4807-5bd0-4aff-837d-31afeb09aef6" (UID: "526b4807-5bd0-4aff-837d-31afeb09aef6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:52:07 crc kubenswrapper[4826]: I0131 07:52:07.106898 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/526b4807-5bd0-4aff-837d-31afeb09aef6-kube-api-access-nv658" (OuterVolumeSpecName: "kube-api-access-nv658") pod "526b4807-5bd0-4aff-837d-31afeb09aef6" (UID: "526b4807-5bd0-4aff-837d-31afeb09aef6"). InnerVolumeSpecName "kube-api-access-nv658". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:52:07 crc kubenswrapper[4826]: I0131 07:52:07.200307 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/526b4807-5bd0-4aff-837d-31afeb09aef6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:07 crc kubenswrapper[4826]: I0131 07:52:07.200664 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nv658\" (UniqueName: \"kubernetes.io/projected/526b4807-5bd0-4aff-837d-31afeb09aef6-kube-api-access-nv658\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:07 crc kubenswrapper[4826]: I0131 07:52:07.654748 4826 generic.go:334] "Generic (PLEG): container finished" podID="dd8eba63-06f5-46aa-abfa-d0ec1b11725f" containerID="227cf542ad4417cc29c909ad4bdcab28ae1bc8a65322e72753fb3320afc44f1d" exitCode=0 Jan 31 07:52:07 crc kubenswrapper[4826]: I0131 07:52:07.654802 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-7msrb" event={"ID":"dd8eba63-06f5-46aa-abfa-d0ec1b11725f","Type":"ContainerDied","Data":"227cf542ad4417cc29c909ad4bdcab28ae1bc8a65322e72753fb3320afc44f1d"} Jan 31 07:52:07 crc kubenswrapper[4826]: I0131 07:52:07.656403 4826 generic.go:334] "Generic (PLEG): container finished" podID="6c849167-a9b5-4d73-a339-d1e8c8df922b" containerID="50b3a5f2a093b9eff97b8b9c850b0d539cc6e809d924ecc0d37767416e36d7fd" exitCode=0 Jan 31 07:52:07 crc kubenswrapper[4826]: I0131 07:52:07.656461 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-l9n8d" event={"ID":"6c849167-a9b5-4d73-a339-d1e8c8df922b","Type":"ContainerDied","Data":"50b3a5f2a093b9eff97b8b9c850b0d539cc6e809d924ecc0d37767416e36d7fd"} Jan 31 07:52:07 crc kubenswrapper[4826]: I0131 07:52:07.656500 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-l9n8d" event={"ID":"6c849167-a9b5-4d73-a339-d1e8c8df922b","Type":"ContainerStarted","Data":"494bc1c77e3f5a3ce2cce6f0167e34bd42e4494a2a0cb3a44e42e1b9dbe279cd"} Jan 31 07:52:07 crc kubenswrapper[4826]: I0131 07:52:07.657883 4826 generic.go:334] "Generic (PLEG): container finished" podID="f4a66b21-7551-4e7f-87dc-81bbb08f3c3e" containerID="3a37e83328b74a9d37b99f4969e9b50b4162ebc76fc5fdf14b8fabcfd6a2d426" exitCode=0 Jan 31 07:52:07 crc kubenswrapper[4826]: I0131 07:52:07.657959 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-48nx2" event={"ID":"f4a66b21-7551-4e7f-87dc-81bbb08f3c3e","Type":"ContainerDied","Data":"3a37e83328b74a9d37b99f4969e9b50b4162ebc76fc5fdf14b8fabcfd6a2d426"} Jan 31 07:52:07 crc kubenswrapper[4826]: I0131 07:52:07.660885 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"17b9480e-d5e0-4478-9f5c-85caf6bb8f0a","Type":"ContainerStarted","Data":"8d8c239596f0a349369753bec4a3820a54f9031cadb9b8380c33fd0a5bf6bb7b"} Jan 31 07:52:07 crc kubenswrapper[4826]: I0131 07:52:07.660915 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"17b9480e-d5e0-4478-9f5c-85caf6bb8f0a","Type":"ContainerStarted","Data":"8dc980c9fec8972c22184d94a5d89afd51c5f1a8a35be30d38bdc5c263a84e52"} Jan 31 07:52:07 crc kubenswrapper[4826]: I0131 07:52:07.661083 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-nxksm" Jan 31 07:52:07 crc kubenswrapper[4826]: I0131 07:52:07.756953 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.082405127 podStartE2EDuration="15.756931305s" podCreationTimestamp="2026-01-31 07:51:52 +0000 UTC" firstStartedPulling="2026-01-31 07:51:53.653703978 +0000 UTC m=+945.507590337" lastFinishedPulling="2026-01-31 07:52:06.328230146 +0000 UTC m=+958.182116515" observedRunningTime="2026-01-31 07:52:07.753630885 +0000 UTC m=+959.607517244" watchObservedRunningTime="2026-01-31 07:52:07.756931305 +0000 UTC m=+959.610817664" Jan 31 07:52:08 crc kubenswrapper[4826]: I0131 07:52:08.647641 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-9887k" Jan 31 07:52:08 crc kubenswrapper[4826]: I0131 07:52:08.658635 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0832ed0-3245-4c91-875e-23f9d8307faf-operator-scripts\") pod \"e0832ed0-3245-4c91-875e-23f9d8307faf\" (UID: \"e0832ed0-3245-4c91-875e-23f9d8307faf\") " Jan 31 07:52:08 crc kubenswrapper[4826]: I0131 07:52:08.658790 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nd6jf\" (UniqueName: \"kubernetes.io/projected/e0832ed0-3245-4c91-875e-23f9d8307faf-kube-api-access-nd6jf\") pod \"e0832ed0-3245-4c91-875e-23f9d8307faf\" (UID: \"e0832ed0-3245-4c91-875e-23f9d8307faf\") " Jan 31 07:52:08 crc kubenswrapper[4826]: I0131 07:52:08.659358 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0832ed0-3245-4c91-875e-23f9d8307faf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e0832ed0-3245-4c91-875e-23f9d8307faf" (UID: "e0832ed0-3245-4c91-875e-23f9d8307faf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:52:08 crc kubenswrapper[4826]: I0131 07:52:08.674309 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0832ed0-3245-4c91-875e-23f9d8307faf-kube-api-access-nd6jf" (OuterVolumeSpecName: "kube-api-access-nd6jf") pod "e0832ed0-3245-4c91-875e-23f9d8307faf" (UID: "e0832ed0-3245-4c91-875e-23f9d8307faf"). InnerVolumeSpecName "kube-api-access-nd6jf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:52:08 crc kubenswrapper[4826]: I0131 07:52:08.697952 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-48nx2" event={"ID":"f4a66b21-7551-4e7f-87dc-81bbb08f3c3e","Type":"ContainerStarted","Data":"49c43005e37ee6e1066f09aa88a17e23bf4055b3c6cab8884b1861455bd0db7e"} Jan 31 07:52:08 crc kubenswrapper[4826]: I0131 07:52:08.698286 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-48nx2" Jan 31 07:52:08 crc kubenswrapper[4826]: I0131 07:52:08.702937 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-9887k" Jan 31 07:52:08 crc kubenswrapper[4826]: I0131 07:52:08.703034 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-9887k" event={"ID":"e0832ed0-3245-4c91-875e-23f9d8307faf","Type":"ContainerDied","Data":"d5fce0b920aebb899b125a45ba2da92b12dc3ac91131541d613b5641781f6fa7"} Jan 31 07:52:08 crc kubenswrapper[4826]: I0131 07:52:08.703064 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5fce0b920aebb899b125a45ba2da92b12dc3ac91131541d613b5641781f6fa7" Jan 31 07:52:08 crc kubenswrapper[4826]: I0131 07:52:08.710834 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-7msrb" event={"ID":"dd8eba63-06f5-46aa-abfa-d0ec1b11725f","Type":"ContainerStarted","Data":"1a3986f3b5d5947f889476cd99175d736d19b78dac848f3758a95ecf4412611b"} Jan 31 07:52:08 crc kubenswrapper[4826]: I0131 07:52:08.710889 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bf47b49b7-7msrb" Jan 31 07:52:08 crc kubenswrapper[4826]: I0131 07:52:08.712213 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 31 07:52:08 crc kubenswrapper[4826]: I0131 07:52:08.724911 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-48nx2" podStartSLOduration=4.450390488 podStartE2EDuration="17.724895396s" podCreationTimestamp="2026-01-31 07:51:51 +0000 UTC" firstStartedPulling="2026-01-31 07:51:53.05343803 +0000 UTC m=+944.907324389" lastFinishedPulling="2026-01-31 07:52:06.327942898 +0000 UTC m=+958.181829297" observedRunningTime="2026-01-31 07:52:08.724506025 +0000 UTC m=+960.578392384" watchObservedRunningTime="2026-01-31 07:52:08.724895396 +0000 UTC m=+960.578781755" Jan 31 07:52:08 crc kubenswrapper[4826]: I0131 07:52:08.750661 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bf47b49b7-7msrb" podStartSLOduration=4.392081788 podStartE2EDuration="17.750633603s" podCreationTimestamp="2026-01-31 07:51:51 +0000 UTC" firstStartedPulling="2026-01-31 07:51:52.967652005 +0000 UTC m=+944.821538364" lastFinishedPulling="2026-01-31 07:52:06.32620382 +0000 UTC m=+958.180090179" observedRunningTime="2026-01-31 07:52:08.742009806 +0000 UTC m=+960.595896165" watchObservedRunningTime="2026-01-31 07:52:08.750633603 +0000 UTC m=+960.604519972" Jan 31 07:52:08 crc kubenswrapper[4826]: I0131 07:52:08.765374 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0832ed0-3245-4c91-875e-23f9d8307faf-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:08 crc kubenswrapper[4826]: I0131 07:52:08.765409 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nd6jf\" (UniqueName: \"kubernetes.io/projected/e0832ed0-3245-4c91-875e-23f9d8307faf-kube-api-access-nd6jf\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:08 crc kubenswrapper[4826]: I0131 07:52:08.936004 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2a1e-account-create-update-s98bk" Jan 31 07:52:08 crc kubenswrapper[4826]: I0131 07:52:08.943474 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-9lhm4" Jan 31 07:52:08 crc kubenswrapper[4826]: I0131 07:52:08.953562 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-00b3-account-create-update-jp5gs" Jan 31 07:52:08 crc kubenswrapper[4826]: I0131 07:52:08.969707 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbbjv\" (UniqueName: \"kubernetes.io/projected/93e4def3-bd04-4842-8f4d-d49888336a07-kube-api-access-xbbjv\") pod \"93e4def3-bd04-4842-8f4d-d49888336a07\" (UID: \"93e4def3-bd04-4842-8f4d-d49888336a07\") " Jan 31 07:52:08 crc kubenswrapper[4826]: I0131 07:52:08.969772 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pqj6\" (UniqueName: \"kubernetes.io/projected/6acbf5f6-ffdb-48e3-9ddf-b2667b4f84e3-kube-api-access-6pqj6\") pod \"6acbf5f6-ffdb-48e3-9ddf-b2667b4f84e3\" (UID: \"6acbf5f6-ffdb-48e3-9ddf-b2667b4f84e3\") " Jan 31 07:52:08 crc kubenswrapper[4826]: I0131 07:52:08.969824 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93e4def3-bd04-4842-8f4d-d49888336a07-operator-scripts\") pod \"93e4def3-bd04-4842-8f4d-d49888336a07\" (UID: \"93e4def3-bd04-4842-8f4d-d49888336a07\") " Jan 31 07:52:08 crc kubenswrapper[4826]: I0131 07:52:08.969901 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m87xs\" (UniqueName: \"kubernetes.io/projected/de770cde-f91f-4153-bdff-54dd47878bd6-kube-api-access-m87xs\") pod \"de770cde-f91f-4153-bdff-54dd47878bd6\" (UID: \"de770cde-f91f-4153-bdff-54dd47878bd6\") " Jan 31 07:52:08 crc kubenswrapper[4826]: I0131 07:52:08.969985 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de770cde-f91f-4153-bdff-54dd47878bd6-operator-scripts\") pod \"de770cde-f91f-4153-bdff-54dd47878bd6\" (UID: \"de770cde-f91f-4153-bdff-54dd47878bd6\") " Jan 31 07:52:08 crc kubenswrapper[4826]: I0131 07:52:08.970059 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6acbf5f6-ffdb-48e3-9ddf-b2667b4f84e3-operator-scripts\") pod \"6acbf5f6-ffdb-48e3-9ddf-b2667b4f84e3\" (UID: \"6acbf5f6-ffdb-48e3-9ddf-b2667b4f84e3\") " Jan 31 07:52:08 crc kubenswrapper[4826]: I0131 07:52:08.971565 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6acbf5f6-ffdb-48e3-9ddf-b2667b4f84e3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6acbf5f6-ffdb-48e3-9ddf-b2667b4f84e3" (UID: "6acbf5f6-ffdb-48e3-9ddf-b2667b4f84e3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:52:08 crc kubenswrapper[4826]: I0131 07:52:08.974296 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93e4def3-bd04-4842-8f4d-d49888336a07-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "93e4def3-bd04-4842-8f4d-d49888336a07" (UID: "93e4def3-bd04-4842-8f4d-d49888336a07"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:52:08 crc kubenswrapper[4826]: I0131 07:52:08.974727 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de770cde-f91f-4153-bdff-54dd47878bd6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "de770cde-f91f-4153-bdff-54dd47878bd6" (UID: "de770cde-f91f-4153-bdff-54dd47878bd6"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:52:08 crc kubenswrapper[4826]: I0131 07:52:08.974868 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-dfdc-account-create-update-r5qt4" Jan 31 07:52:08 crc kubenswrapper[4826]: I0131 07:52:08.980741 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93e4def3-bd04-4842-8f4d-d49888336a07-kube-api-access-xbbjv" (OuterVolumeSpecName: "kube-api-access-xbbjv") pod "93e4def3-bd04-4842-8f4d-d49888336a07" (UID: "93e4def3-bd04-4842-8f4d-d49888336a07"). InnerVolumeSpecName "kube-api-access-xbbjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:52:08 crc kubenswrapper[4826]: I0131 07:52:08.981236 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6acbf5f6-ffdb-48e3-9ddf-b2667b4f84e3-kube-api-access-6pqj6" (OuterVolumeSpecName: "kube-api-access-6pqj6") pod "6acbf5f6-ffdb-48e3-9ddf-b2667b4f84e3" (UID: "6acbf5f6-ffdb-48e3-9ddf-b2667b4f84e3"). InnerVolumeSpecName "kube-api-access-6pqj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:52:08 crc kubenswrapper[4826]: I0131 07:52:08.983696 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de770cde-f91f-4153-bdff-54dd47878bd6-kube-api-access-m87xs" (OuterVolumeSpecName: "kube-api-access-m87xs") pod "de770cde-f91f-4153-bdff-54dd47878bd6" (UID: "de770cde-f91f-4153-bdff-54dd47878bd6"). InnerVolumeSpecName "kube-api-access-m87xs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:52:09 crc kubenswrapper[4826]: I0131 07:52:09.071832 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjzr5\" (UniqueName: \"kubernetes.io/projected/536279fb-86dc-4105-aca9-31abb3917b28-kube-api-access-hjzr5\") pod \"536279fb-86dc-4105-aca9-31abb3917b28\" (UID: \"536279fb-86dc-4105-aca9-31abb3917b28\") " Jan 31 07:52:09 crc kubenswrapper[4826]: I0131 07:52:09.072252 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/536279fb-86dc-4105-aca9-31abb3917b28-operator-scripts\") pod \"536279fb-86dc-4105-aca9-31abb3917b28\" (UID: \"536279fb-86dc-4105-aca9-31abb3917b28\") " Jan 31 07:52:09 crc kubenswrapper[4826]: I0131 07:52:09.072634 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbbjv\" (UniqueName: \"kubernetes.io/projected/93e4def3-bd04-4842-8f4d-d49888336a07-kube-api-access-xbbjv\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:09 crc kubenswrapper[4826]: I0131 07:52:09.072658 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pqj6\" (UniqueName: \"kubernetes.io/projected/6acbf5f6-ffdb-48e3-9ddf-b2667b4f84e3-kube-api-access-6pqj6\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:09 crc kubenswrapper[4826]: I0131 07:52:09.072672 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93e4def3-bd04-4842-8f4d-d49888336a07-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:09 crc kubenswrapper[4826]: I0131 07:52:09.072684 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m87xs\" (UniqueName: \"kubernetes.io/projected/de770cde-f91f-4153-bdff-54dd47878bd6-kube-api-access-m87xs\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:09 crc kubenswrapper[4826]: I0131 07:52:09.072696 
4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de770cde-f91f-4153-bdff-54dd47878bd6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:09 crc kubenswrapper[4826]: I0131 07:52:09.072707 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6acbf5f6-ffdb-48e3-9ddf-b2667b4f84e3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:09 crc kubenswrapper[4826]: I0131 07:52:09.073055 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/536279fb-86dc-4105-aca9-31abb3917b28-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "536279fb-86dc-4105-aca9-31abb3917b28" (UID: "536279fb-86dc-4105-aca9-31abb3917b28"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:52:09 crc kubenswrapper[4826]: I0131 07:52:09.097389 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/536279fb-86dc-4105-aca9-31abb3917b28-kube-api-access-hjzr5" (OuterVolumeSpecName: "kube-api-access-hjzr5") pod "536279fb-86dc-4105-aca9-31abb3917b28" (UID: "536279fb-86dc-4105-aca9-31abb3917b28"). InnerVolumeSpecName "kube-api-access-hjzr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:52:09 crc kubenswrapper[4826]: I0131 07:52:09.119715 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-l9n8d" Jan 31 07:52:09 crc kubenswrapper[4826]: I0131 07:52:09.173672 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c849167-a9b5-4d73-a339-d1e8c8df922b-operator-scripts\") pod \"6c849167-a9b5-4d73-a339-d1e8c8df922b\" (UID: \"6c849167-a9b5-4d73-a339-d1e8c8df922b\") " Jan 31 07:52:09 crc kubenswrapper[4826]: I0131 07:52:09.173829 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnb57\" (UniqueName: \"kubernetes.io/projected/6c849167-a9b5-4d73-a339-d1e8c8df922b-kube-api-access-jnb57\") pod \"6c849167-a9b5-4d73-a339-d1e8c8df922b\" (UID: \"6c849167-a9b5-4d73-a339-d1e8c8df922b\") " Jan 31 07:52:09 crc kubenswrapper[4826]: I0131 07:52:09.174464 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjzr5\" (UniqueName: \"kubernetes.io/projected/536279fb-86dc-4105-aca9-31abb3917b28-kube-api-access-hjzr5\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:09 crc kubenswrapper[4826]: I0131 07:52:09.174485 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/536279fb-86dc-4105-aca9-31abb3917b28-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:09 crc kubenswrapper[4826]: I0131 07:52:09.176676 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c849167-a9b5-4d73-a339-d1e8c8df922b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6c849167-a9b5-4d73-a339-d1e8c8df922b" (UID: "6c849167-a9b5-4d73-a339-d1e8c8df922b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:52:09 crc kubenswrapper[4826]: I0131 07:52:09.179273 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c849167-a9b5-4d73-a339-d1e8c8df922b-kube-api-access-jnb57" (OuterVolumeSpecName: "kube-api-access-jnb57") pod "6c849167-a9b5-4d73-a339-d1e8c8df922b" (UID: "6c849167-a9b5-4d73-a339-d1e8c8df922b"). InnerVolumeSpecName "kube-api-access-jnb57". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:52:09 crc kubenswrapper[4826]: I0131 07:52:09.275766 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c849167-a9b5-4d73-a339-d1e8c8df922b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:09 crc kubenswrapper[4826]: I0131 07:52:09.275795 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnb57\" (UniqueName: \"kubernetes.io/projected/6c849167-a9b5-4d73-a339-d1e8c8df922b-kube-api-access-jnb57\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:09 crc kubenswrapper[4826]: I0131 07:52:09.719254 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-dfdc-account-create-update-r5qt4" Jan 31 07:52:09 crc kubenswrapper[4826]: I0131 07:52:09.719245 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-dfdc-account-create-update-r5qt4" event={"ID":"536279fb-86dc-4105-aca9-31abb3917b28","Type":"ContainerDied","Data":"7af736efe37df775f77dce9ff897640484dc08f7a8913b65ee6c9d98e80c6fa0"} Jan 31 07:52:09 crc kubenswrapper[4826]: I0131 07:52:09.719434 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7af736efe37df775f77dce9ff897640484dc08f7a8913b65ee6c9d98e80c6fa0" Jan 31 07:52:09 crc kubenswrapper[4826]: I0131 07:52:09.720439 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-00b3-account-create-update-jp5gs" event={"ID":"6acbf5f6-ffdb-48e3-9ddf-b2667b4f84e3","Type":"ContainerDied","Data":"22f5cd160389b55e5db39c51327bd2e26ca3d20e0dbaf8cb04017aa91ee8683d"} Jan 31 07:52:09 crc kubenswrapper[4826]: I0131 07:52:09.720455 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22f5cd160389b55e5db39c51327bd2e26ca3d20e0dbaf8cb04017aa91ee8683d" Jan 31 07:52:09 crc kubenswrapper[4826]: I0131 07:52:09.720496 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-00b3-account-create-update-jp5gs" Jan 31 07:52:09 crc kubenswrapper[4826]: I0131 07:52:09.722875 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-9lhm4" event={"ID":"93e4def3-bd04-4842-8f4d-d49888336a07","Type":"ContainerDied","Data":"7837e99c2a647d0b088a81ec3fb1611d2538f5c0445afaf163221c337b64e7e0"} Jan 31 07:52:09 crc kubenswrapper[4826]: I0131 07:52:09.722911 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7837e99c2a647d0b088a81ec3fb1611d2538f5c0445afaf163221c337b64e7e0" Jan 31 07:52:09 crc kubenswrapper[4826]: I0131 07:52:09.722958 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-9lhm4" Jan 31 07:52:09 crc kubenswrapper[4826]: I0131 07:52:09.726320 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-2a1e-account-create-update-s98bk" Jan 31 07:52:09 crc kubenswrapper[4826]: I0131 07:52:09.726307 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2a1e-account-create-update-s98bk" event={"ID":"de770cde-f91f-4153-bdff-54dd47878bd6","Type":"ContainerDied","Data":"7878d5553cabbf3d86b327ccc9304aaaadb3b6bb50e60fdd06425040c7b8f96f"} Jan 31 07:52:09 crc kubenswrapper[4826]: I0131 07:52:09.726502 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7878d5553cabbf3d86b327ccc9304aaaadb3b6bb50e60fdd06425040c7b8f96f" Jan 31 07:52:09 crc kubenswrapper[4826]: I0131 07:52:09.728076 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-l9n8d" event={"ID":"6c849167-a9b5-4d73-a339-d1e8c8df922b","Type":"ContainerDied","Data":"494bc1c77e3f5a3ce2cce6f0167e34bd42e4494a2a0cb3a44e42e1b9dbe279cd"} Jan 31 07:52:09 crc kubenswrapper[4826]: I0131 07:52:09.728126 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-l9n8d" Jan 31 07:52:09 crc kubenswrapper[4826]: I0131 07:52:09.728166 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="494bc1c77e3f5a3ce2cce6f0167e34bd42e4494a2a0cb3a44e42e1b9dbe279cd" Jan 31 07:52:12 crc kubenswrapper[4826]: I0131 07:52:12.201560 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-l9n8d"] Jan 31 07:52:12 crc kubenswrapper[4826]: I0131 07:52:12.207813 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-l9n8d"] Jan 31 07:52:12 crc kubenswrapper[4826]: I0131 07:52:12.816787 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c849167-a9b5-4d73-a339-d1e8c8df922b" path="/var/lib/kubelet/pods/6c849167-a9b5-4d73-a339-d1e8c8df922b/volumes" Jan 31 07:52:14 crc kubenswrapper[4826]: I0131 07:52:14.207924 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-x66vb"] Jan 31 07:52:14 crc kubenswrapper[4826]: E0131 07:52:14.208247 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de770cde-f91f-4153-bdff-54dd47878bd6" containerName="mariadb-account-create-update" Jan 31 07:52:14 crc kubenswrapper[4826]: I0131 07:52:14.208260 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="de770cde-f91f-4153-bdff-54dd47878bd6" containerName="mariadb-account-create-update" Jan 31 07:52:14 crc kubenswrapper[4826]: E0131 07:52:14.208275 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93e4def3-bd04-4842-8f4d-d49888336a07" containerName="mariadb-database-create" Jan 31 07:52:14 crc kubenswrapper[4826]: I0131 07:52:14.208281 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="93e4def3-bd04-4842-8f4d-d49888336a07" containerName="mariadb-database-create" Jan 31 07:52:14 crc kubenswrapper[4826]: E0131 07:52:14.208311 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6acbf5f6-ffdb-48e3-9ddf-b2667b4f84e3" containerName="mariadb-account-create-update" Jan 31 07:52:14 crc kubenswrapper[4826]: I0131 07:52:14.208317 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="6acbf5f6-ffdb-48e3-9ddf-b2667b4f84e3" containerName="mariadb-account-create-update" Jan 31 07:52:14 crc kubenswrapper[4826]: E0131 07:52:14.208332 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0832ed0-3245-4c91-875e-23f9d8307faf" 
containerName="mariadb-database-create" Jan 31 07:52:14 crc kubenswrapper[4826]: I0131 07:52:14.208337 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0832ed0-3245-4c91-875e-23f9d8307faf" containerName="mariadb-database-create" Jan 31 07:52:14 crc kubenswrapper[4826]: E0131 07:52:14.208351 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c849167-a9b5-4d73-a339-d1e8c8df922b" containerName="mariadb-account-create-update" Jan 31 07:52:14 crc kubenswrapper[4826]: I0131 07:52:14.208357 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c849167-a9b5-4d73-a339-d1e8c8df922b" containerName="mariadb-account-create-update" Jan 31 07:52:14 crc kubenswrapper[4826]: E0131 07:52:14.208367 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="536279fb-86dc-4105-aca9-31abb3917b28" containerName="mariadb-account-create-update" Jan 31 07:52:14 crc kubenswrapper[4826]: I0131 07:52:14.208372 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="536279fb-86dc-4105-aca9-31abb3917b28" containerName="mariadb-account-create-update" Jan 31 07:52:14 crc kubenswrapper[4826]: E0131 07:52:14.208383 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="526b4807-5bd0-4aff-837d-31afeb09aef6" containerName="mariadb-database-create" Jan 31 07:52:14 crc kubenswrapper[4826]: I0131 07:52:14.208389 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="526b4807-5bd0-4aff-837d-31afeb09aef6" containerName="mariadb-database-create" Jan 31 07:52:14 crc kubenswrapper[4826]: I0131 07:52:14.208513 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="de770cde-f91f-4153-bdff-54dd47878bd6" containerName="mariadb-account-create-update" Jan 31 07:52:14 crc kubenswrapper[4826]: I0131 07:52:14.208526 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0832ed0-3245-4c91-875e-23f9d8307faf" containerName="mariadb-database-create" Jan 31 07:52:14 crc kubenswrapper[4826]: I0131 07:52:14.208534 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="6acbf5f6-ffdb-48e3-9ddf-b2667b4f84e3" containerName="mariadb-account-create-update" Jan 31 07:52:14 crc kubenswrapper[4826]: I0131 07:52:14.208545 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="536279fb-86dc-4105-aca9-31abb3917b28" containerName="mariadb-account-create-update" Jan 31 07:52:14 crc kubenswrapper[4826]: I0131 07:52:14.208556 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="526b4807-5bd0-4aff-837d-31afeb09aef6" containerName="mariadb-database-create" Jan 31 07:52:14 crc kubenswrapper[4826]: I0131 07:52:14.208568 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="93e4def3-bd04-4842-8f4d-d49888336a07" containerName="mariadb-database-create" Jan 31 07:52:14 crc kubenswrapper[4826]: I0131 07:52:14.208575 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c849167-a9b5-4d73-a339-d1e8c8df922b" containerName="mariadb-account-create-update" Jan 31 07:52:14 crc kubenswrapper[4826]: I0131 07:52:14.209036 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-x66vb" Jan 31 07:52:14 crc kubenswrapper[4826]: I0131 07:52:14.219772 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 31 07:52:14 crc kubenswrapper[4826]: I0131 07:52:14.223687 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-vxvqq" Jan 31 07:52:14 crc kubenswrapper[4826]: I0131 07:52:14.234636 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-x66vb"] Jan 31 07:52:14 crc kubenswrapper[4826]: I0131 07:52:14.347927 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m588r\" (UniqueName: \"kubernetes.io/projected/4a716e83-1782-4038-b26c-7a2d7ed6095d-kube-api-access-m588r\") pod \"glance-db-sync-x66vb\" (UID: \"4a716e83-1782-4038-b26c-7a2d7ed6095d\") " pod="openstack/glance-db-sync-x66vb" Jan 31 07:52:14 crc kubenswrapper[4826]: I0131 07:52:14.348063 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a716e83-1782-4038-b26c-7a2d7ed6095d-config-data\") pod \"glance-db-sync-x66vb\" (UID: \"4a716e83-1782-4038-b26c-7a2d7ed6095d\") " pod="openstack/glance-db-sync-x66vb" Jan 31 07:52:14 crc kubenswrapper[4826]: I0131 07:52:14.348083 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a716e83-1782-4038-b26c-7a2d7ed6095d-combined-ca-bundle\") pod \"glance-db-sync-x66vb\" (UID: \"4a716e83-1782-4038-b26c-7a2d7ed6095d\") " pod="openstack/glance-db-sync-x66vb" Jan 31 07:52:14 crc kubenswrapper[4826]: I0131 07:52:14.348147 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4a716e83-1782-4038-b26c-7a2d7ed6095d-db-sync-config-data\") pod \"glance-db-sync-x66vb\" (UID: \"4a716e83-1782-4038-b26c-7a2d7ed6095d\") " pod="openstack/glance-db-sync-x66vb" Jan 31 07:52:14 crc kubenswrapper[4826]: I0131 07:52:14.449594 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4a716e83-1782-4038-b26c-7a2d7ed6095d-db-sync-config-data\") pod \"glance-db-sync-x66vb\" (UID: \"4a716e83-1782-4038-b26c-7a2d7ed6095d\") " pod="openstack/glance-db-sync-x66vb" Jan 31 07:52:14 crc kubenswrapper[4826]: I0131 07:52:14.449669 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m588r\" (UniqueName: \"kubernetes.io/projected/4a716e83-1782-4038-b26c-7a2d7ed6095d-kube-api-access-m588r\") pod \"glance-db-sync-x66vb\" (UID: \"4a716e83-1782-4038-b26c-7a2d7ed6095d\") " pod="openstack/glance-db-sync-x66vb" Jan 31 07:52:14 crc kubenswrapper[4826]: I0131 07:52:14.449719 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a716e83-1782-4038-b26c-7a2d7ed6095d-config-data\") pod \"glance-db-sync-x66vb\" (UID: \"4a716e83-1782-4038-b26c-7a2d7ed6095d\") " pod="openstack/glance-db-sync-x66vb" Jan 31 07:52:14 crc kubenswrapper[4826]: I0131 07:52:14.449736 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a716e83-1782-4038-b26c-7a2d7ed6095d-combined-ca-bundle\") pod 
\"glance-db-sync-x66vb\" (UID: \"4a716e83-1782-4038-b26c-7a2d7ed6095d\") " pod="openstack/glance-db-sync-x66vb" Jan 31 07:52:14 crc kubenswrapper[4826]: I0131 07:52:14.455541 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4a716e83-1782-4038-b26c-7a2d7ed6095d-db-sync-config-data\") pod \"glance-db-sync-x66vb\" (UID: \"4a716e83-1782-4038-b26c-7a2d7ed6095d\") " pod="openstack/glance-db-sync-x66vb" Jan 31 07:52:14 crc kubenswrapper[4826]: I0131 07:52:14.455638 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a716e83-1782-4038-b26c-7a2d7ed6095d-config-data\") pod \"glance-db-sync-x66vb\" (UID: \"4a716e83-1782-4038-b26c-7a2d7ed6095d\") " pod="openstack/glance-db-sync-x66vb" Jan 31 07:52:14 crc kubenswrapper[4826]: I0131 07:52:14.455711 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a716e83-1782-4038-b26c-7a2d7ed6095d-combined-ca-bundle\") pod \"glance-db-sync-x66vb\" (UID: \"4a716e83-1782-4038-b26c-7a2d7ed6095d\") " pod="openstack/glance-db-sync-x66vb" Jan 31 07:52:14 crc kubenswrapper[4826]: I0131 07:52:14.465818 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m588r\" (UniqueName: \"kubernetes.io/projected/4a716e83-1782-4038-b26c-7a2d7ed6095d-kube-api-access-m588r\") pod \"glance-db-sync-x66vb\" (UID: \"4a716e83-1782-4038-b26c-7a2d7ed6095d\") " pod="openstack/glance-db-sync-x66vb" Jan 31 07:52:14 crc kubenswrapper[4826]: I0131 07:52:14.533825 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-x66vb" Jan 31 07:52:14 crc kubenswrapper[4826]: I0131 07:52:14.767890 4826 generic.go:334] "Generic (PLEG): container finished" podID="bf30cab9-089e-40db-ab76-5416de684a26" containerID="dcfb3d648c3567e21a9f889b5b49f845d02222028e044f8df7cc9166df845e27" exitCode=0 Jan 31 07:52:14 crc kubenswrapper[4826]: I0131 07:52:14.767957 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bf30cab9-089e-40db-ab76-5416de684a26","Type":"ContainerDied","Data":"dcfb3d648c3567e21a9f889b5b49f845d02222028e044f8df7cc9166df845e27"} Jan 31 07:52:14 crc kubenswrapper[4826]: I0131 07:52:14.769474 4826 generic.go:334] "Generic (PLEG): container finished" podID="548eef53-f0eb-46fd-a66d-12825c7c8f67" containerID="1ca2f763127399ba4ebb27973c09d9963e06163396cf1fc9d28336443058642e" exitCode=0 Jan 31 07:52:14 crc kubenswrapper[4826]: I0131 07:52:14.769530 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"548eef53-f0eb-46fd-a66d-12825c7c8f67","Type":"ContainerDied","Data":"1ca2f763127399ba4ebb27973c09d9963e06163396cf1fc9d28336443058642e"} Jan 31 07:52:15 crc kubenswrapper[4826]: I0131 07:52:15.039094 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-x66vb"] Jan 31 07:52:15 crc kubenswrapper[4826]: W0131 07:52:15.041382 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a716e83_1782_4038_b26c_7a2d7ed6095d.slice/crio-477cb2a78d9cd03b5dc1242735cc9993be84eaa81b2e2f4d3bba71705f1acb3c WatchSource:0}: Error finding container 477cb2a78d9cd03b5dc1242735cc9993be84eaa81b2e2f4d3bba71705f1acb3c: Status 404 returned error can't find the container with id 
477cb2a78d9cd03b5dc1242735cc9993be84eaa81b2e2f4d3bba71705f1acb3c Jan 31 07:52:15 crc kubenswrapper[4826]: I0131 07:52:15.791785 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"548eef53-f0eb-46fd-a66d-12825c7c8f67","Type":"ContainerStarted","Data":"b685e612d5808cfe22f14c671ab7b2f5a916a0d11191bcc8b8bbd815fe4b5abb"} Jan 31 07:52:15 crc kubenswrapper[4826]: I0131 07:52:15.792557 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 31 07:52:15 crc kubenswrapper[4826]: I0131 07:52:15.794149 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-x66vb" event={"ID":"4a716e83-1782-4038-b26c-7a2d7ed6095d","Type":"ContainerStarted","Data":"477cb2a78d9cd03b5dc1242735cc9993be84eaa81b2e2f4d3bba71705f1acb3c"} Jan 31 07:52:15 crc kubenswrapper[4826]: I0131 07:52:15.798081 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bf30cab9-089e-40db-ab76-5416de684a26","Type":"ContainerStarted","Data":"37896a40d5c6fc30227a53b13920d80f84d208437446aa2396b2ca0879cf2c7c"} Jan 31 07:52:15 crc kubenswrapper[4826]: I0131 07:52:15.798690 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:52:15 crc kubenswrapper[4826]: I0131 07:52:15.825147 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.633240199 podStartE2EDuration="56.825128734s" podCreationTimestamp="2026-01-31 07:51:19 +0000 UTC" firstStartedPulling="2026-01-31 07:51:21.448258808 +0000 UTC m=+913.302145167" lastFinishedPulling="2026-01-31 07:51:40.640147343 +0000 UTC m=+932.494033702" observedRunningTime="2026-01-31 07:52:15.821198676 +0000 UTC m=+967.675085025" watchObservedRunningTime="2026-01-31 07:52:15.825128734 +0000 UTC m=+967.679015093" Jan 31 07:52:15 crc kubenswrapper[4826]: I0131 07:52:15.852762 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.851575233 podStartE2EDuration="56.852742972s" podCreationTimestamp="2026-01-31 07:51:19 +0000 UTC" firstStartedPulling="2026-01-31 07:51:21.600658262 +0000 UTC m=+913.454544621" lastFinishedPulling="2026-01-31 07:51:40.601826001 +0000 UTC m=+932.455712360" observedRunningTime="2026-01-31 07:52:15.845254106 +0000 UTC m=+967.699140495" watchObservedRunningTime="2026-01-31 07:52:15.852742972 +0000 UTC m=+967.706629331" Jan 31 07:52:17 crc kubenswrapper[4826]: I0131 07:52:17.118249 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5bf47b49b7-7msrb" Jan 31 07:52:17 crc kubenswrapper[4826]: I0131 07:52:17.215635 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-fbq54"] Jan 31 07:52:17 crc kubenswrapper[4826]: I0131 07:52:17.237143 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-fbq54" Jan 31 07:52:17 crc kubenswrapper[4826]: I0131 07:52:17.240146 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 31 07:52:17 crc kubenswrapper[4826]: I0131 07:52:17.240360 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-fbq54"] Jan 31 07:52:17 crc kubenswrapper[4826]: I0131 07:52:17.401420 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n55s\" (UniqueName: \"kubernetes.io/projected/8e32372c-7dea-4ff2-84d9-d49002bc57d1-kube-api-access-7n55s\") pod \"root-account-create-update-fbq54\" (UID: \"8e32372c-7dea-4ff2-84d9-d49002bc57d1\") " pod="openstack/root-account-create-update-fbq54" Jan 31 07:52:17 crc kubenswrapper[4826]: I0131 07:52:17.401512 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e32372c-7dea-4ff2-84d9-d49002bc57d1-operator-scripts\") pod \"root-account-create-update-fbq54\" (UID: \"8e32372c-7dea-4ff2-84d9-d49002bc57d1\") " pod="openstack/root-account-create-update-fbq54" Jan 31 07:52:17 crc kubenswrapper[4826]: I0131 07:52:17.406051 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-48nx2" Jan 31 07:52:17 crc kubenswrapper[4826]: I0131 07:52:17.479405 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-7msrb"] Jan 31 07:52:17 crc kubenswrapper[4826]: I0131 07:52:17.502923 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e32372c-7dea-4ff2-84d9-d49002bc57d1-operator-scripts\") pod \"root-account-create-update-fbq54\" (UID: \"8e32372c-7dea-4ff2-84d9-d49002bc57d1\") " pod="openstack/root-account-create-update-fbq54" Jan 31 07:52:17 crc kubenswrapper[4826]: I0131 07:52:17.503059 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n55s\" (UniqueName: \"kubernetes.io/projected/8e32372c-7dea-4ff2-84d9-d49002bc57d1-kube-api-access-7n55s\") pod \"root-account-create-update-fbq54\" (UID: \"8e32372c-7dea-4ff2-84d9-d49002bc57d1\") " pod="openstack/root-account-create-update-fbq54" Jan 31 07:52:17 crc kubenswrapper[4826]: I0131 07:52:17.504246 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e32372c-7dea-4ff2-84d9-d49002bc57d1-operator-scripts\") pod \"root-account-create-update-fbq54\" (UID: \"8e32372c-7dea-4ff2-84d9-d49002bc57d1\") " pod="openstack/root-account-create-update-fbq54" Jan 31 07:52:17 crc kubenswrapper[4826]: I0131 07:52:17.529126 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n55s\" (UniqueName: \"kubernetes.io/projected/8e32372c-7dea-4ff2-84d9-d49002bc57d1-kube-api-access-7n55s\") pod \"root-account-create-update-fbq54\" (UID: \"8e32372c-7dea-4ff2-84d9-d49002bc57d1\") " pod="openstack/root-account-create-update-fbq54" Jan 31 07:52:17 crc kubenswrapper[4826]: I0131 07:52:17.559932 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-fbq54" Jan 31 07:52:17 crc kubenswrapper[4826]: I0131 07:52:17.816313 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bf47b49b7-7msrb" podUID="dd8eba63-06f5-46aa-abfa-d0ec1b11725f" containerName="dnsmasq-dns" containerID="cri-o://1a3986f3b5d5947f889476cd99175d736d19b78dac848f3758a95ecf4412611b" gracePeriod=10 Jan 31 07:52:18 crc kubenswrapper[4826]: I0131 07:52:18.032746 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-fbq54"] Jan 31 07:52:18 crc kubenswrapper[4826]: I0131 07:52:18.276111 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-7msrb" Jan 31 07:52:18 crc kubenswrapper[4826]: I0131 07:52:18.420131 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dd8eba63-06f5-46aa-abfa-d0ec1b11725f-ovsdbserver-nb\") pod \"dd8eba63-06f5-46aa-abfa-d0ec1b11725f\" (UID: \"dd8eba63-06f5-46aa-abfa-d0ec1b11725f\") " Jan 31 07:52:18 crc kubenswrapper[4826]: I0131 07:52:18.420250 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd8eba63-06f5-46aa-abfa-d0ec1b11725f-config\") pod \"dd8eba63-06f5-46aa-abfa-d0ec1b11725f\" (UID: \"dd8eba63-06f5-46aa-abfa-d0ec1b11725f\") " Jan 31 07:52:18 crc kubenswrapper[4826]: I0131 07:52:18.420285 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksqf5\" (UniqueName: \"kubernetes.io/projected/dd8eba63-06f5-46aa-abfa-d0ec1b11725f-kube-api-access-ksqf5\") pod \"dd8eba63-06f5-46aa-abfa-d0ec1b11725f\" (UID: \"dd8eba63-06f5-46aa-abfa-d0ec1b11725f\") " Jan 31 07:52:18 crc kubenswrapper[4826]: I0131 07:52:18.420397 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd8eba63-06f5-46aa-abfa-d0ec1b11725f-dns-svc\") pod \"dd8eba63-06f5-46aa-abfa-d0ec1b11725f\" (UID: \"dd8eba63-06f5-46aa-abfa-d0ec1b11725f\") " Jan 31 07:52:18 crc kubenswrapper[4826]: I0131 07:52:18.425690 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd8eba63-06f5-46aa-abfa-d0ec1b11725f-kube-api-access-ksqf5" (OuterVolumeSpecName: "kube-api-access-ksqf5") pod "dd8eba63-06f5-46aa-abfa-d0ec1b11725f" (UID: "dd8eba63-06f5-46aa-abfa-d0ec1b11725f"). InnerVolumeSpecName "kube-api-access-ksqf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:52:18 crc kubenswrapper[4826]: I0131 07:52:18.462473 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd8eba63-06f5-46aa-abfa-d0ec1b11725f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dd8eba63-06f5-46aa-abfa-d0ec1b11725f" (UID: "dd8eba63-06f5-46aa-abfa-d0ec1b11725f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:52:18 crc kubenswrapper[4826]: I0131 07:52:18.466511 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd8eba63-06f5-46aa-abfa-d0ec1b11725f-config" (OuterVolumeSpecName: "config") pod "dd8eba63-06f5-46aa-abfa-d0ec1b11725f" (UID: "dd8eba63-06f5-46aa-abfa-d0ec1b11725f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:52:18 crc kubenswrapper[4826]: I0131 07:52:18.478173 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd8eba63-06f5-46aa-abfa-d0ec1b11725f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dd8eba63-06f5-46aa-abfa-d0ec1b11725f" (UID: "dd8eba63-06f5-46aa-abfa-d0ec1b11725f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:52:18 crc kubenswrapper[4826]: I0131 07:52:18.522282 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dd8eba63-06f5-46aa-abfa-d0ec1b11725f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:18 crc kubenswrapper[4826]: I0131 07:52:18.522312 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd8eba63-06f5-46aa-abfa-d0ec1b11725f-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:18 crc kubenswrapper[4826]: I0131 07:52:18.522325 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksqf5\" (UniqueName: \"kubernetes.io/projected/dd8eba63-06f5-46aa-abfa-d0ec1b11725f-kube-api-access-ksqf5\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:18 crc kubenswrapper[4826]: I0131 07:52:18.522339 4826 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd8eba63-06f5-46aa-abfa-d0ec1b11725f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:18 crc kubenswrapper[4826]: I0131 07:52:18.830649 4826 generic.go:334] "Generic (PLEG): container finished" podID="dd8eba63-06f5-46aa-abfa-d0ec1b11725f" containerID="1a3986f3b5d5947f889476cd99175d736d19b78dac848f3758a95ecf4412611b" exitCode=0 Jan 31 07:52:18 crc kubenswrapper[4826]: I0131 07:52:18.830711 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-7msrb" event={"ID":"dd8eba63-06f5-46aa-abfa-d0ec1b11725f","Type":"ContainerDied","Data":"1a3986f3b5d5947f889476cd99175d736d19b78dac848f3758a95ecf4412611b"} Jan 31 07:52:18 crc kubenswrapper[4826]: I0131 07:52:18.830731 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-7msrb" event={"ID":"dd8eba63-06f5-46aa-abfa-d0ec1b11725f","Type":"ContainerDied","Data":"692f7115e0c9c74fa67669574e035655add89f9467e67dc8283e289730f17ba3"} Jan 31 07:52:18 crc kubenswrapper[4826]: I0131 07:52:18.830749 4826 scope.go:117] "RemoveContainer" containerID="1a3986f3b5d5947f889476cd99175d736d19b78dac848f3758a95ecf4412611b" Jan 31 07:52:18 crc kubenswrapper[4826]: I0131 07:52:18.830857 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-7msrb" Jan 31 07:52:18 crc kubenswrapper[4826]: I0131 07:52:18.836864 4826 generic.go:334] "Generic (PLEG): container finished" podID="8e32372c-7dea-4ff2-84d9-d49002bc57d1" containerID="cc652b9fad3ccc67c76ec6ad10f611a322df9bd22384aae32aec35d3aea80271" exitCode=0 Jan 31 07:52:18 crc kubenswrapper[4826]: I0131 07:52:18.837169 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fbq54" event={"ID":"8e32372c-7dea-4ff2-84d9-d49002bc57d1","Type":"ContainerDied","Data":"cc652b9fad3ccc67c76ec6ad10f611a322df9bd22384aae32aec35d3aea80271"} Jan 31 07:52:18 crc kubenswrapper[4826]: I0131 07:52:18.837200 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fbq54" event={"ID":"8e32372c-7dea-4ff2-84d9-d49002bc57d1","Type":"ContainerStarted","Data":"68bfbfdbe6d67fa40883f1a4ec6c0862a9f48ead6b84f6cccb2c6207e7de041d"} Jan 31 07:52:18 crc kubenswrapper[4826]: I0131 07:52:18.856292 4826 scope.go:117] "RemoveContainer" containerID="227cf542ad4417cc29c909ad4bdcab28ae1bc8a65322e72753fb3320afc44f1d" Jan 31 07:52:18 crc kubenswrapper[4826]: I0131 07:52:18.884105 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-7msrb"] Jan 31 07:52:18 crc kubenswrapper[4826]: I0131 07:52:18.890127 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-7msrb"] Jan 31 07:52:18 crc kubenswrapper[4826]: I0131 07:52:18.902916 4826 scope.go:117] "RemoveContainer" containerID="1a3986f3b5d5947f889476cd99175d736d19b78dac848f3758a95ecf4412611b" Jan 31 07:52:18 crc kubenswrapper[4826]: E0131 07:52:18.903576 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a3986f3b5d5947f889476cd99175d736d19b78dac848f3758a95ecf4412611b\": container with ID starting with 1a3986f3b5d5947f889476cd99175d736d19b78dac848f3758a95ecf4412611b not found: ID does not exist" containerID="1a3986f3b5d5947f889476cd99175d736d19b78dac848f3758a95ecf4412611b" Jan 31 07:52:18 crc kubenswrapper[4826]: I0131 07:52:18.903664 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a3986f3b5d5947f889476cd99175d736d19b78dac848f3758a95ecf4412611b"} err="failed to get container status \"1a3986f3b5d5947f889476cd99175d736d19b78dac848f3758a95ecf4412611b\": rpc error: code = NotFound desc = could not find container \"1a3986f3b5d5947f889476cd99175d736d19b78dac848f3758a95ecf4412611b\": container with ID starting with 1a3986f3b5d5947f889476cd99175d736d19b78dac848f3758a95ecf4412611b not found: ID does not exist" Jan 31 07:52:18 crc kubenswrapper[4826]: I0131 07:52:18.903738 4826 scope.go:117] "RemoveContainer" containerID="227cf542ad4417cc29c909ad4bdcab28ae1bc8a65322e72753fb3320afc44f1d" Jan 31 07:52:18 crc kubenswrapper[4826]: E0131 07:52:18.904062 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"227cf542ad4417cc29c909ad4bdcab28ae1bc8a65322e72753fb3320afc44f1d\": container with ID starting with 227cf542ad4417cc29c909ad4bdcab28ae1bc8a65322e72753fb3320afc44f1d not found: ID does not exist" containerID="227cf542ad4417cc29c909ad4bdcab28ae1bc8a65322e72753fb3320afc44f1d" Jan 31 07:52:18 crc kubenswrapper[4826]: I0131 07:52:18.904143 4826 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"227cf542ad4417cc29c909ad4bdcab28ae1bc8a65322e72753fb3320afc44f1d"} err="failed to get container status \"227cf542ad4417cc29c909ad4bdcab28ae1bc8a65322e72753fb3320afc44f1d\": rpc error: code = NotFound desc = could not find container \"227cf542ad4417cc29c909ad4bdcab28ae1bc8a65322e72753fb3320afc44f1d\": container with ID starting with 227cf542ad4417cc29c909ad4bdcab28ae1bc8a65322e72753fb3320afc44f1d not found: ID does not exist" Jan 31 07:52:19 crc kubenswrapper[4826]: I0131 07:52:19.892574 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-p6hcp" podUID="c589c873-9e78-4905-ab37-d49329e9c84f" containerName="ovn-controller" probeResult="failure" output=< Jan 31 07:52:19 crc kubenswrapper[4826]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 31 07:52:19 crc kubenswrapper[4826]: > Jan 31 07:52:19 crc kubenswrapper[4826]: I0131 07:52:19.970606 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-5wdmn" Jan 31 07:52:19 crc kubenswrapper[4826]: I0131 07:52:19.972639 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-5wdmn" Jan 31 07:52:20 crc kubenswrapper[4826]: I0131 07:52:20.196444 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-p6hcp-config-9l22m"] Jan 31 07:52:20 crc kubenswrapper[4826]: E0131 07:52:20.197190 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd8eba63-06f5-46aa-abfa-d0ec1b11725f" containerName="dnsmasq-dns" Jan 31 07:52:20 crc kubenswrapper[4826]: I0131 07:52:20.197213 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd8eba63-06f5-46aa-abfa-d0ec1b11725f" containerName="dnsmasq-dns" Jan 31 07:52:20 crc kubenswrapper[4826]: E0131 07:52:20.197230 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd8eba63-06f5-46aa-abfa-d0ec1b11725f" containerName="init" Jan 31 07:52:20 crc kubenswrapper[4826]: I0131 07:52:20.197239 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd8eba63-06f5-46aa-abfa-d0ec1b11725f" containerName="init" Jan 31 07:52:20 crc kubenswrapper[4826]: I0131 07:52:20.197430 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd8eba63-06f5-46aa-abfa-d0ec1b11725f" containerName="dnsmasq-dns" Jan 31 07:52:20 crc kubenswrapper[4826]: I0131 07:52:20.205732 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-p6hcp-config-9l22m" Jan 31 07:52:20 crc kubenswrapper[4826]: I0131 07:52:20.208246 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 31 07:52:20 crc kubenswrapper[4826]: I0131 07:52:20.208746 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-p6hcp-config-9l22m"] Jan 31 07:52:20 crc kubenswrapper[4826]: I0131 07:52:20.232155 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-fbq54" Jan 31 07:52:20 crc kubenswrapper[4826]: I0131 07:52:20.351654 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7n55s\" (UniqueName: \"kubernetes.io/projected/8e32372c-7dea-4ff2-84d9-d49002bc57d1-kube-api-access-7n55s\") pod \"8e32372c-7dea-4ff2-84d9-d49002bc57d1\" (UID: \"8e32372c-7dea-4ff2-84d9-d49002bc57d1\") " Jan 31 07:52:20 crc kubenswrapper[4826]: I0131 07:52:20.351847 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e32372c-7dea-4ff2-84d9-d49002bc57d1-operator-scripts\") pod \"8e32372c-7dea-4ff2-84d9-d49002bc57d1\" (UID: \"8e32372c-7dea-4ff2-84d9-d49002bc57d1\") " Jan 31 07:52:20 crc kubenswrapper[4826]: I0131 07:52:20.352137 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a36b733b-b6d9-461a-82a2-0491b65f1f92-additional-scripts\") pod \"ovn-controller-p6hcp-config-9l22m\" (UID: \"a36b733b-b6d9-461a-82a2-0491b65f1f92\") " pod="openstack/ovn-controller-p6hcp-config-9l22m" Jan 31 07:52:20 crc kubenswrapper[4826]: I0131 07:52:20.352228 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a36b733b-b6d9-461a-82a2-0491b65f1f92-var-log-ovn\") pod \"ovn-controller-p6hcp-config-9l22m\" (UID: \"a36b733b-b6d9-461a-82a2-0491b65f1f92\") " pod="openstack/ovn-controller-p6hcp-config-9l22m" Jan 31 07:52:20 crc kubenswrapper[4826]: I0131 07:52:20.352278 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a36b733b-b6d9-461a-82a2-0491b65f1f92-var-run\") pod \"ovn-controller-p6hcp-config-9l22m\" (UID: \"a36b733b-b6d9-461a-82a2-0491b65f1f92\") " pod="openstack/ovn-controller-p6hcp-config-9l22m" Jan 31 07:52:20 crc kubenswrapper[4826]: I0131 07:52:20.352310 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx9kz\" (UniqueName: \"kubernetes.io/projected/a36b733b-b6d9-461a-82a2-0491b65f1f92-kube-api-access-tx9kz\") pod \"ovn-controller-p6hcp-config-9l22m\" (UID: \"a36b733b-b6d9-461a-82a2-0491b65f1f92\") " pod="openstack/ovn-controller-p6hcp-config-9l22m" Jan 31 07:52:20 crc kubenswrapper[4826]: I0131 07:52:20.352340 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a36b733b-b6d9-461a-82a2-0491b65f1f92-var-run-ovn\") pod \"ovn-controller-p6hcp-config-9l22m\" (UID: \"a36b733b-b6d9-461a-82a2-0491b65f1f92\") " pod="openstack/ovn-controller-p6hcp-config-9l22m" Jan 31 07:52:20 crc kubenswrapper[4826]: I0131 07:52:20.352480 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a36b733b-b6d9-461a-82a2-0491b65f1f92-scripts\") pod \"ovn-controller-p6hcp-config-9l22m\" (UID: \"a36b733b-b6d9-461a-82a2-0491b65f1f92\") " pod="openstack/ovn-controller-p6hcp-config-9l22m" Jan 31 07:52:20 crc kubenswrapper[4826]: I0131 07:52:20.353127 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e32372c-7dea-4ff2-84d9-d49002bc57d1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") 
pod "8e32372c-7dea-4ff2-84d9-d49002bc57d1" (UID: "8e32372c-7dea-4ff2-84d9-d49002bc57d1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:52:20 crc kubenswrapper[4826]: I0131 07:52:20.386177 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e32372c-7dea-4ff2-84d9-d49002bc57d1-kube-api-access-7n55s" (OuterVolumeSpecName: "kube-api-access-7n55s") pod "8e32372c-7dea-4ff2-84d9-d49002bc57d1" (UID: "8e32372c-7dea-4ff2-84d9-d49002bc57d1"). InnerVolumeSpecName "kube-api-access-7n55s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:52:20 crc kubenswrapper[4826]: I0131 07:52:20.455545 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a36b733b-b6d9-461a-82a2-0491b65f1f92-var-log-ovn\") pod \"ovn-controller-p6hcp-config-9l22m\" (UID: \"a36b733b-b6d9-461a-82a2-0491b65f1f92\") " pod="openstack/ovn-controller-p6hcp-config-9l22m" Jan 31 07:52:20 crc kubenswrapper[4826]: I0131 07:52:20.455626 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a36b733b-b6d9-461a-82a2-0491b65f1f92-var-run\") pod \"ovn-controller-p6hcp-config-9l22m\" (UID: \"a36b733b-b6d9-461a-82a2-0491b65f1f92\") " pod="openstack/ovn-controller-p6hcp-config-9l22m" Jan 31 07:52:20 crc kubenswrapper[4826]: I0131 07:52:20.455658 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tx9kz\" (UniqueName: \"kubernetes.io/projected/a36b733b-b6d9-461a-82a2-0491b65f1f92-kube-api-access-tx9kz\") pod \"ovn-controller-p6hcp-config-9l22m\" (UID: \"a36b733b-b6d9-461a-82a2-0491b65f1f92\") " pod="openstack/ovn-controller-p6hcp-config-9l22m" Jan 31 07:52:20 crc kubenswrapper[4826]: I0131 07:52:20.455683 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a36b733b-b6d9-461a-82a2-0491b65f1f92-var-run-ovn\") pod \"ovn-controller-p6hcp-config-9l22m\" (UID: \"a36b733b-b6d9-461a-82a2-0491b65f1f92\") " pod="openstack/ovn-controller-p6hcp-config-9l22m" Jan 31 07:52:20 crc kubenswrapper[4826]: I0131 07:52:20.455773 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a36b733b-b6d9-461a-82a2-0491b65f1f92-scripts\") pod \"ovn-controller-p6hcp-config-9l22m\" (UID: \"a36b733b-b6d9-461a-82a2-0491b65f1f92\") " pod="openstack/ovn-controller-p6hcp-config-9l22m" Jan 31 07:52:20 crc kubenswrapper[4826]: I0131 07:52:20.455795 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a36b733b-b6d9-461a-82a2-0491b65f1f92-additional-scripts\") pod \"ovn-controller-p6hcp-config-9l22m\" (UID: \"a36b733b-b6d9-461a-82a2-0491b65f1f92\") " pod="openstack/ovn-controller-p6hcp-config-9l22m" Jan 31 07:52:20 crc kubenswrapper[4826]: I0131 07:52:20.455879 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e32372c-7dea-4ff2-84d9-d49002bc57d1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:20 crc kubenswrapper[4826]: I0131 07:52:20.455894 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7n55s\" (UniqueName: \"kubernetes.io/projected/8e32372c-7dea-4ff2-84d9-d49002bc57d1-kube-api-access-7n55s\") on node 
\"crc\" DevicePath \"\"" Jan 31 07:52:20 crc kubenswrapper[4826]: I0131 07:52:20.456311 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a36b733b-b6d9-461a-82a2-0491b65f1f92-var-log-ovn\") pod \"ovn-controller-p6hcp-config-9l22m\" (UID: \"a36b733b-b6d9-461a-82a2-0491b65f1f92\") " pod="openstack/ovn-controller-p6hcp-config-9l22m" Jan 31 07:52:20 crc kubenswrapper[4826]: I0131 07:52:20.456399 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a36b733b-b6d9-461a-82a2-0491b65f1f92-var-run-ovn\") pod \"ovn-controller-p6hcp-config-9l22m\" (UID: \"a36b733b-b6d9-461a-82a2-0491b65f1f92\") " pod="openstack/ovn-controller-p6hcp-config-9l22m" Jan 31 07:52:20 crc kubenswrapper[4826]: I0131 07:52:20.456392 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a36b733b-b6d9-461a-82a2-0491b65f1f92-var-run\") pod \"ovn-controller-p6hcp-config-9l22m\" (UID: \"a36b733b-b6d9-461a-82a2-0491b65f1f92\") " pod="openstack/ovn-controller-p6hcp-config-9l22m" Jan 31 07:52:20 crc kubenswrapper[4826]: I0131 07:52:20.456604 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a36b733b-b6d9-461a-82a2-0491b65f1f92-additional-scripts\") pod \"ovn-controller-p6hcp-config-9l22m\" (UID: \"a36b733b-b6d9-461a-82a2-0491b65f1f92\") " pod="openstack/ovn-controller-p6hcp-config-9l22m" Jan 31 07:52:20 crc kubenswrapper[4826]: I0131 07:52:20.457924 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a36b733b-b6d9-461a-82a2-0491b65f1f92-scripts\") pod \"ovn-controller-p6hcp-config-9l22m\" (UID: \"a36b733b-b6d9-461a-82a2-0491b65f1f92\") " pod="openstack/ovn-controller-p6hcp-config-9l22m" Jan 31 07:52:20 crc kubenswrapper[4826]: I0131 07:52:20.488914 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tx9kz\" (UniqueName: \"kubernetes.io/projected/a36b733b-b6d9-461a-82a2-0491b65f1f92-kube-api-access-tx9kz\") pod \"ovn-controller-p6hcp-config-9l22m\" (UID: \"a36b733b-b6d9-461a-82a2-0491b65f1f92\") " pod="openstack/ovn-controller-p6hcp-config-9l22m" Jan 31 07:52:20 crc kubenswrapper[4826]: I0131 07:52:20.527061 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-p6hcp-config-9l22m" Jan 31 07:52:20 crc kubenswrapper[4826]: I0131 07:52:20.820870 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd8eba63-06f5-46aa-abfa-d0ec1b11725f" path="/var/lib/kubelet/pods/dd8eba63-06f5-46aa-abfa-d0ec1b11725f/volumes" Jan 31 07:52:20 crc kubenswrapper[4826]: I0131 07:52:20.858189 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-fbq54" Jan 31 07:52:20 crc kubenswrapper[4826]: I0131 07:52:20.858604 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fbq54" event={"ID":"8e32372c-7dea-4ff2-84d9-d49002bc57d1","Type":"ContainerDied","Data":"68bfbfdbe6d67fa40883f1a4ec6c0862a9f48ead6b84f6cccb2c6207e7de041d"} Jan 31 07:52:20 crc kubenswrapper[4826]: I0131 07:52:20.858624 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68bfbfdbe6d67fa40883f1a4ec6c0862a9f48ead6b84f6cccb2c6207e7de041d" Jan 31 07:52:21 crc kubenswrapper[4826]: I0131 07:52:21.027467 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-p6hcp-config-9l22m"] Jan 31 07:52:21 crc kubenswrapper[4826]: I0131 07:52:21.869149 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-p6hcp-config-9l22m" event={"ID":"a36b733b-b6d9-461a-82a2-0491b65f1f92","Type":"ContainerStarted","Data":"340b0443ff4b5d664881fb638782644327f93c51eb77874db76272f9e7886b9f"} Jan 31 07:52:21 crc kubenswrapper[4826]: I0131 07:52:21.869524 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-p6hcp-config-9l22m" event={"ID":"a36b733b-b6d9-461a-82a2-0491b65f1f92","Type":"ContainerStarted","Data":"2f5190e86b0268f33839e6b2652ae3370fc551277e715159e3927c3b51288821"} Jan 31 07:52:22 crc kubenswrapper[4826]: E0131 07:52:22.256753 4826 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda36b733b_b6d9_461a_82a2_0491b65f1f92.slice/crio-340b0443ff4b5d664881fb638782644327f93c51eb77874db76272f9e7886b9f.scope\": RecentStats: unable to find data in memory cache]" Jan 31 07:52:22 crc kubenswrapper[4826]: I0131 07:52:22.878898 4826 generic.go:334] "Generic (PLEG): container finished" podID="a36b733b-b6d9-461a-82a2-0491b65f1f92" containerID="340b0443ff4b5d664881fb638782644327f93c51eb77874db76272f9e7886b9f" exitCode=0 Jan 31 07:52:22 crc kubenswrapper[4826]: I0131 07:52:22.879047 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-p6hcp-config-9l22m" event={"ID":"a36b733b-b6d9-461a-82a2-0491b65f1f92","Type":"ContainerDied","Data":"340b0443ff4b5d664881fb638782644327f93c51eb77874db76272f9e7886b9f"} Jan 31 07:52:23 crc kubenswrapper[4826]: I0131 07:52:23.250315 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 31 07:52:24 crc kubenswrapper[4826]: I0131 07:52:24.884714 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-p6hcp" Jan 31 07:52:27 crc kubenswrapper[4826]: I0131 07:52:27.377030 4826 patch_prober.go:28] interesting pod/machine-config-daemon-8v6ng container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 07:52:27 crc kubenswrapper[4826]: I0131 07:52:27.377403 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 07:52:30 crc kubenswrapper[4826]: I0131 07:52:30.837884 
4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-p6hcp-config-9l22m" Jan 31 07:52:30 crc kubenswrapper[4826]: I0131 07:52:30.907165 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 31 07:52:30 crc kubenswrapper[4826]: I0131 07:52:30.935891 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a36b733b-b6d9-461a-82a2-0491b65f1f92-var-run\") pod \"a36b733b-b6d9-461a-82a2-0491b65f1f92\" (UID: \"a36b733b-b6d9-461a-82a2-0491b65f1f92\") " Jan 31 07:52:30 crc kubenswrapper[4826]: I0131 07:52:30.935951 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a36b733b-b6d9-461a-82a2-0491b65f1f92-var-log-ovn\") pod \"a36b733b-b6d9-461a-82a2-0491b65f1f92\" (UID: \"a36b733b-b6d9-461a-82a2-0491b65f1f92\") " Jan 31 07:52:30 crc kubenswrapper[4826]: I0131 07:52:30.935996 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tx9kz\" (UniqueName: \"kubernetes.io/projected/a36b733b-b6d9-461a-82a2-0491b65f1f92-kube-api-access-tx9kz\") pod \"a36b733b-b6d9-461a-82a2-0491b65f1f92\" (UID: \"a36b733b-b6d9-461a-82a2-0491b65f1f92\") " Jan 31 07:52:30 crc kubenswrapper[4826]: I0131 07:52:30.936045 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a36b733b-b6d9-461a-82a2-0491b65f1f92-var-run-ovn\") pod \"a36b733b-b6d9-461a-82a2-0491b65f1f92\" (UID: \"a36b733b-b6d9-461a-82a2-0491b65f1f92\") " Jan 31 07:52:30 crc kubenswrapper[4826]: I0131 07:52:30.936050 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a36b733b-b6d9-461a-82a2-0491b65f1f92-var-run" (OuterVolumeSpecName: "var-run") pod "a36b733b-b6d9-461a-82a2-0491b65f1f92" (UID: "a36b733b-b6d9-461a-82a2-0491b65f1f92"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:52:30 crc kubenswrapper[4826]: I0131 07:52:30.936091 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a36b733b-b6d9-461a-82a2-0491b65f1f92-additional-scripts\") pod \"a36b733b-b6d9-461a-82a2-0491b65f1f92\" (UID: \"a36b733b-b6d9-461a-82a2-0491b65f1f92\") " Jan 31 07:52:30 crc kubenswrapper[4826]: I0131 07:52:30.936115 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a36b733b-b6d9-461a-82a2-0491b65f1f92-scripts\") pod \"a36b733b-b6d9-461a-82a2-0491b65f1f92\" (UID: \"a36b733b-b6d9-461a-82a2-0491b65f1f92\") " Jan 31 07:52:30 crc kubenswrapper[4826]: I0131 07:52:30.936425 4826 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a36b733b-b6d9-461a-82a2-0491b65f1f92-var-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:30 crc kubenswrapper[4826]: I0131 07:52:30.936064 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a36b733b-b6d9-461a-82a2-0491b65f1f92-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "a36b733b-b6d9-461a-82a2-0491b65f1f92" (UID: "a36b733b-b6d9-461a-82a2-0491b65f1f92"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:52:30 crc kubenswrapper[4826]: I0131 07:52:30.936107 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a36b733b-b6d9-461a-82a2-0491b65f1f92-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "a36b733b-b6d9-461a-82a2-0491b65f1f92" (UID: "a36b733b-b6d9-461a-82a2-0491b65f1f92"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:52:30 crc kubenswrapper[4826]: I0131 07:52:30.936872 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a36b733b-b6d9-461a-82a2-0491b65f1f92-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "a36b733b-b6d9-461a-82a2-0491b65f1f92" (UID: "a36b733b-b6d9-461a-82a2-0491b65f1f92"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:52:30 crc kubenswrapper[4826]: I0131 07:52:30.937425 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a36b733b-b6d9-461a-82a2-0491b65f1f92-scripts" (OuterVolumeSpecName: "scripts") pod "a36b733b-b6d9-461a-82a2-0491b65f1f92" (UID: "a36b733b-b6d9-461a-82a2-0491b65f1f92"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:52:30 crc kubenswrapper[4826]: I0131 07:52:30.942563 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a36b733b-b6d9-461a-82a2-0491b65f1f92-kube-api-access-tx9kz" (OuterVolumeSpecName: "kube-api-access-tx9kz") pod "a36b733b-b6d9-461a-82a2-0491b65f1f92" (UID: "a36b733b-b6d9-461a-82a2-0491b65f1f92"). InnerVolumeSpecName "kube-api-access-tx9kz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:52:30 crc kubenswrapper[4826]: I0131 07:52:30.951764 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-p6hcp-config-9l22m" event={"ID":"a36b733b-b6d9-461a-82a2-0491b65f1f92","Type":"ContainerDied","Data":"2f5190e86b0268f33839e6b2652ae3370fc551277e715159e3927c3b51288821"} Jan 31 07:52:30 crc kubenswrapper[4826]: I0131 07:52:30.951809 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f5190e86b0268f33839e6b2652ae3370fc551277e715159e3927c3b51288821" Jan 31 07:52:30 crc kubenswrapper[4826]: I0131 07:52:30.951831 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-p6hcp-config-9l22m" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.038310 4826 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a36b733b-b6d9-461a-82a2-0491b65f1f92-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.038358 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tx9kz\" (UniqueName: \"kubernetes.io/projected/a36b733b-b6d9-461a-82a2-0491b65f1f92-kube-api-access-tx9kz\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.038367 4826 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a36b733b-b6d9-461a-82a2-0491b65f1f92-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.038376 4826 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a36b733b-b6d9-461a-82a2-0491b65f1f92-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.038386 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a36b733b-b6d9-461a-82a2-0491b65f1f92-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.047285 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.351173 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-xw82h"] Jan 31 07:52:31 crc kubenswrapper[4826]: E0131 07:52:31.351768 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e32372c-7dea-4ff2-84d9-d49002bc57d1" containerName="mariadb-account-create-update" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.351783 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e32372c-7dea-4ff2-84d9-d49002bc57d1" containerName="mariadb-account-create-update" Jan 31 07:52:31 crc kubenswrapper[4826]: E0131 07:52:31.351796 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a36b733b-b6d9-461a-82a2-0491b65f1f92" containerName="ovn-config" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.351802 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="a36b733b-b6d9-461a-82a2-0491b65f1f92" containerName="ovn-config" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.351937 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e32372c-7dea-4ff2-84d9-d49002bc57d1" containerName="mariadb-account-create-update" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.351958 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="a36b733b-b6d9-461a-82a2-0491b65f1f92" containerName="ovn-config" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.352488 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-xw82h" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.358472 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-0f7c-account-create-update-lr26l"] Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.359468 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-0f7c-account-create-update-lr26l" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.363188 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.364454 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-0f7c-account-create-update-lr26l"] Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.374890 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-xw82h"] Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.444743 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34349938-3d1a-4df5-a6a2-b43beedb876f-operator-scripts\") pod \"cinder-db-create-xw82h\" (UID: \"34349938-3d1a-4df5-a6a2-b43beedb876f\") " pod="openstack/cinder-db-create-xw82h" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.444811 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg579\" (UniqueName: \"kubernetes.io/projected/850f48ed-5da5-420a-8c60-20a5af3352b1-kube-api-access-mg579\") pod \"barbican-0f7c-account-create-update-lr26l\" (UID: \"850f48ed-5da5-420a-8c60-20a5af3352b1\") " pod="openstack/barbican-0f7c-account-create-update-lr26l" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.445175 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsnh5\" (UniqueName: \"kubernetes.io/projected/34349938-3d1a-4df5-a6a2-b43beedb876f-kube-api-access-zsnh5\") pod \"cinder-db-create-xw82h\" (UID: \"34349938-3d1a-4df5-a6a2-b43beedb876f\") " pod="openstack/cinder-db-create-xw82h" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.445242 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/850f48ed-5da5-420a-8c60-20a5af3352b1-operator-scripts\") pod \"barbican-0f7c-account-create-update-lr26l\" (UID: \"850f48ed-5da5-420a-8c60-20a5af3352b1\") " pod="openstack/barbican-0f7c-account-create-update-lr26l" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.459007 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-cjn6r"] Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.460028 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-cjn6r" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.465216 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-7548-account-create-update-9m89j"] Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.466211 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-7548-account-create-update-9m89j" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.468246 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.504645 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-7548-account-create-update-9m89j"] Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.518033 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-cjn6r"] Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.547793 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9696r\" (UniqueName: \"kubernetes.io/projected/898f03be-9509-4645-b54c-bf988d058b35-kube-api-access-9696r\") pod \"cinder-7548-account-create-update-9m89j\" (UID: \"898f03be-9509-4645-b54c-bf988d058b35\") " pod="openstack/cinder-7548-account-create-update-9m89j" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.547856 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b83cf8ec-b8c1-4364-8dc3-e5a14e2cb65b-operator-scripts\") pod \"barbican-db-create-cjn6r\" (UID: \"b83cf8ec-b8c1-4364-8dc3-e5a14e2cb65b\") " pod="openstack/barbican-db-create-cjn6r" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.547912 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/898f03be-9509-4645-b54c-bf988d058b35-operator-scripts\") pod \"cinder-7548-account-create-update-9m89j\" (UID: \"898f03be-9509-4645-b54c-bf988d058b35\") " pod="openstack/cinder-7548-account-create-update-9m89j" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.547937 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsnh5\" (UniqueName: \"kubernetes.io/projected/34349938-3d1a-4df5-a6a2-b43beedb876f-kube-api-access-zsnh5\") pod \"cinder-db-create-xw82h\" (UID: \"34349938-3d1a-4df5-a6a2-b43beedb876f\") " pod="openstack/cinder-db-create-xw82h" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.547959 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2x7p\" (UniqueName: \"kubernetes.io/projected/b83cf8ec-b8c1-4364-8dc3-e5a14e2cb65b-kube-api-access-q2x7p\") pod \"barbican-db-create-cjn6r\" (UID: \"b83cf8ec-b8c1-4364-8dc3-e5a14e2cb65b\") " pod="openstack/barbican-db-create-cjn6r" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.547992 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/850f48ed-5da5-420a-8c60-20a5af3352b1-operator-scripts\") pod \"barbican-0f7c-account-create-update-lr26l\" (UID: \"850f48ed-5da5-420a-8c60-20a5af3352b1\") " pod="openstack/barbican-0f7c-account-create-update-lr26l" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.548037 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34349938-3d1a-4df5-a6a2-b43beedb876f-operator-scripts\") pod \"cinder-db-create-xw82h\" (UID: \"34349938-3d1a-4df5-a6a2-b43beedb876f\") " pod="openstack/cinder-db-create-xw82h" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.548071 4826 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mg579\" (UniqueName: \"kubernetes.io/projected/850f48ed-5da5-420a-8c60-20a5af3352b1-kube-api-access-mg579\") pod \"barbican-0f7c-account-create-update-lr26l\" (UID: \"850f48ed-5da5-420a-8c60-20a5af3352b1\") " pod="openstack/barbican-0f7c-account-create-update-lr26l" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.549129 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/850f48ed-5da5-420a-8c60-20a5af3352b1-operator-scripts\") pod \"barbican-0f7c-account-create-update-lr26l\" (UID: \"850f48ed-5da5-420a-8c60-20a5af3352b1\") " pod="openstack/barbican-0f7c-account-create-update-lr26l" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.549427 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34349938-3d1a-4df5-a6a2-b43beedb876f-operator-scripts\") pod \"cinder-db-create-xw82h\" (UID: \"34349938-3d1a-4df5-a6a2-b43beedb876f\") " pod="openstack/cinder-db-create-xw82h" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.573566 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsnh5\" (UniqueName: \"kubernetes.io/projected/34349938-3d1a-4df5-a6a2-b43beedb876f-kube-api-access-zsnh5\") pod \"cinder-db-create-xw82h\" (UID: \"34349938-3d1a-4df5-a6a2-b43beedb876f\") " pod="openstack/cinder-db-create-xw82h" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.593668 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mg579\" (UniqueName: \"kubernetes.io/projected/850f48ed-5da5-420a-8c60-20a5af3352b1-kube-api-access-mg579\") pod \"barbican-0f7c-account-create-update-lr26l\" (UID: \"850f48ed-5da5-420a-8c60-20a5af3352b1\") " pod="openstack/barbican-0f7c-account-create-update-lr26l" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.650002 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/898f03be-9509-4645-b54c-bf988d058b35-operator-scripts\") pod \"cinder-7548-account-create-update-9m89j\" (UID: \"898f03be-9509-4645-b54c-bf988d058b35\") " pod="openstack/cinder-7548-account-create-update-9m89j" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.650049 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2x7p\" (UniqueName: \"kubernetes.io/projected/b83cf8ec-b8c1-4364-8dc3-e5a14e2cb65b-kube-api-access-q2x7p\") pod \"barbican-db-create-cjn6r\" (UID: \"b83cf8ec-b8c1-4364-8dc3-e5a14e2cb65b\") " pod="openstack/barbican-db-create-cjn6r" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.650182 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9696r\" (UniqueName: \"kubernetes.io/projected/898f03be-9509-4645-b54c-bf988d058b35-kube-api-access-9696r\") pod \"cinder-7548-account-create-update-9m89j\" (UID: \"898f03be-9509-4645-b54c-bf988d058b35\") " pod="openstack/cinder-7548-account-create-update-9m89j" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.650222 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b83cf8ec-b8c1-4364-8dc3-e5a14e2cb65b-operator-scripts\") pod \"barbican-db-create-cjn6r\" (UID: \"b83cf8ec-b8c1-4364-8dc3-e5a14e2cb65b\") " 
pod="openstack/barbican-db-create-cjn6r" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.652468 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b83cf8ec-b8c1-4364-8dc3-e5a14e2cb65b-operator-scripts\") pod \"barbican-db-create-cjn6r\" (UID: \"b83cf8ec-b8c1-4364-8dc3-e5a14e2cb65b\") " pod="openstack/barbican-db-create-cjn6r" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.654705 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/898f03be-9509-4645-b54c-bf988d058b35-operator-scripts\") pod \"cinder-7548-account-create-update-9m89j\" (UID: \"898f03be-9509-4645-b54c-bf988d058b35\") " pod="openstack/cinder-7548-account-create-update-9m89j" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.670459 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-xw82h" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.672354 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2x7p\" (UniqueName: \"kubernetes.io/projected/b83cf8ec-b8c1-4364-8dc3-e5a14e2cb65b-kube-api-access-q2x7p\") pod \"barbican-db-create-cjn6r\" (UID: \"b83cf8ec-b8c1-4364-8dc3-e5a14e2cb65b\") " pod="openstack/barbican-db-create-cjn6r" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.672719 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9696r\" (UniqueName: \"kubernetes.io/projected/898f03be-9509-4645-b54c-bf988d058b35-kube-api-access-9696r\") pod \"cinder-7548-account-create-update-9m89j\" (UID: \"898f03be-9509-4645-b54c-bf988d058b35\") " pod="openstack/cinder-7548-account-create-update-9m89j" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.685372 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0f7c-account-create-update-lr26l" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.710036 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-fbmh2"] Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.710957 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-fbmh2" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.712403 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-jzds9" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.713687 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.713951 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.715543 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.726578 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-9c4f-account-create-update-gccqs"] Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.727558 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-9c4f-account-create-update-gccqs" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.728844 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.732724 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-fbmh2"] Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.752880 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-9c4f-account-create-update-gccqs"] Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.791176 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-cjn6r" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.796197 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7548-account-create-update-9m89j" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.801254 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-h4scr"] Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.802247 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-h4scr" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.820674 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-h4scr"] Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.853790 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfa04c1e-e73b-4ba1-b5b3-13bc4099c9e2-config-data\") pod \"keystone-db-sync-fbmh2\" (UID: \"bfa04c1e-e73b-4ba1-b5b3-13bc4099c9e2\") " pod="openstack/keystone-db-sync-fbmh2" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.853994 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f612133-f0fe-4418-be06-d50f6df59ea7-operator-scripts\") pod \"neutron-9c4f-account-create-update-gccqs\" (UID: \"7f612133-f0fe-4418-be06-d50f6df59ea7\") " pod="openstack/neutron-9c4f-account-create-update-gccqs" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.854313 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czsgt\" (UniqueName: \"kubernetes.io/projected/bfa04c1e-e73b-4ba1-b5b3-13bc4099c9e2-kube-api-access-czsgt\") pod \"keystone-db-sync-fbmh2\" (UID: \"bfa04c1e-e73b-4ba1-b5b3-13bc4099c9e2\") " pod="openstack/keystone-db-sync-fbmh2" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.854377 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa04c1e-e73b-4ba1-b5b3-13bc4099c9e2-combined-ca-bundle\") pod \"keystone-db-sync-fbmh2\" (UID: \"bfa04c1e-e73b-4ba1-b5b3-13bc4099c9e2\") " pod="openstack/keystone-db-sync-fbmh2" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.854861 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjnxr\" (UniqueName: \"kubernetes.io/projected/7f612133-f0fe-4418-be06-d50f6df59ea7-kube-api-access-vjnxr\") pod \"neutron-9c4f-account-create-update-gccqs\" (UID: \"7f612133-f0fe-4418-be06-d50f6df59ea7\") " pod="openstack/neutron-9c4f-account-create-update-gccqs" Jan 31 07:52:31 crc 
kubenswrapper[4826]: I0131 07:52:31.943478 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-p6hcp-config-9l22m"] Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.949503 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-p6hcp-config-9l22m"] Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.956692 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa04c1e-e73b-4ba1-b5b3-13bc4099c9e2-combined-ca-bundle\") pod \"keystone-db-sync-fbmh2\" (UID: \"bfa04c1e-e73b-4ba1-b5b3-13bc4099c9e2\") " pod="openstack/keystone-db-sync-fbmh2" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.956757 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjnxr\" (UniqueName: \"kubernetes.io/projected/7f612133-f0fe-4418-be06-d50f6df59ea7-kube-api-access-vjnxr\") pod \"neutron-9c4f-account-create-update-gccqs\" (UID: \"7f612133-f0fe-4418-be06-d50f6df59ea7\") " pod="openstack/neutron-9c4f-account-create-update-gccqs" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.956786 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2g68\" (UniqueName: \"kubernetes.io/projected/d319859f-f891-4be6-af4b-a067d72a9726-kube-api-access-z2g68\") pod \"neutron-db-create-h4scr\" (UID: \"d319859f-f891-4be6-af4b-a067d72a9726\") " pod="openstack/neutron-db-create-h4scr" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.956826 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d319859f-f891-4be6-af4b-a067d72a9726-operator-scripts\") pod \"neutron-db-create-h4scr\" (UID: \"d319859f-f891-4be6-af4b-a067d72a9726\") " pod="openstack/neutron-db-create-h4scr" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.956851 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfa04c1e-e73b-4ba1-b5b3-13bc4099c9e2-config-data\") pod \"keystone-db-sync-fbmh2\" (UID: \"bfa04c1e-e73b-4ba1-b5b3-13bc4099c9e2\") " pod="openstack/keystone-db-sync-fbmh2" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.956923 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f612133-f0fe-4418-be06-d50f6df59ea7-operator-scripts\") pod \"neutron-9c4f-account-create-update-gccqs\" (UID: \"7f612133-f0fe-4418-be06-d50f6df59ea7\") " pod="openstack/neutron-9c4f-account-create-update-gccqs" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.956985 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czsgt\" (UniqueName: \"kubernetes.io/projected/bfa04c1e-e73b-4ba1-b5b3-13bc4099c9e2-kube-api-access-czsgt\") pod \"keystone-db-sync-fbmh2\" (UID: \"bfa04c1e-e73b-4ba1-b5b3-13bc4099c9e2\") " pod="openstack/keystone-db-sync-fbmh2" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.958281 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f612133-f0fe-4418-be06-d50f6df59ea7-operator-scripts\") pod \"neutron-9c4f-account-create-update-gccqs\" (UID: \"7f612133-f0fe-4418-be06-d50f6df59ea7\") " pod="openstack/neutron-9c4f-account-create-update-gccqs" Jan 31 07:52:31 crc 
kubenswrapper[4826]: I0131 07:52:31.964496 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa04c1e-e73b-4ba1-b5b3-13bc4099c9e2-combined-ca-bundle\") pod \"keystone-db-sync-fbmh2\" (UID: \"bfa04c1e-e73b-4ba1-b5b3-13bc4099c9e2\") " pod="openstack/keystone-db-sync-fbmh2" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.965763 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfa04c1e-e73b-4ba1-b5b3-13bc4099c9e2-config-data\") pod \"keystone-db-sync-fbmh2\" (UID: \"bfa04c1e-e73b-4ba1-b5b3-13bc4099c9e2\") " pod="openstack/keystone-db-sync-fbmh2" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.971956 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjnxr\" (UniqueName: \"kubernetes.io/projected/7f612133-f0fe-4418-be06-d50f6df59ea7-kube-api-access-vjnxr\") pod \"neutron-9c4f-account-create-update-gccqs\" (UID: \"7f612133-f0fe-4418-be06-d50f6df59ea7\") " pod="openstack/neutron-9c4f-account-create-update-gccqs" Jan 31 07:52:31 crc kubenswrapper[4826]: I0131 07:52:31.984468 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czsgt\" (UniqueName: \"kubernetes.io/projected/bfa04c1e-e73b-4ba1-b5b3-13bc4099c9e2-kube-api-access-czsgt\") pod \"keystone-db-sync-fbmh2\" (UID: \"bfa04c1e-e73b-4ba1-b5b3-13bc4099c9e2\") " pod="openstack/keystone-db-sync-fbmh2" Jan 31 07:52:32 crc kubenswrapper[4826]: I0131 07:52:32.026255 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-fbmh2" Jan 31 07:52:32 crc kubenswrapper[4826]: I0131 07:52:32.048259 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-p6hcp-config-72cb5"] Jan 31 07:52:32 crc kubenswrapper[4826]: I0131 07:52:32.049243 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-p6hcp-config-72cb5" Jan 31 07:52:32 crc kubenswrapper[4826]: I0131 07:52:32.050274 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-9c4f-account-create-update-gccqs" Jan 31 07:52:32 crc kubenswrapper[4826]: I0131 07:52:32.051282 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 31 07:52:32 crc kubenswrapper[4826]: I0131 07:52:32.058016 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2g68\" (UniqueName: \"kubernetes.io/projected/d319859f-f891-4be6-af4b-a067d72a9726-kube-api-access-z2g68\") pod \"neutron-db-create-h4scr\" (UID: \"d319859f-f891-4be6-af4b-a067d72a9726\") " pod="openstack/neutron-db-create-h4scr" Jan 31 07:52:32 crc kubenswrapper[4826]: I0131 07:52:32.058065 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d319859f-f891-4be6-af4b-a067d72a9726-operator-scripts\") pod \"neutron-db-create-h4scr\" (UID: \"d319859f-f891-4be6-af4b-a067d72a9726\") " pod="openstack/neutron-db-create-h4scr" Jan 31 07:52:32 crc kubenswrapper[4826]: I0131 07:52:32.058928 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d319859f-f891-4be6-af4b-a067d72a9726-operator-scripts\") pod \"neutron-db-create-h4scr\" (UID: \"d319859f-f891-4be6-af4b-a067d72a9726\") " pod="openstack/neutron-db-create-h4scr" Jan 31 07:52:32 crc kubenswrapper[4826]: I0131 07:52:32.068060 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-p6hcp-config-72cb5"] Jan 31 07:52:32 crc kubenswrapper[4826]: I0131 07:52:32.081672 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2g68\" (UniqueName: \"kubernetes.io/projected/d319859f-f891-4be6-af4b-a067d72a9726-kube-api-access-z2g68\") pod \"neutron-db-create-h4scr\" (UID: \"d319859f-f891-4be6-af4b-a067d72a9726\") " pod="openstack/neutron-db-create-h4scr" Jan 31 07:52:32 crc kubenswrapper[4826]: I0131 07:52:32.147940 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-h4scr" Jan 31 07:52:32 crc kubenswrapper[4826]: I0131 07:52:32.159349 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a0b0eb1f-858a-412d-bf8f-d0ec21b06f67-var-log-ovn\") pod \"ovn-controller-p6hcp-config-72cb5\" (UID: \"a0b0eb1f-858a-412d-bf8f-d0ec21b06f67\") " pod="openstack/ovn-controller-p6hcp-config-72cb5" Jan 31 07:52:32 crc kubenswrapper[4826]: I0131 07:52:32.159437 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a0b0eb1f-858a-412d-bf8f-d0ec21b06f67-var-run\") pod \"ovn-controller-p6hcp-config-72cb5\" (UID: \"a0b0eb1f-858a-412d-bf8f-d0ec21b06f67\") " pod="openstack/ovn-controller-p6hcp-config-72cb5" Jan 31 07:52:32 crc kubenswrapper[4826]: I0131 07:52:32.159505 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a0b0eb1f-858a-412d-bf8f-d0ec21b06f67-scripts\") pod \"ovn-controller-p6hcp-config-72cb5\" (UID: \"a0b0eb1f-858a-412d-bf8f-d0ec21b06f67\") " pod="openstack/ovn-controller-p6hcp-config-72cb5" Jan 31 07:52:32 crc kubenswrapper[4826]: I0131 07:52:32.159547 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv246\" (UniqueName: \"kubernetes.io/projected/a0b0eb1f-858a-412d-bf8f-d0ec21b06f67-kube-api-access-wv246\") pod \"ovn-controller-p6hcp-config-72cb5\" (UID: \"a0b0eb1f-858a-412d-bf8f-d0ec21b06f67\") " pod="openstack/ovn-controller-p6hcp-config-72cb5" Jan 31 07:52:32 crc kubenswrapper[4826]: I0131 07:52:32.159581 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a0b0eb1f-858a-412d-bf8f-d0ec21b06f67-var-run-ovn\") pod \"ovn-controller-p6hcp-config-72cb5\" (UID: \"a0b0eb1f-858a-412d-bf8f-d0ec21b06f67\") " pod="openstack/ovn-controller-p6hcp-config-72cb5" Jan 31 07:52:32 crc kubenswrapper[4826]: I0131 07:52:32.159598 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a0b0eb1f-858a-412d-bf8f-d0ec21b06f67-additional-scripts\") pod \"ovn-controller-p6hcp-config-72cb5\" (UID: \"a0b0eb1f-858a-412d-bf8f-d0ec21b06f67\") " pod="openstack/ovn-controller-p6hcp-config-72cb5" Jan 31 07:52:32 crc kubenswrapper[4826]: I0131 07:52:32.265036 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wv246\" (UniqueName: \"kubernetes.io/projected/a0b0eb1f-858a-412d-bf8f-d0ec21b06f67-kube-api-access-wv246\") pod \"ovn-controller-p6hcp-config-72cb5\" (UID: \"a0b0eb1f-858a-412d-bf8f-d0ec21b06f67\") " pod="openstack/ovn-controller-p6hcp-config-72cb5" Jan 31 07:52:32 crc kubenswrapper[4826]: I0131 07:52:32.265406 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a0b0eb1f-858a-412d-bf8f-d0ec21b06f67-var-run-ovn\") pod \"ovn-controller-p6hcp-config-72cb5\" (UID: \"a0b0eb1f-858a-412d-bf8f-d0ec21b06f67\") " pod="openstack/ovn-controller-p6hcp-config-72cb5" Jan 31 07:52:32 crc kubenswrapper[4826]: I0131 07:52:32.265433 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/a0b0eb1f-858a-412d-bf8f-d0ec21b06f67-additional-scripts\") pod \"ovn-controller-p6hcp-config-72cb5\" (UID: \"a0b0eb1f-858a-412d-bf8f-d0ec21b06f67\") " pod="openstack/ovn-controller-p6hcp-config-72cb5" Jan 31 07:52:32 crc kubenswrapper[4826]: I0131 07:52:32.265490 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a0b0eb1f-858a-412d-bf8f-d0ec21b06f67-var-log-ovn\") pod \"ovn-controller-p6hcp-config-72cb5\" (UID: \"a0b0eb1f-858a-412d-bf8f-d0ec21b06f67\") " pod="openstack/ovn-controller-p6hcp-config-72cb5" Jan 31 07:52:32 crc kubenswrapper[4826]: I0131 07:52:32.265563 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a0b0eb1f-858a-412d-bf8f-d0ec21b06f67-var-run\") pod \"ovn-controller-p6hcp-config-72cb5\" (UID: \"a0b0eb1f-858a-412d-bf8f-d0ec21b06f67\") " pod="openstack/ovn-controller-p6hcp-config-72cb5" Jan 31 07:52:32 crc kubenswrapper[4826]: I0131 07:52:32.265613 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a0b0eb1f-858a-412d-bf8f-d0ec21b06f67-scripts\") pod \"ovn-controller-p6hcp-config-72cb5\" (UID: \"a0b0eb1f-858a-412d-bf8f-d0ec21b06f67\") " pod="openstack/ovn-controller-p6hcp-config-72cb5" Jan 31 07:52:32 crc kubenswrapper[4826]: I0131 07:52:32.266582 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a0b0eb1f-858a-412d-bf8f-d0ec21b06f67-additional-scripts\") pod \"ovn-controller-p6hcp-config-72cb5\" (UID: \"a0b0eb1f-858a-412d-bf8f-d0ec21b06f67\") " pod="openstack/ovn-controller-p6hcp-config-72cb5" Jan 31 07:52:32 crc kubenswrapper[4826]: I0131 07:52:32.267184 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a0b0eb1f-858a-412d-bf8f-d0ec21b06f67-var-log-ovn\") pod \"ovn-controller-p6hcp-config-72cb5\" (UID: \"a0b0eb1f-858a-412d-bf8f-d0ec21b06f67\") " pod="openstack/ovn-controller-p6hcp-config-72cb5" Jan 31 07:52:32 crc kubenswrapper[4826]: I0131 07:52:32.267309 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a0b0eb1f-858a-412d-bf8f-d0ec21b06f67-var-run\") pod \"ovn-controller-p6hcp-config-72cb5\" (UID: \"a0b0eb1f-858a-412d-bf8f-d0ec21b06f67\") " pod="openstack/ovn-controller-p6hcp-config-72cb5" Jan 31 07:52:32 crc kubenswrapper[4826]: I0131 07:52:32.268034 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a0b0eb1f-858a-412d-bf8f-d0ec21b06f67-scripts\") pod \"ovn-controller-p6hcp-config-72cb5\" (UID: \"a0b0eb1f-858a-412d-bf8f-d0ec21b06f67\") " pod="openstack/ovn-controller-p6hcp-config-72cb5" Jan 31 07:52:32 crc kubenswrapper[4826]: I0131 07:52:32.268053 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a0b0eb1f-858a-412d-bf8f-d0ec21b06f67-var-run-ovn\") pod \"ovn-controller-p6hcp-config-72cb5\" (UID: \"a0b0eb1f-858a-412d-bf8f-d0ec21b06f67\") " pod="openstack/ovn-controller-p6hcp-config-72cb5" Jan 31 07:52:32 crc kubenswrapper[4826]: I0131 07:52:32.284940 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wv246\" (UniqueName: 
\"kubernetes.io/projected/a0b0eb1f-858a-412d-bf8f-d0ec21b06f67-kube-api-access-wv246\") pod \"ovn-controller-p6hcp-config-72cb5\" (UID: \"a0b0eb1f-858a-412d-bf8f-d0ec21b06f67\") " pod="openstack/ovn-controller-p6hcp-config-72cb5" Jan 31 07:52:32 crc kubenswrapper[4826]: I0131 07:52:32.366715 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-p6hcp-config-72cb5" Jan 31 07:52:32 crc kubenswrapper[4826]: I0131 07:52:32.841596 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a36b733b-b6d9-461a-82a2-0491b65f1f92" path="/var/lib/kubelet/pods/a36b733b-b6d9-461a-82a2-0491b65f1f92/volumes" Jan 31 07:52:32 crc kubenswrapper[4826]: I0131 07:52:32.922197 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-7548-account-create-update-9m89j"] Jan 31 07:52:32 crc kubenswrapper[4826]: I0131 07:52:32.941157 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-9c4f-account-create-update-gccqs"] Jan 31 07:52:32 crc kubenswrapper[4826]: W0131 07:52:32.956246 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod898f03be_9509_4645_b54c_bf988d058b35.slice/crio-71eb69c2e19c02154fe9b4e1a7c65ccb74844c8ee58c4e9d0bef2b0998f3ec4c WatchSource:0}: Error finding container 71eb69c2e19c02154fe9b4e1a7c65ccb74844c8ee58c4e9d0bef2b0998f3ec4c: Status 404 returned error can't find the container with id 71eb69c2e19c02154fe9b4e1a7c65ccb74844c8ee58c4e9d0bef2b0998f3ec4c Jan 31 07:52:32 crc kubenswrapper[4826]: I0131 07:52:32.978319 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9c4f-account-create-update-gccqs" event={"ID":"7f612133-f0fe-4418-be06-d50f6df59ea7","Type":"ContainerStarted","Data":"c221274bb008125ef90e3e2736d7a4ddc5ea5247b4be1b79ec24adbdf4026824"} Jan 31 07:52:32 crc kubenswrapper[4826]: I0131 07:52:32.983576 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7548-account-create-update-9m89j" event={"ID":"898f03be-9509-4645-b54c-bf988d058b35","Type":"ContainerStarted","Data":"71eb69c2e19c02154fe9b4e1a7c65ccb74844c8ee58c4e9d0bef2b0998f3ec4c"} Jan 31 07:52:33 crc kubenswrapper[4826]: I0131 07:52:33.029549 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-h4scr"] Jan 31 07:52:33 crc kubenswrapper[4826]: I0131 07:52:33.199378 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-xw82h"] Jan 31 07:52:33 crc kubenswrapper[4826]: W0131 07:52:33.205691 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34349938_3d1a_4df5_a6a2_b43beedb876f.slice/crio-d4e6234b4d6d7f7c801fa7ec279b3866c73c2dc677bcc2e239023eeef2016ec7 WatchSource:0}: Error finding container d4e6234b4d6d7f7c801fa7ec279b3866c73c2dc677bcc2e239023eeef2016ec7: Status 404 returned error can't find the container with id d4e6234b4d6d7f7c801fa7ec279b3866c73c2dc677bcc2e239023eeef2016ec7 Jan 31 07:52:33 crc kubenswrapper[4826]: I0131 07:52:33.206071 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-fbmh2"] Jan 31 07:52:33 crc kubenswrapper[4826]: I0131 07:52:33.213363 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-cjn6r"] Jan 31 07:52:33 crc kubenswrapper[4826]: W0131 07:52:33.224190 4826 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb83cf8ec_b8c1_4364_8dc3_e5a14e2cb65b.slice/crio-7d37533cf75ff33ba19fcb6d1953117437c2a4c547d36ba97ccf9e0a97f780de WatchSource:0}: Error finding container 7d37533cf75ff33ba19fcb6d1953117437c2a4c547d36ba97ccf9e0a97f780de: Status 404 returned error can't find the container with id 7d37533cf75ff33ba19fcb6d1953117437c2a4c547d36ba97ccf9e0a97f780de Jan 31 07:52:33 crc kubenswrapper[4826]: W0131 07:52:33.231138 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfa04c1e_e73b_4ba1_b5b3_13bc4099c9e2.slice/crio-b42c790ee352f759096aedd2b237d93d37a93a2d735312b44d3542d278687357 WatchSource:0}: Error finding container b42c790ee352f759096aedd2b237d93d37a93a2d735312b44d3542d278687357: Status 404 returned error can't find the container with id b42c790ee352f759096aedd2b237d93d37a93a2d735312b44d3542d278687357 Jan 31 07:52:33 crc kubenswrapper[4826]: I0131 07:52:33.403143 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-p6hcp-config-72cb5"] Jan 31 07:52:33 crc kubenswrapper[4826]: I0131 07:52:33.418208 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-0f7c-account-create-update-lr26l"] Jan 31 07:52:33 crc kubenswrapper[4826]: I0131 07:52:33.997058 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-x66vb" event={"ID":"4a716e83-1782-4038-b26c-7a2d7ed6095d","Type":"ContainerStarted","Data":"dcb54ceb14638425d7c3e1bd2540c556cff55386971c0e3c2230498a22c75892"} Jan 31 07:52:34 crc kubenswrapper[4826]: I0131 07:52:34.000181 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-h4scr" event={"ID":"d319859f-f891-4be6-af4b-a067d72a9726","Type":"ContainerStarted","Data":"8fd4de5a7b3e50abd06757552d91dd4e5cdb82091aade017571461bf0dfed16b"} Jan 31 07:52:34 crc kubenswrapper[4826]: I0131 07:52:34.001704 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-p6hcp-config-72cb5" event={"ID":"a0b0eb1f-858a-412d-bf8f-d0ec21b06f67","Type":"ContainerStarted","Data":"6beb603b7da51c8a6c266fb536972f1369d119646c4390dd1f5d20a857d311f5"} Jan 31 07:52:34 crc kubenswrapper[4826]: I0131 07:52:34.002761 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-cjn6r" event={"ID":"b83cf8ec-b8c1-4364-8dc3-e5a14e2cb65b","Type":"ContainerStarted","Data":"7d37533cf75ff33ba19fcb6d1953117437c2a4c547d36ba97ccf9e0a97f780de"} Jan 31 07:52:34 crc kubenswrapper[4826]: I0131 07:52:34.003890 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-xw82h" event={"ID":"34349938-3d1a-4df5-a6a2-b43beedb876f","Type":"ContainerStarted","Data":"d4e6234b4d6d7f7c801fa7ec279b3866c73c2dc677bcc2e239023eeef2016ec7"} Jan 31 07:52:34 crc kubenswrapper[4826]: I0131 07:52:34.005052 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-fbmh2" event={"ID":"bfa04c1e-e73b-4ba1-b5b3-13bc4099c9e2","Type":"ContainerStarted","Data":"b42c790ee352f759096aedd2b237d93d37a93a2d735312b44d3542d278687357"} Jan 31 07:52:34 crc kubenswrapper[4826]: I0131 07:52:34.007117 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0f7c-account-create-update-lr26l" event={"ID":"850f48ed-5da5-420a-8c60-20a5af3352b1","Type":"ContainerStarted","Data":"3ecd79a9f230a91e281591dcb318f0e963dfe738799d884052cfc09f5a01d5b8"} Jan 31 07:52:35 crc kubenswrapper[4826]: I0131 
07:52:35.016707 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7548-account-create-update-9m89j" event={"ID":"898f03be-9509-4645-b54c-bf988d058b35","Type":"ContainerStarted","Data":"8975f27dd1393cf0ffcb8624c6e1455bff5a5cc07d3a5146dcade2569f0192b8"} Jan 31 07:52:35 crc kubenswrapper[4826]: I0131 07:52:35.018656 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0f7c-account-create-update-lr26l" event={"ID":"850f48ed-5da5-420a-8c60-20a5af3352b1","Type":"ContainerStarted","Data":"6e5160ba12607769c5fe1222398e91b6fdf4f4ed5ea3261b33a71f78e9b62d8e"} Jan 31 07:52:35 crc kubenswrapper[4826]: I0131 07:52:35.020063 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-h4scr" event={"ID":"d319859f-f891-4be6-af4b-a067d72a9726","Type":"ContainerStarted","Data":"be65d66a233cd00a88dd48466b6167ba4ac3e8397761529c95b6d2e17069f9d3"} Jan 31 07:52:35 crc kubenswrapper[4826]: I0131 07:52:35.021567 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-p6hcp-config-72cb5" event={"ID":"a0b0eb1f-858a-412d-bf8f-d0ec21b06f67","Type":"ContainerStarted","Data":"8fd6cad89ca037250a0de698befb6ce82813ac56da3a240b5b01e27ef26f777e"} Jan 31 07:52:35 crc kubenswrapper[4826]: I0131 07:52:35.022859 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9c4f-account-create-update-gccqs" event={"ID":"7f612133-f0fe-4418-be06-d50f6df59ea7","Type":"ContainerStarted","Data":"ff91b9015bc6e98ca373c23c5552e404a0ac65d759c64082788acecfffb44106"} Jan 31 07:52:35 crc kubenswrapper[4826]: I0131 07:52:35.024926 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-cjn6r" event={"ID":"b83cf8ec-b8c1-4364-8dc3-e5a14e2cb65b","Type":"ContainerStarted","Data":"ca2b2b1f3f77e0b002878a601fa78cc2cca5bf7dbfefae130994b2cf03675936"} Jan 31 07:52:35 crc kubenswrapper[4826]: I0131 07:52:35.026603 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-xw82h" event={"ID":"34349938-3d1a-4df5-a6a2-b43beedb876f","Type":"ContainerStarted","Data":"05e5ae902e9a3a42bf0327ca48436279bf82124a19ed3c715e48083592dc5445"} Jan 31 07:52:35 crc kubenswrapper[4826]: I0131 07:52:35.037732 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-7548-account-create-update-9m89j" podStartSLOduration=4.037710005 podStartE2EDuration="4.037710005s" podCreationTimestamp="2026-01-31 07:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:52:35.036878971 +0000 UTC m=+986.890765330" watchObservedRunningTime="2026-01-31 07:52:35.037710005 +0000 UTC m=+986.891596364" Jan 31 07:52:35 crc kubenswrapper[4826]: I0131 07:52:35.076192 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-xw82h" podStartSLOduration=4.076172289 podStartE2EDuration="4.076172289s" podCreationTimestamp="2026-01-31 07:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:52:35.067903346 +0000 UTC m=+986.921789715" watchObservedRunningTime="2026-01-31 07:52:35.076172289 +0000 UTC m=+986.930058648" Jan 31 07:52:35 crc kubenswrapper[4826]: I0131 07:52:35.077261 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-x66vb" podStartSLOduration=3.878834646 
podStartE2EDuration="21.07725332s" podCreationTimestamp="2026-01-31 07:52:14 +0000 UTC" firstStartedPulling="2026-01-31 07:52:15.043661003 +0000 UTC m=+966.897547362" lastFinishedPulling="2026-01-31 07:52:32.242079677 +0000 UTC m=+984.095966036" observedRunningTime="2026-01-31 07:52:35.05067171 +0000 UTC m=+986.904558069" watchObservedRunningTime="2026-01-31 07:52:35.07725332 +0000 UTC m=+986.931139679" Jan 31 07:52:36 crc kubenswrapper[4826]: I0131 07:52:36.068922 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-p6hcp-config-72cb5" podStartSLOduration=4.068897058 podStartE2EDuration="4.068897058s" podCreationTimestamp="2026-01-31 07:52:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:52:36.063364982 +0000 UTC m=+987.917251351" watchObservedRunningTime="2026-01-31 07:52:36.068897058 +0000 UTC m=+987.922783427" Jan 31 07:52:36 crc kubenswrapper[4826]: I0131 07:52:36.099646 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-9c4f-account-create-update-gccqs" podStartSLOduration=5.099620815 podStartE2EDuration="5.099620815s" podCreationTimestamp="2026-01-31 07:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:52:36.092543465 +0000 UTC m=+987.946429864" watchObservedRunningTime="2026-01-31 07:52:36.099620815 +0000 UTC m=+987.953507204" Jan 31 07:52:36 crc kubenswrapper[4826]: I0131 07:52:36.116837 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-0f7c-account-create-update-lr26l" podStartSLOduration=5.11681274 podStartE2EDuration="5.11681274s" podCreationTimestamp="2026-01-31 07:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:52:36.112422806 +0000 UTC m=+987.966309165" watchObservedRunningTime="2026-01-31 07:52:36.11681274 +0000 UTC m=+987.970699109" Jan 31 07:52:37 crc kubenswrapper[4826]: I0131 07:52:37.049247 4826 generic.go:334] "Generic (PLEG): container finished" podID="a0b0eb1f-858a-412d-bf8f-d0ec21b06f67" containerID="8fd6cad89ca037250a0de698befb6ce82813ac56da3a240b5b01e27ef26f777e" exitCode=0 Jan 31 07:52:37 crc kubenswrapper[4826]: I0131 07:52:37.049483 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-p6hcp-config-72cb5" event={"ID":"a0b0eb1f-858a-412d-bf8f-d0ec21b06f67","Type":"ContainerDied","Data":"8fd6cad89ca037250a0de698befb6ce82813ac56da3a240b5b01e27ef26f777e"} Jan 31 07:52:37 crc kubenswrapper[4826]: I0131 07:52:37.075558 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-cjn6r" podStartSLOduration=6.07554003 podStartE2EDuration="6.07554003s" podCreationTimestamp="2026-01-31 07:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:52:37.06952121 +0000 UTC m=+988.923407569" watchObservedRunningTime="2026-01-31 07:52:37.07554003 +0000 UTC m=+988.929426389" Jan 31 07:52:37 crc kubenswrapper[4826]: I0131 07:52:37.093012 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-h4scr" podStartSLOduration=6.09293568 podStartE2EDuration="6.09293568s" podCreationTimestamp="2026-01-31 07:52:31 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:52:37.081198559 +0000 UTC m=+988.935084918" watchObservedRunningTime="2026-01-31 07:52:37.09293568 +0000 UTC m=+988.946822039" Jan 31 07:52:38 crc kubenswrapper[4826]: I0131 07:52:38.060246 4826 generic.go:334] "Generic (PLEG): container finished" podID="d319859f-f891-4be6-af4b-a067d72a9726" containerID="be65d66a233cd00a88dd48466b6167ba4ac3e8397761529c95b6d2e17069f9d3" exitCode=0 Jan 31 07:52:38 crc kubenswrapper[4826]: I0131 07:52:38.060382 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-h4scr" event={"ID":"d319859f-f891-4be6-af4b-a067d72a9726","Type":"ContainerDied","Data":"be65d66a233cd00a88dd48466b6167ba4ac3e8397761529c95b6d2e17069f9d3"} Jan 31 07:52:38 crc kubenswrapper[4826]: I0131 07:52:38.062564 4826 generic.go:334] "Generic (PLEG): container finished" podID="b83cf8ec-b8c1-4364-8dc3-e5a14e2cb65b" containerID="ca2b2b1f3f77e0b002878a601fa78cc2cca5bf7dbfefae130994b2cf03675936" exitCode=0 Jan 31 07:52:38 crc kubenswrapper[4826]: I0131 07:52:38.062705 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-cjn6r" event={"ID":"b83cf8ec-b8c1-4364-8dc3-e5a14e2cb65b","Type":"ContainerDied","Data":"ca2b2b1f3f77e0b002878a601fa78cc2cca5bf7dbfefae130994b2cf03675936"} Jan 31 07:52:38 crc kubenswrapper[4826]: I0131 07:52:38.065295 4826 generic.go:334] "Generic (PLEG): container finished" podID="34349938-3d1a-4df5-a6a2-b43beedb876f" containerID="05e5ae902e9a3a42bf0327ca48436279bf82124a19ed3c715e48083592dc5445" exitCode=0 Jan 31 07:52:38 crc kubenswrapper[4826]: I0131 07:52:38.065419 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-xw82h" event={"ID":"34349938-3d1a-4df5-a6a2-b43beedb876f","Type":"ContainerDied","Data":"05e5ae902e9a3a42bf0327ca48436279bf82124a19ed3c715e48083592dc5445"} Jan 31 07:52:39 crc kubenswrapper[4826]: I0131 07:52:39.086907 4826 generic.go:334] "Generic (PLEG): container finished" podID="898f03be-9509-4645-b54c-bf988d058b35" containerID="8975f27dd1393cf0ffcb8624c6e1455bff5a5cc07d3a5146dcade2569f0192b8" exitCode=0 Jan 31 07:52:39 crc kubenswrapper[4826]: I0131 07:52:39.086956 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7548-account-create-update-9m89j" event={"ID":"898f03be-9509-4645-b54c-bf988d058b35","Type":"ContainerDied","Data":"8975f27dd1393cf0ffcb8624c6e1455bff5a5cc07d3a5146dcade2569f0192b8"} Jan 31 07:52:39 crc kubenswrapper[4826]: I0131 07:52:39.090293 4826 generic.go:334] "Generic (PLEG): container finished" podID="850f48ed-5da5-420a-8c60-20a5af3352b1" containerID="6e5160ba12607769c5fe1222398e91b6fdf4f4ed5ea3261b33a71f78e9b62d8e" exitCode=0 Jan 31 07:52:39 crc kubenswrapper[4826]: I0131 07:52:39.090359 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0f7c-account-create-update-lr26l" event={"ID":"850f48ed-5da5-420a-8c60-20a5af3352b1","Type":"ContainerDied","Data":"6e5160ba12607769c5fe1222398e91b6fdf4f4ed5ea3261b33a71f78e9b62d8e"} Jan 31 07:52:39 crc kubenswrapper[4826]: I0131 07:52:39.111007 4826 generic.go:334] "Generic (PLEG): container finished" podID="7f612133-f0fe-4418-be06-d50f6df59ea7" containerID="ff91b9015bc6e98ca373c23c5552e404a0ac65d759c64082788acecfffb44106" exitCode=0 Jan 31 07:52:39 crc kubenswrapper[4826]: I0131 07:52:39.111200 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-9c4f-account-create-update-gccqs" event={"ID":"7f612133-f0fe-4418-be06-d50f6df59ea7","Type":"ContainerDied","Data":"ff91b9015bc6e98ca373c23c5552e404a0ac65d759c64082788acecfffb44106"} Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.779206 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-p6hcp-config-72cb5" Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.821913 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-cjn6r" Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.850839 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9c4f-account-create-update-gccqs" Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.859478 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7548-account-create-update-9m89j" Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.863956 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-xw82h" Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.873056 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a0b0eb1f-858a-412d-bf8f-d0ec21b06f67-var-run-ovn\") pod \"a0b0eb1f-858a-412d-bf8f-d0ec21b06f67\" (UID: \"a0b0eb1f-858a-412d-bf8f-d0ec21b06f67\") " Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.873112 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a0b0eb1f-858a-412d-bf8f-d0ec21b06f67-var-log-ovn\") pod \"a0b0eb1f-858a-412d-bf8f-d0ec21b06f67\" (UID: \"a0b0eb1f-858a-412d-bf8f-d0ec21b06f67\") " Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.873157 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/898f03be-9509-4645-b54c-bf988d058b35-operator-scripts\") pod \"898f03be-9509-4645-b54c-bf988d058b35\" (UID: \"898f03be-9509-4645-b54c-bf988d058b35\") " Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.873165 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a0b0eb1f-858a-412d-bf8f-d0ec21b06f67-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "a0b0eb1f-858a-412d-bf8f-d0ec21b06f67" (UID: "a0b0eb1f-858a-412d-bf8f-d0ec21b06f67"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.873223 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a0b0eb1f-858a-412d-bf8f-d0ec21b06f67-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "a0b0eb1f-858a-412d-bf8f-d0ec21b06f67" (UID: "a0b0eb1f-858a-412d-bf8f-d0ec21b06f67"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.873234 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34349938-3d1a-4df5-a6a2-b43beedb876f-operator-scripts\") pod \"34349938-3d1a-4df5-a6a2-b43beedb876f\" (UID: \"34349938-3d1a-4df5-a6a2-b43beedb876f\") " Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.873300 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a0b0eb1f-858a-412d-bf8f-d0ec21b06f67-var-run\") pod \"a0b0eb1f-858a-412d-bf8f-d0ec21b06f67\" (UID: \"a0b0eb1f-858a-412d-bf8f-d0ec21b06f67\") " Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.873329 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wv246\" (UniqueName: \"kubernetes.io/projected/a0b0eb1f-858a-412d-bf8f-d0ec21b06f67-kube-api-access-wv246\") pod \"a0b0eb1f-858a-412d-bf8f-d0ec21b06f67\" (UID: \"a0b0eb1f-858a-412d-bf8f-d0ec21b06f67\") " Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.873362 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a0b0eb1f-858a-412d-bf8f-d0ec21b06f67-var-run" (OuterVolumeSpecName: "var-run") pod "a0b0eb1f-858a-412d-bf8f-d0ec21b06f67" (UID: "a0b0eb1f-858a-412d-bf8f-d0ec21b06f67"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.873415 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b83cf8ec-b8c1-4364-8dc3-e5a14e2cb65b-operator-scripts\") pod \"b83cf8ec-b8c1-4364-8dc3-e5a14e2cb65b\" (UID: \"b83cf8ec-b8c1-4364-8dc3-e5a14e2cb65b\") " Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.873439 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9696r\" (UniqueName: \"kubernetes.io/projected/898f03be-9509-4645-b54c-bf988d058b35-kube-api-access-9696r\") pod \"898f03be-9509-4645-b54c-bf988d058b35\" (UID: \"898f03be-9509-4645-b54c-bf988d058b35\") " Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.873474 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a0b0eb1f-858a-412d-bf8f-d0ec21b06f67-additional-scripts\") pod \"a0b0eb1f-858a-412d-bf8f-d0ec21b06f67\" (UID: \"a0b0eb1f-858a-412d-bf8f-d0ec21b06f67\") " Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.874040 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/898f03be-9509-4645-b54c-bf988d058b35-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "898f03be-9509-4645-b54c-bf988d058b35" (UID: "898f03be-9509-4645-b54c-bf988d058b35"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.874122 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34349938-3d1a-4df5-a6a2-b43beedb876f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "34349938-3d1a-4df5-a6a2-b43beedb876f" (UID: "34349938-3d1a-4df5-a6a2-b43beedb876f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.874128 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b83cf8ec-b8c1-4364-8dc3-e5a14e2cb65b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b83cf8ec-b8c1-4364-8dc3-e5a14e2cb65b" (UID: "b83cf8ec-b8c1-4364-8dc3-e5a14e2cb65b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.875730 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0b0eb1f-858a-412d-bf8f-d0ec21b06f67-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "a0b0eb1f-858a-412d-bf8f-d0ec21b06f67" (UID: "a0b0eb1f-858a-412d-bf8f-d0ec21b06f67"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.876198 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f612133-f0fe-4418-be06-d50f6df59ea7-operator-scripts\") pod \"7f612133-f0fe-4418-be06-d50f6df59ea7\" (UID: \"7f612133-f0fe-4418-be06-d50f6df59ea7\") " Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.876265 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a0b0eb1f-858a-412d-bf8f-d0ec21b06f67-scripts\") pod \"a0b0eb1f-858a-412d-bf8f-d0ec21b06f67\" (UID: \"a0b0eb1f-858a-412d-bf8f-d0ec21b06f67\") " Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.876399 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsnh5\" (UniqueName: \"kubernetes.io/projected/34349938-3d1a-4df5-a6a2-b43beedb876f-kube-api-access-zsnh5\") pod \"34349938-3d1a-4df5-a6a2-b43beedb876f\" (UID: \"34349938-3d1a-4df5-a6a2-b43beedb876f\") " Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.876522 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjnxr\" (UniqueName: \"kubernetes.io/projected/7f612133-f0fe-4418-be06-d50f6df59ea7-kube-api-access-vjnxr\") pod \"7f612133-f0fe-4418-be06-d50f6df59ea7\" (UID: \"7f612133-f0fe-4418-be06-d50f6df59ea7\") " Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.876879 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0f7c-account-create-update-lr26l" Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.877008 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2x7p\" (UniqueName: \"kubernetes.io/projected/b83cf8ec-b8c1-4364-8dc3-e5a14e2cb65b-kube-api-access-q2x7p\") pod \"b83cf8ec-b8c1-4364-8dc3-e5a14e2cb65b\" (UID: \"b83cf8ec-b8c1-4364-8dc3-e5a14e2cb65b\") " Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.877257 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f612133-f0fe-4418-be06-d50f6df59ea7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7f612133-f0fe-4418-be06-d50f6df59ea7" (UID: "7f612133-f0fe-4418-be06-d50f6df59ea7"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.877675 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0b0eb1f-858a-412d-bf8f-d0ec21b06f67-scripts" (OuterVolumeSpecName: "scripts") pod "a0b0eb1f-858a-412d-bf8f-d0ec21b06f67" (UID: "a0b0eb1f-858a-412d-bf8f-d0ec21b06f67"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.878771 4826 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a0b0eb1f-858a-412d-bf8f-d0ec21b06f67-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.878797 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f612133-f0fe-4418-be06-d50f6df59ea7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.878809 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a0b0eb1f-858a-412d-bf8f-d0ec21b06f67-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.878843 4826 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a0b0eb1f-858a-412d-bf8f-d0ec21b06f67-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.878860 4826 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a0b0eb1f-858a-412d-bf8f-d0ec21b06f67-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.878872 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/898f03be-9509-4645-b54c-bf988d058b35-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.878884 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34349938-3d1a-4df5-a6a2-b43beedb876f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.878895 4826 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a0b0eb1f-858a-412d-bf8f-d0ec21b06f67-var-run\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.878908 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b83cf8ec-b8c1-4364-8dc3-e5a14e2cb65b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.882236 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/898f03be-9509-4645-b54c-bf988d058b35-kube-api-access-9696r" (OuterVolumeSpecName: "kube-api-access-9696r") pod "898f03be-9509-4645-b54c-bf988d058b35" (UID: "898f03be-9509-4645-b54c-bf988d058b35"). InnerVolumeSpecName "kube-api-access-9696r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.882276 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f612133-f0fe-4418-be06-d50f6df59ea7-kube-api-access-vjnxr" (OuterVolumeSpecName: "kube-api-access-vjnxr") pod "7f612133-f0fe-4418-be06-d50f6df59ea7" (UID: "7f612133-f0fe-4418-be06-d50f6df59ea7"). InnerVolumeSpecName "kube-api-access-vjnxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.882311 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34349938-3d1a-4df5-a6a2-b43beedb876f-kube-api-access-zsnh5" (OuterVolumeSpecName: "kube-api-access-zsnh5") pod "34349938-3d1a-4df5-a6a2-b43beedb876f" (UID: "34349938-3d1a-4df5-a6a2-b43beedb876f"). InnerVolumeSpecName "kube-api-access-zsnh5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.882334 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0b0eb1f-858a-412d-bf8f-d0ec21b06f67-kube-api-access-wv246" (OuterVolumeSpecName: "kube-api-access-wv246") pod "a0b0eb1f-858a-412d-bf8f-d0ec21b06f67" (UID: "a0b0eb1f-858a-412d-bf8f-d0ec21b06f67"). InnerVolumeSpecName "kube-api-access-wv246". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.884226 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b83cf8ec-b8c1-4364-8dc3-e5a14e2cb65b-kube-api-access-q2x7p" (OuterVolumeSpecName: "kube-api-access-q2x7p") pod "b83cf8ec-b8c1-4364-8dc3-e5a14e2cb65b" (UID: "b83cf8ec-b8c1-4364-8dc3-e5a14e2cb65b"). InnerVolumeSpecName "kube-api-access-q2x7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.892633 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-h4scr" Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.979512 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2g68\" (UniqueName: \"kubernetes.io/projected/d319859f-f891-4be6-af4b-a067d72a9726-kube-api-access-z2g68\") pod \"d319859f-f891-4be6-af4b-a067d72a9726\" (UID: \"d319859f-f891-4be6-af4b-a067d72a9726\") " Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.979895 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg579\" (UniqueName: \"kubernetes.io/projected/850f48ed-5da5-420a-8c60-20a5af3352b1-kube-api-access-mg579\") pod \"850f48ed-5da5-420a-8c60-20a5af3352b1\" (UID: \"850f48ed-5da5-420a-8c60-20a5af3352b1\") " Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.979923 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d319859f-f891-4be6-af4b-a067d72a9726-operator-scripts\") pod \"d319859f-f891-4be6-af4b-a067d72a9726\" (UID: \"d319859f-f891-4be6-af4b-a067d72a9726\") " Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.979987 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/850f48ed-5da5-420a-8c60-20a5af3352b1-operator-scripts\") pod \"850f48ed-5da5-420a-8c60-20a5af3352b1\" (UID: \"850f48ed-5da5-420a-8c60-20a5af3352b1\") " Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.980390 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjnxr\" (UniqueName: \"kubernetes.io/projected/7f612133-f0fe-4418-be06-d50f6df59ea7-kube-api-access-vjnxr\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.980414 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2x7p\" (UniqueName: \"kubernetes.io/projected/b83cf8ec-b8c1-4364-8dc3-e5a14e2cb65b-kube-api-access-q2x7p\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.980428 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wv246\" (UniqueName: \"kubernetes.io/projected/a0b0eb1f-858a-412d-bf8f-d0ec21b06f67-kube-api-access-wv246\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.980441 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9696r\" (UniqueName: \"kubernetes.io/projected/898f03be-9509-4645-b54c-bf988d058b35-kube-api-access-9696r\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.980453 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zsnh5\" (UniqueName: \"kubernetes.io/projected/34349938-3d1a-4df5-a6a2-b43beedb876f-kube-api-access-zsnh5\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.980955 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/850f48ed-5da5-420a-8c60-20a5af3352b1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "850f48ed-5da5-420a-8c60-20a5af3352b1" (UID: "850f48ed-5da5-420a-8c60-20a5af3352b1"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.982508 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d319859f-f891-4be6-af4b-a067d72a9726-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d319859f-f891-4be6-af4b-a067d72a9726" (UID: "d319859f-f891-4be6-af4b-a067d72a9726"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.984930 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d319859f-f891-4be6-af4b-a067d72a9726-kube-api-access-z2g68" (OuterVolumeSpecName: "kube-api-access-z2g68") pod "d319859f-f891-4be6-af4b-a067d72a9726" (UID: "d319859f-f891-4be6-af4b-a067d72a9726"). InnerVolumeSpecName "kube-api-access-z2g68". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:52:40 crc kubenswrapper[4826]: I0131 07:52:40.985541 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/850f48ed-5da5-420a-8c60-20a5af3352b1-kube-api-access-mg579" (OuterVolumeSpecName: "kube-api-access-mg579") pod "850f48ed-5da5-420a-8c60-20a5af3352b1" (UID: "850f48ed-5da5-420a-8c60-20a5af3352b1"). InnerVolumeSpecName "kube-api-access-mg579". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:52:41 crc kubenswrapper[4826]: I0131 07:52:41.081635 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2g68\" (UniqueName: \"kubernetes.io/projected/d319859f-f891-4be6-af4b-a067d72a9726-kube-api-access-z2g68\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:41 crc kubenswrapper[4826]: I0131 07:52:41.081678 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg579\" (UniqueName: \"kubernetes.io/projected/850f48ed-5da5-420a-8c60-20a5af3352b1-kube-api-access-mg579\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:41 crc kubenswrapper[4826]: I0131 07:52:41.081691 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d319859f-f891-4be6-af4b-a067d72a9726-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:41 crc kubenswrapper[4826]: I0131 07:52:41.081703 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/850f48ed-5da5-420a-8c60-20a5af3352b1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:41 crc kubenswrapper[4826]: I0131 07:52:41.138042 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-p6hcp-config-72cb5" event={"ID":"a0b0eb1f-858a-412d-bf8f-d0ec21b06f67","Type":"ContainerDied","Data":"6beb603b7da51c8a6c266fb536972f1369d119646c4390dd1f5d20a857d311f5"} Jan 31 07:52:41 crc kubenswrapper[4826]: I0131 07:52:41.138100 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6beb603b7da51c8a6c266fb536972f1369d119646c4390dd1f5d20a857d311f5" Jan 31 07:52:41 crc kubenswrapper[4826]: I0131 07:52:41.138372 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-p6hcp-config-72cb5" Jan 31 07:52:41 crc kubenswrapper[4826]: I0131 07:52:41.144076 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9c4f-account-create-update-gccqs" event={"ID":"7f612133-f0fe-4418-be06-d50f6df59ea7","Type":"ContainerDied","Data":"c221274bb008125ef90e3e2736d7a4ddc5ea5247b4be1b79ec24adbdf4026824"} Jan 31 07:52:41 crc kubenswrapper[4826]: I0131 07:52:41.144125 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c221274bb008125ef90e3e2736d7a4ddc5ea5247b4be1b79ec24adbdf4026824" Jan 31 07:52:41 crc kubenswrapper[4826]: I0131 07:52:41.144190 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9c4f-account-create-update-gccqs" Jan 31 07:52:41 crc kubenswrapper[4826]: I0131 07:52:41.150583 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-cjn6r" Jan 31 07:52:41 crc kubenswrapper[4826]: I0131 07:52:41.152139 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-cjn6r" event={"ID":"b83cf8ec-b8c1-4364-8dc3-e5a14e2cb65b","Type":"ContainerDied","Data":"7d37533cf75ff33ba19fcb6d1953117437c2a4c547d36ba97ccf9e0a97f780de"} Jan 31 07:52:41 crc kubenswrapper[4826]: I0131 07:52:41.152188 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d37533cf75ff33ba19fcb6d1953117437c2a4c547d36ba97ccf9e0a97f780de" Jan 31 07:52:41 crc kubenswrapper[4826]: I0131 07:52:41.160584 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-xw82h" event={"ID":"34349938-3d1a-4df5-a6a2-b43beedb876f","Type":"ContainerDied","Data":"d4e6234b4d6d7f7c801fa7ec279b3866c73c2dc677bcc2e239023eeef2016ec7"} Jan 31 07:52:41 crc kubenswrapper[4826]: I0131 07:52:41.160633 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4e6234b4d6d7f7c801fa7ec279b3866c73c2dc677bcc2e239023eeef2016ec7" Jan 31 07:52:41 crc kubenswrapper[4826]: I0131 07:52:41.160674 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-xw82h" Jan 31 07:52:41 crc kubenswrapper[4826]: I0131 07:52:41.171607 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-fbmh2" event={"ID":"bfa04c1e-e73b-4ba1-b5b3-13bc4099c9e2","Type":"ContainerStarted","Data":"33cb353f6728c82219d9bc32cca6dd875419c9f47f07aef7e2267c6986be8836"} Jan 31 07:52:41 crc kubenswrapper[4826]: I0131 07:52:41.183126 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7548-account-create-update-9m89j" event={"ID":"898f03be-9509-4645-b54c-bf988d058b35","Type":"ContainerDied","Data":"71eb69c2e19c02154fe9b4e1a7c65ccb74844c8ee58c4e9d0bef2b0998f3ec4c"} Jan 31 07:52:41 crc kubenswrapper[4826]: I0131 07:52:41.183186 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71eb69c2e19c02154fe9b4e1a7c65ccb74844c8ee58c4e9d0bef2b0998f3ec4c" Jan 31 07:52:41 crc kubenswrapper[4826]: I0131 07:52:41.183187 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7548-account-create-update-9m89j" Jan 31 07:52:41 crc kubenswrapper[4826]: I0131 07:52:41.184933 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-0f7c-account-create-update-lr26l" Jan 31 07:52:41 crc kubenswrapper[4826]: I0131 07:52:41.185118 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0f7c-account-create-update-lr26l" event={"ID":"850f48ed-5da5-420a-8c60-20a5af3352b1","Type":"ContainerDied","Data":"3ecd79a9f230a91e281591dcb318f0e963dfe738799d884052cfc09f5a01d5b8"} Jan 31 07:52:41 crc kubenswrapper[4826]: I0131 07:52:41.185177 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ecd79a9f230a91e281591dcb318f0e963dfe738799d884052cfc09f5a01d5b8" Jan 31 07:52:41 crc kubenswrapper[4826]: I0131 07:52:41.189893 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-h4scr" event={"ID":"d319859f-f891-4be6-af4b-a067d72a9726","Type":"ContainerDied","Data":"8fd4de5a7b3e50abd06757552d91dd4e5cdb82091aade017571461bf0dfed16b"} Jan 31 07:52:41 crc kubenswrapper[4826]: I0131 07:52:41.189943 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8fd4de5a7b3e50abd06757552d91dd4e5cdb82091aade017571461bf0dfed16b" Jan 31 07:52:41 crc kubenswrapper[4826]: I0131 07:52:41.190037 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-h4scr" Jan 31 07:52:41 crc kubenswrapper[4826]: I0131 07:52:41.191827 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-fbmh2" podStartSLOduration=2.764461483 podStartE2EDuration="10.191809633s" podCreationTimestamp="2026-01-31 07:52:31 +0000 UTC" firstStartedPulling="2026-01-31 07:52:33.240092375 +0000 UTC m=+985.093978734" lastFinishedPulling="2026-01-31 07:52:40.667440515 +0000 UTC m=+992.521326884" observedRunningTime="2026-01-31 07:52:41.191787393 +0000 UTC m=+993.045673772" watchObservedRunningTime="2026-01-31 07:52:41.191809633 +0000 UTC m=+993.045695982" Jan 31 07:52:41 crc kubenswrapper[4826]: I0131 07:52:41.887556 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-p6hcp-config-72cb5"] Jan 31 07:52:41 crc kubenswrapper[4826]: I0131 07:52:41.899902 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-p6hcp-config-72cb5"] Jan 31 07:52:42 crc kubenswrapper[4826]: I0131 07:52:42.821860 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0b0eb1f-858a-412d-bf8f-d0ec21b06f67" path="/var/lib/kubelet/pods/a0b0eb1f-858a-412d-bf8f-d0ec21b06f67/volumes" Jan 31 07:52:44 crc kubenswrapper[4826]: I0131 07:52:44.219340 4826 generic.go:334] "Generic (PLEG): container finished" podID="bfa04c1e-e73b-4ba1-b5b3-13bc4099c9e2" containerID="33cb353f6728c82219d9bc32cca6dd875419c9f47f07aef7e2267c6986be8836" exitCode=0 Jan 31 07:52:44 crc kubenswrapper[4826]: I0131 07:52:44.219446 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-fbmh2" event={"ID":"bfa04c1e-e73b-4ba1-b5b3-13bc4099c9e2","Type":"ContainerDied","Data":"33cb353f6728c82219d9bc32cca6dd875419c9f47f07aef7e2267c6986be8836"} Jan 31 07:52:44 crc kubenswrapper[4826]: I0131 07:52:44.222935 4826 generic.go:334] "Generic (PLEG): container finished" podID="4a716e83-1782-4038-b26c-7a2d7ed6095d" containerID="dcb54ceb14638425d7c3e1bd2540c556cff55386971c0e3c2230498a22c75892" exitCode=0 Jan 31 07:52:44 crc kubenswrapper[4826]: I0131 07:52:44.223037 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-x66vb" 
event={"ID":"4a716e83-1782-4038-b26c-7a2d7ed6095d","Type":"ContainerDied","Data":"dcb54ceb14638425d7c3e1bd2540c556cff55386971c0e3c2230498a22c75892"} Jan 31 07:52:45 crc kubenswrapper[4826]: I0131 07:52:45.667501 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-fbmh2" Jan 31 07:52:45 crc kubenswrapper[4826]: I0131 07:52:45.673315 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-x66vb" Jan 31 07:52:45 crc kubenswrapper[4826]: I0131 07:52:45.860657 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a716e83-1782-4038-b26c-7a2d7ed6095d-config-data\") pod \"4a716e83-1782-4038-b26c-7a2d7ed6095d\" (UID: \"4a716e83-1782-4038-b26c-7a2d7ed6095d\") " Jan 31 07:52:45 crc kubenswrapper[4826]: I0131 07:52:45.861028 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a716e83-1782-4038-b26c-7a2d7ed6095d-combined-ca-bundle\") pod \"4a716e83-1782-4038-b26c-7a2d7ed6095d\" (UID: \"4a716e83-1782-4038-b26c-7a2d7ed6095d\") " Jan 31 07:52:45 crc kubenswrapper[4826]: I0131 07:52:45.861116 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfa04c1e-e73b-4ba1-b5b3-13bc4099c9e2-config-data\") pod \"bfa04c1e-e73b-4ba1-b5b3-13bc4099c9e2\" (UID: \"bfa04c1e-e73b-4ba1-b5b3-13bc4099c9e2\") " Jan 31 07:52:45 crc kubenswrapper[4826]: I0131 07:52:45.861165 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czsgt\" (UniqueName: \"kubernetes.io/projected/bfa04c1e-e73b-4ba1-b5b3-13bc4099c9e2-kube-api-access-czsgt\") pod \"bfa04c1e-e73b-4ba1-b5b3-13bc4099c9e2\" (UID: \"bfa04c1e-e73b-4ba1-b5b3-13bc4099c9e2\") " Jan 31 07:52:45 crc kubenswrapper[4826]: I0131 07:52:45.861201 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4a716e83-1782-4038-b26c-7a2d7ed6095d-db-sync-config-data\") pod \"4a716e83-1782-4038-b26c-7a2d7ed6095d\" (UID: \"4a716e83-1782-4038-b26c-7a2d7ed6095d\") " Jan 31 07:52:45 crc kubenswrapper[4826]: I0131 07:52:45.861229 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m588r\" (UniqueName: \"kubernetes.io/projected/4a716e83-1782-4038-b26c-7a2d7ed6095d-kube-api-access-m588r\") pod \"4a716e83-1782-4038-b26c-7a2d7ed6095d\" (UID: \"4a716e83-1782-4038-b26c-7a2d7ed6095d\") " Jan 31 07:52:45 crc kubenswrapper[4826]: I0131 07:52:45.861245 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa04c1e-e73b-4ba1-b5b3-13bc4099c9e2-combined-ca-bundle\") pod \"bfa04c1e-e73b-4ba1-b5b3-13bc4099c9e2\" (UID: \"bfa04c1e-e73b-4ba1-b5b3-13bc4099c9e2\") " Jan 31 07:52:45 crc kubenswrapper[4826]: I0131 07:52:45.867563 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a716e83-1782-4038-b26c-7a2d7ed6095d-kube-api-access-m588r" (OuterVolumeSpecName: "kube-api-access-m588r") pod "4a716e83-1782-4038-b26c-7a2d7ed6095d" (UID: "4a716e83-1782-4038-b26c-7a2d7ed6095d"). InnerVolumeSpecName "kube-api-access-m588r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:52:45 crc kubenswrapper[4826]: I0131 07:52:45.867900 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a716e83-1782-4038-b26c-7a2d7ed6095d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "4a716e83-1782-4038-b26c-7a2d7ed6095d" (UID: "4a716e83-1782-4038-b26c-7a2d7ed6095d"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:52:45 crc kubenswrapper[4826]: I0131 07:52:45.891022 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfa04c1e-e73b-4ba1-b5b3-13bc4099c9e2-kube-api-access-czsgt" (OuterVolumeSpecName: "kube-api-access-czsgt") pod "bfa04c1e-e73b-4ba1-b5b3-13bc4099c9e2" (UID: "bfa04c1e-e73b-4ba1-b5b3-13bc4099c9e2"). InnerVolumeSpecName "kube-api-access-czsgt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:52:45 crc kubenswrapper[4826]: I0131 07:52:45.905389 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a716e83-1782-4038-b26c-7a2d7ed6095d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4a716e83-1782-4038-b26c-7a2d7ed6095d" (UID: "4a716e83-1782-4038-b26c-7a2d7ed6095d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:52:45 crc kubenswrapper[4826]: I0131 07:52:45.912146 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfa04c1e-e73b-4ba1-b5b3-13bc4099c9e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bfa04c1e-e73b-4ba1-b5b3-13bc4099c9e2" (UID: "bfa04c1e-e73b-4ba1-b5b3-13bc4099c9e2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:52:45 crc kubenswrapper[4826]: I0131 07:52:45.933086 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a716e83-1782-4038-b26c-7a2d7ed6095d-config-data" (OuterVolumeSpecName: "config-data") pod "4a716e83-1782-4038-b26c-7a2d7ed6095d" (UID: "4a716e83-1782-4038-b26c-7a2d7ed6095d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:52:45 crc kubenswrapper[4826]: I0131 07:52:45.937273 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfa04c1e-e73b-4ba1-b5b3-13bc4099c9e2-config-data" (OuterVolumeSpecName: "config-data") pod "bfa04c1e-e73b-4ba1-b5b3-13bc4099c9e2" (UID: "bfa04c1e-e73b-4ba1-b5b3-13bc4099c9e2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:52:45 crc kubenswrapper[4826]: I0131 07:52:45.962662 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfa04c1e-e73b-4ba1-b5b3-13bc4099c9e2-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:45 crc kubenswrapper[4826]: I0131 07:52:45.962696 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czsgt\" (UniqueName: \"kubernetes.io/projected/bfa04c1e-e73b-4ba1-b5b3-13bc4099c9e2-kube-api-access-czsgt\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:45 crc kubenswrapper[4826]: I0131 07:52:45.962708 4826 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4a716e83-1782-4038-b26c-7a2d7ed6095d-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:45 crc kubenswrapper[4826]: I0131 07:52:45.962719 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m588r\" (UniqueName: \"kubernetes.io/projected/4a716e83-1782-4038-b26c-7a2d7ed6095d-kube-api-access-m588r\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:45 crc kubenswrapper[4826]: I0131 07:52:45.962729 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa04c1e-e73b-4ba1-b5b3-13bc4099c9e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:45 crc kubenswrapper[4826]: I0131 07:52:45.962737 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a716e83-1782-4038-b26c-7a2d7ed6095d-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:45 crc kubenswrapper[4826]: I0131 07:52:45.962744 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a716e83-1782-4038-b26c-7a2d7ed6095d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.244612 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-fbmh2" event={"ID":"bfa04c1e-e73b-4ba1-b5b3-13bc4099c9e2","Type":"ContainerDied","Data":"b42c790ee352f759096aedd2b237d93d37a93a2d735312b44d3542d278687357"} Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.244685 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b42c790ee352f759096aedd2b237d93d37a93a2d735312b44d3542d278687357" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.244711 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-fbmh2" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.246838 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-x66vb" event={"ID":"4a716e83-1782-4038-b26c-7a2d7ed6095d","Type":"ContainerDied","Data":"477cb2a78d9cd03b5dc1242735cc9993be84eaa81b2e2f4d3bba71705f1acb3c"} Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.246887 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="477cb2a78d9cd03b5dc1242735cc9993be84eaa81b2e2f4d3bba71705f1acb3c" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.246990 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-x66vb" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.546132 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-9vwb9"] Jan 31 07:52:46 crc kubenswrapper[4826]: E0131 07:52:46.546430 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="898f03be-9509-4645-b54c-bf988d058b35" containerName="mariadb-account-create-update" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.546445 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="898f03be-9509-4645-b54c-bf988d058b35" containerName="mariadb-account-create-update" Jan 31 07:52:46 crc kubenswrapper[4826]: E0131 07:52:46.546458 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0b0eb1f-858a-412d-bf8f-d0ec21b06f67" containerName="ovn-config" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.546464 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0b0eb1f-858a-412d-bf8f-d0ec21b06f67" containerName="ovn-config" Jan 31 07:52:46 crc kubenswrapper[4826]: E0131 07:52:46.546478 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d319859f-f891-4be6-af4b-a067d72a9726" containerName="mariadb-database-create" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.546483 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="d319859f-f891-4be6-af4b-a067d72a9726" containerName="mariadb-database-create" Jan 31 07:52:46 crc kubenswrapper[4826]: E0131 07:52:46.546498 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a716e83-1782-4038-b26c-7a2d7ed6095d" containerName="glance-db-sync" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.546504 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a716e83-1782-4038-b26c-7a2d7ed6095d" containerName="glance-db-sync" Jan 31 07:52:46 crc kubenswrapper[4826]: E0131 07:52:46.546512 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b83cf8ec-b8c1-4364-8dc3-e5a14e2cb65b" containerName="mariadb-database-create" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.546518 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="b83cf8ec-b8c1-4364-8dc3-e5a14e2cb65b" containerName="mariadb-database-create" Jan 31 07:52:46 crc kubenswrapper[4826]: E0131 07:52:46.546526 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34349938-3d1a-4df5-a6a2-b43beedb876f" containerName="mariadb-database-create" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.546531 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="34349938-3d1a-4df5-a6a2-b43beedb876f" containerName="mariadb-database-create" Jan 31 07:52:46 crc kubenswrapper[4826]: E0131 07:52:46.546542 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfa04c1e-e73b-4ba1-b5b3-13bc4099c9e2" containerName="keystone-db-sync" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.546548 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfa04c1e-e73b-4ba1-b5b3-13bc4099c9e2" containerName="keystone-db-sync" Jan 31 07:52:46 crc kubenswrapper[4826]: E0131 07:52:46.546560 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f612133-f0fe-4418-be06-d50f6df59ea7" containerName="mariadb-account-create-update" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.546566 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f612133-f0fe-4418-be06-d50f6df59ea7" containerName="mariadb-account-create-update" Jan 31 07:52:46 crc kubenswrapper[4826]: E0131 07:52:46.546573 
4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="850f48ed-5da5-420a-8c60-20a5af3352b1" containerName="mariadb-account-create-update" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.546578 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="850f48ed-5da5-420a-8c60-20a5af3352b1" containerName="mariadb-account-create-update" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.546821 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0b0eb1f-858a-412d-bf8f-d0ec21b06f67" containerName="ovn-config" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.546835 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a716e83-1782-4038-b26c-7a2d7ed6095d" containerName="glance-db-sync" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.546843 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="898f03be-9509-4645-b54c-bf988d058b35" containerName="mariadb-account-create-update" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.546852 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="34349938-3d1a-4df5-a6a2-b43beedb876f" containerName="mariadb-database-create" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.546862 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="d319859f-f891-4be6-af4b-a067d72a9726" containerName="mariadb-database-create" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.546871 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f612133-f0fe-4418-be06-d50f6df59ea7" containerName="mariadb-account-create-update" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.546879 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="850f48ed-5da5-420a-8c60-20a5af3352b1" containerName="mariadb-account-create-update" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.546907 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfa04c1e-e73b-4ba1-b5b3-13bc4099c9e2" containerName="keystone-db-sync" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.546919 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="b83cf8ec-b8c1-4364-8dc3-e5a14e2cb65b" containerName="mariadb-database-create" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.547456 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-9vwb9" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.550073 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.550319 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.550515 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-jzds9" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.550614 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.550950 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.573553 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66bcca9e-8fca-48e3-83e3-0338e35a1f8b-config-data\") pod \"keystone-bootstrap-9vwb9\" (UID: \"66bcca9e-8fca-48e3-83e3-0338e35a1f8b\") " pod="openstack/keystone-bootstrap-9vwb9" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.573601 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/66bcca9e-8fca-48e3-83e3-0338e35a1f8b-fernet-keys\") pod \"keystone-bootstrap-9vwb9\" (UID: \"66bcca9e-8fca-48e3-83e3-0338e35a1f8b\") " pod="openstack/keystone-bootstrap-9vwb9" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.573631 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66bcca9e-8fca-48e3-83e3-0338e35a1f8b-combined-ca-bundle\") pod \"keystone-bootstrap-9vwb9\" (UID: \"66bcca9e-8fca-48e3-83e3-0338e35a1f8b\") " pod="openstack/keystone-bootstrap-9vwb9" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.573660 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66bcca9e-8fca-48e3-83e3-0338e35a1f8b-scripts\") pod \"keystone-bootstrap-9vwb9\" (UID: \"66bcca9e-8fca-48e3-83e3-0338e35a1f8b\") " pod="openstack/keystone-bootstrap-9vwb9" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.573704 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/66bcca9e-8fca-48e3-83e3-0338e35a1f8b-credential-keys\") pod \"keystone-bootstrap-9vwb9\" (UID: \"66bcca9e-8fca-48e3-83e3-0338e35a1f8b\") " pod="openstack/keystone-bootstrap-9vwb9" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.573727 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl5v9\" (UniqueName: \"kubernetes.io/projected/66bcca9e-8fca-48e3-83e3-0338e35a1f8b-kube-api-access-vl5v9\") pod \"keystone-bootstrap-9vwb9\" (UID: \"66bcca9e-8fca-48e3-83e3-0338e35a1f8b\") " pod="openstack/keystone-bootstrap-9vwb9" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.582188 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-66fbd85b65-h2ghp"] Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.583829 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66fbd85b65-h2ghp" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.617547 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66fbd85b65-h2ghp"] Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.635907 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-9vwb9"] Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.678464 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66bcca9e-8fca-48e3-83e3-0338e35a1f8b-scripts\") pod \"keystone-bootstrap-9vwb9\" (UID: \"66bcca9e-8fca-48e3-83e3-0338e35a1f8b\") " pod="openstack/keystone-bootstrap-9vwb9" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.678545 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df631416-65f7-4eb7-b482-031091b9d3bc-config\") pod \"dnsmasq-dns-66fbd85b65-h2ghp\" (UID: \"df631416-65f7-4eb7-b482-031091b9d3bc\") " pod="openstack/dnsmasq-dns-66fbd85b65-h2ghp" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.678571 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/66bcca9e-8fca-48e3-83e3-0338e35a1f8b-credential-keys\") pod \"keystone-bootstrap-9vwb9\" (UID: \"66bcca9e-8fca-48e3-83e3-0338e35a1f8b\") " pod="openstack/keystone-bootstrap-9vwb9" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.678602 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vl5v9\" (UniqueName: \"kubernetes.io/projected/66bcca9e-8fca-48e3-83e3-0338e35a1f8b-kube-api-access-vl5v9\") pod \"keystone-bootstrap-9vwb9\" (UID: \"66bcca9e-8fca-48e3-83e3-0338e35a1f8b\") " pod="openstack/keystone-bootstrap-9vwb9" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.678644 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df631416-65f7-4eb7-b482-031091b9d3bc-ovsdbserver-sb\") pod \"dnsmasq-dns-66fbd85b65-h2ghp\" (UID: \"df631416-65f7-4eb7-b482-031091b9d3bc\") " pod="openstack/dnsmasq-dns-66fbd85b65-h2ghp" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.678683 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df631416-65f7-4eb7-b482-031091b9d3bc-dns-svc\") pod \"dnsmasq-dns-66fbd85b65-h2ghp\" (UID: \"df631416-65f7-4eb7-b482-031091b9d3bc\") " pod="openstack/dnsmasq-dns-66fbd85b65-h2ghp" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.678699 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvmv6\" (UniqueName: \"kubernetes.io/projected/df631416-65f7-4eb7-b482-031091b9d3bc-kube-api-access-pvmv6\") pod \"dnsmasq-dns-66fbd85b65-h2ghp\" (UID: \"df631416-65f7-4eb7-b482-031091b9d3bc\") " pod="openstack/dnsmasq-dns-66fbd85b65-h2ghp" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.678730 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66bcca9e-8fca-48e3-83e3-0338e35a1f8b-config-data\") pod \"keystone-bootstrap-9vwb9\" (UID: \"66bcca9e-8fca-48e3-83e3-0338e35a1f8b\") " pod="openstack/keystone-bootstrap-9vwb9" Jan 31 07:52:46 crc 
kubenswrapper[4826]: I0131 07:52:46.678750 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df631416-65f7-4eb7-b482-031091b9d3bc-ovsdbserver-nb\") pod \"dnsmasq-dns-66fbd85b65-h2ghp\" (UID: \"df631416-65f7-4eb7-b482-031091b9d3bc\") " pod="openstack/dnsmasq-dns-66fbd85b65-h2ghp" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.678776 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/66bcca9e-8fca-48e3-83e3-0338e35a1f8b-fernet-keys\") pod \"keystone-bootstrap-9vwb9\" (UID: \"66bcca9e-8fca-48e3-83e3-0338e35a1f8b\") " pod="openstack/keystone-bootstrap-9vwb9" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.678806 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66bcca9e-8fca-48e3-83e3-0338e35a1f8b-combined-ca-bundle\") pod \"keystone-bootstrap-9vwb9\" (UID: \"66bcca9e-8fca-48e3-83e3-0338e35a1f8b\") " pod="openstack/keystone-bootstrap-9vwb9" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.692289 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66bcca9e-8fca-48e3-83e3-0338e35a1f8b-config-data\") pod \"keystone-bootstrap-9vwb9\" (UID: \"66bcca9e-8fca-48e3-83e3-0338e35a1f8b\") " pod="openstack/keystone-bootstrap-9vwb9" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.697469 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/66bcca9e-8fca-48e3-83e3-0338e35a1f8b-fernet-keys\") pod \"keystone-bootstrap-9vwb9\" (UID: \"66bcca9e-8fca-48e3-83e3-0338e35a1f8b\") " pod="openstack/keystone-bootstrap-9vwb9" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.708310 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66bcca9e-8fca-48e3-83e3-0338e35a1f8b-scripts\") pod \"keystone-bootstrap-9vwb9\" (UID: \"66bcca9e-8fca-48e3-83e3-0338e35a1f8b\") " pod="openstack/keystone-bootstrap-9vwb9" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.716679 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66bcca9e-8fca-48e3-83e3-0338e35a1f8b-combined-ca-bundle\") pod \"keystone-bootstrap-9vwb9\" (UID: \"66bcca9e-8fca-48e3-83e3-0338e35a1f8b\") " pod="openstack/keystone-bootstrap-9vwb9" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.718801 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vl5v9\" (UniqueName: \"kubernetes.io/projected/66bcca9e-8fca-48e3-83e3-0338e35a1f8b-kube-api-access-vl5v9\") pod \"keystone-bootstrap-9vwb9\" (UID: \"66bcca9e-8fca-48e3-83e3-0338e35a1f8b\") " pod="openstack/keystone-bootstrap-9vwb9" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.719137 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/66bcca9e-8fca-48e3-83e3-0338e35a1f8b-credential-keys\") pod \"keystone-bootstrap-9vwb9\" (UID: \"66bcca9e-8fca-48e3-83e3-0338e35a1f8b\") " pod="openstack/keystone-bootstrap-9vwb9" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.726029 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6d999596cf-44trz"] Jan 31 07:52:46 crc 
kubenswrapper[4826]: I0131 07:52:46.727321 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6d999596cf-44trz" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.731116 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-7qm77" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.731459 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.731634 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.731746 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.747458 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6d999596cf-44trz"] Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.783744 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df631416-65f7-4eb7-b482-031091b9d3bc-ovsdbserver-sb\") pod \"dnsmasq-dns-66fbd85b65-h2ghp\" (UID: \"df631416-65f7-4eb7-b482-031091b9d3bc\") " pod="openstack/dnsmasq-dns-66fbd85b65-h2ghp" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.783790 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/aba8088a-cbd7-4797-aef1-cd84d8c28ff8-horizon-secret-key\") pod \"horizon-6d999596cf-44trz\" (UID: \"aba8088a-cbd7-4797-aef1-cd84d8c28ff8\") " pod="openstack/horizon-6d999596cf-44trz" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.783816 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzzx2\" (UniqueName: \"kubernetes.io/projected/aba8088a-cbd7-4797-aef1-cd84d8c28ff8-kube-api-access-hzzx2\") pod \"horizon-6d999596cf-44trz\" (UID: \"aba8088a-cbd7-4797-aef1-cd84d8c28ff8\") " pod="openstack/horizon-6d999596cf-44trz" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.783838 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aba8088a-cbd7-4797-aef1-cd84d8c28ff8-scripts\") pod \"horizon-6d999596cf-44trz\" (UID: \"aba8088a-cbd7-4797-aef1-cd84d8c28ff8\") " pod="openstack/horizon-6d999596cf-44trz" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.783860 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aba8088a-cbd7-4797-aef1-cd84d8c28ff8-logs\") pod \"horizon-6d999596cf-44trz\" (UID: \"aba8088a-cbd7-4797-aef1-cd84d8c28ff8\") " pod="openstack/horizon-6d999596cf-44trz" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.783884 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df631416-65f7-4eb7-b482-031091b9d3bc-dns-svc\") pod \"dnsmasq-dns-66fbd85b65-h2ghp\" (UID: \"df631416-65f7-4eb7-b482-031091b9d3bc\") " pod="openstack/dnsmasq-dns-66fbd85b65-h2ghp" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.783904 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvmv6\" (UniqueName: 
\"kubernetes.io/projected/df631416-65f7-4eb7-b482-031091b9d3bc-kube-api-access-pvmv6\") pod \"dnsmasq-dns-66fbd85b65-h2ghp\" (UID: \"df631416-65f7-4eb7-b482-031091b9d3bc\") " pod="openstack/dnsmasq-dns-66fbd85b65-h2ghp" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.783928 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df631416-65f7-4eb7-b482-031091b9d3bc-ovsdbserver-nb\") pod \"dnsmasq-dns-66fbd85b65-h2ghp\" (UID: \"df631416-65f7-4eb7-b482-031091b9d3bc\") " pod="openstack/dnsmasq-dns-66fbd85b65-h2ghp" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.783990 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aba8088a-cbd7-4797-aef1-cd84d8c28ff8-config-data\") pod \"horizon-6d999596cf-44trz\" (UID: \"aba8088a-cbd7-4797-aef1-cd84d8c28ff8\") " pod="openstack/horizon-6d999596cf-44trz" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.784057 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df631416-65f7-4eb7-b482-031091b9d3bc-config\") pod \"dnsmasq-dns-66fbd85b65-h2ghp\" (UID: \"df631416-65f7-4eb7-b482-031091b9d3bc\") " pod="openstack/dnsmasq-dns-66fbd85b65-h2ghp" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.784917 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df631416-65f7-4eb7-b482-031091b9d3bc-config\") pod \"dnsmasq-dns-66fbd85b65-h2ghp\" (UID: \"df631416-65f7-4eb7-b482-031091b9d3bc\") " pod="openstack/dnsmasq-dns-66fbd85b65-h2ghp" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.784918 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df631416-65f7-4eb7-b482-031091b9d3bc-ovsdbserver-sb\") pod \"dnsmasq-dns-66fbd85b65-h2ghp\" (UID: \"df631416-65f7-4eb7-b482-031091b9d3bc\") " pod="openstack/dnsmasq-dns-66fbd85b65-h2ghp" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.785407 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df631416-65f7-4eb7-b482-031091b9d3bc-dns-svc\") pod \"dnsmasq-dns-66fbd85b65-h2ghp\" (UID: \"df631416-65f7-4eb7-b482-031091b9d3bc\") " pod="openstack/dnsmasq-dns-66fbd85b65-h2ghp" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.786189 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df631416-65f7-4eb7-b482-031091b9d3bc-ovsdbserver-nb\") pod \"dnsmasq-dns-66fbd85b65-h2ghp\" (UID: \"df631416-65f7-4eb7-b482-031091b9d3bc\") " pod="openstack/dnsmasq-dns-66fbd85b65-h2ghp" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.860765 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvmv6\" (UniqueName: \"kubernetes.io/projected/df631416-65f7-4eb7-b482-031091b9d3bc-kube-api-access-pvmv6\") pod \"dnsmasq-dns-66fbd85b65-h2ghp\" (UID: \"df631416-65f7-4eb7-b482-031091b9d3bc\") " pod="openstack/dnsmasq-dns-66fbd85b65-h2ghp" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.868195 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-9tmn2"] Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.869384 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-9tmn2" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.873266 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-cpp27" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.873512 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.873669 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.876195 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-9vwb9" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.878423 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-s6dqh"] Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.879337 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-s6dqh" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.884929 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f870e24-0e35-4ee6-805b-f81617554dc2-combined-ca-bundle\") pod \"cinder-db-sync-9tmn2\" (UID: \"5f870e24-0e35-4ee6-805b-f81617554dc2\") " pod="openstack/cinder-db-sync-9tmn2" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.885002 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f870e24-0e35-4ee6-805b-f81617554dc2-config-data\") pod \"cinder-db-sync-9tmn2\" (UID: \"5f870e24-0e35-4ee6-805b-f81617554dc2\") " pod="openstack/cinder-db-sync-9tmn2" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.885036 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/aba8088a-cbd7-4797-aef1-cd84d8c28ff8-horizon-secret-key\") pod \"horizon-6d999596cf-44trz\" (UID: \"aba8088a-cbd7-4797-aef1-cd84d8c28ff8\") " pod="openstack/horizon-6d999596cf-44trz" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.885062 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzzx2\" (UniqueName: \"kubernetes.io/projected/aba8088a-cbd7-4797-aef1-cd84d8c28ff8-kube-api-access-hzzx2\") pod \"horizon-6d999596cf-44trz\" (UID: \"aba8088a-cbd7-4797-aef1-cd84d8c28ff8\") " pod="openstack/horizon-6d999596cf-44trz" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.885088 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aba8088a-cbd7-4797-aef1-cd84d8c28ff8-scripts\") pod \"horizon-6d999596cf-44trz\" (UID: \"aba8088a-cbd7-4797-aef1-cd84d8c28ff8\") " pod="openstack/horizon-6d999596cf-44trz" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.885117 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aba8088a-cbd7-4797-aef1-cd84d8c28ff8-logs\") pod \"horizon-6d999596cf-44trz\" (UID: \"aba8088a-cbd7-4797-aef1-cd84d8c28ff8\") " pod="openstack/horizon-6d999596cf-44trz" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.885164 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/5f870e24-0e35-4ee6-805b-f81617554dc2-scripts\") pod \"cinder-db-sync-9tmn2\" (UID: \"5f870e24-0e35-4ee6-805b-f81617554dc2\") " pod="openstack/cinder-db-sync-9tmn2" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.885193 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d994c6b6-3dd2-4231-b80b-b83c88fa860f-combined-ca-bundle\") pod \"barbican-db-sync-s6dqh\" (UID: \"d994c6b6-3dd2-4231-b80b-b83c88fa860f\") " pod="openstack/barbican-db-sync-s6dqh" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.885223 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55cdz\" (UniqueName: \"kubernetes.io/projected/5f870e24-0e35-4ee6-805b-f81617554dc2-kube-api-access-55cdz\") pod \"cinder-db-sync-9tmn2\" (UID: \"5f870e24-0e35-4ee6-805b-f81617554dc2\") " pod="openstack/cinder-db-sync-9tmn2" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.885256 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5f870e24-0e35-4ee6-805b-f81617554dc2-etc-machine-id\") pod \"cinder-db-sync-9tmn2\" (UID: \"5f870e24-0e35-4ee6-805b-f81617554dc2\") " pod="openstack/cinder-db-sync-9tmn2" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.885313 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aba8088a-cbd7-4797-aef1-cd84d8c28ff8-config-data\") pod \"horizon-6d999596cf-44trz\" (UID: \"aba8088a-cbd7-4797-aef1-cd84d8c28ff8\") " pod="openstack/horizon-6d999596cf-44trz" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.885353 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d994c6b6-3dd2-4231-b80b-b83c88fa860f-db-sync-config-data\") pod \"barbican-db-sync-s6dqh\" (UID: \"d994c6b6-3dd2-4231-b80b-b83c88fa860f\") " pod="openstack/barbican-db-sync-s6dqh" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.885404 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5f870e24-0e35-4ee6-805b-f81617554dc2-db-sync-config-data\") pod \"cinder-db-sync-9tmn2\" (UID: \"5f870e24-0e35-4ee6-805b-f81617554dc2\") " pod="openstack/cinder-db-sync-9tmn2" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.885426 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnldl\" (UniqueName: \"kubernetes.io/projected/d994c6b6-3dd2-4231-b80b-b83c88fa860f-kube-api-access-dnldl\") pod \"barbican-db-sync-s6dqh\" (UID: \"d994c6b6-3dd2-4231-b80b-b83c88fa860f\") " pod="openstack/barbican-db-sync-s6dqh" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.886939 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aba8088a-cbd7-4797-aef1-cd84d8c28ff8-scripts\") pod \"horizon-6d999596cf-44trz\" (UID: \"aba8088a-cbd7-4797-aef1-cd84d8c28ff8\") " pod="openstack/horizon-6d999596cf-44trz" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.887079 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/aba8088a-cbd7-4797-aef1-cd84d8c28ff8-logs\") pod \"horizon-6d999596cf-44trz\" (UID: \"aba8088a-cbd7-4797-aef1-cd84d8c28ff8\") " pod="openstack/horizon-6d999596cf-44trz" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.887988 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aba8088a-cbd7-4797-aef1-cd84d8c28ff8-config-data\") pod \"horizon-6d999596cf-44trz\" (UID: \"aba8088a-cbd7-4797-aef1-cd84d8c28ff8\") " pod="openstack/horizon-6d999596cf-44trz" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.889807 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-s2tgd" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.892408 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.899391 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-s6dqh"] Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.901761 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/aba8088a-cbd7-4797-aef1-cd84d8c28ff8-horizon-secret-key\") pod \"horizon-6d999596cf-44trz\" (UID: \"aba8088a-cbd7-4797-aef1-cd84d8c28ff8\") " pod="openstack/horizon-6d999596cf-44trz" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.914146 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66fbd85b65-h2ghp" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.950041 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.951924 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.953890 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzzx2\" (UniqueName: \"kubernetes.io/projected/aba8088a-cbd7-4797-aef1-cd84d8c28ff8-kube-api-access-hzzx2\") pod \"horizon-6d999596cf-44trz\" (UID: \"aba8088a-cbd7-4797-aef1-cd84d8c28ff8\") " pod="openstack/horizon-6d999596cf-44trz" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.968351 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-9tmn2"] Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.969555 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.969569 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.987159 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5f870e24-0e35-4ee6-805b-f81617554dc2-db-sync-config-data\") pod \"cinder-db-sync-9tmn2\" (UID: \"5f870e24-0e35-4ee6-805b-f81617554dc2\") " pod="openstack/cinder-db-sync-9tmn2" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.987209 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnldl\" (UniqueName: \"kubernetes.io/projected/d994c6b6-3dd2-4231-b80b-b83c88fa860f-kube-api-access-dnldl\") pod \"barbican-db-sync-s6dqh\" (UID: \"d994c6b6-3dd2-4231-b80b-b83c88fa860f\") " pod="openstack/barbican-db-sync-s6dqh" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.987351 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f870e24-0e35-4ee6-805b-f81617554dc2-combined-ca-bundle\") pod \"cinder-db-sync-9tmn2\" (UID: \"5f870e24-0e35-4ee6-805b-f81617554dc2\") " pod="openstack/cinder-db-sync-9tmn2" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.987393 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f870e24-0e35-4ee6-805b-f81617554dc2-config-data\") pod \"cinder-db-sync-9tmn2\" (UID: \"5f870e24-0e35-4ee6-805b-f81617554dc2\") " pod="openstack/cinder-db-sync-9tmn2" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.987524 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f870e24-0e35-4ee6-805b-f81617554dc2-scripts\") pod \"cinder-db-sync-9tmn2\" (UID: \"5f870e24-0e35-4ee6-805b-f81617554dc2\") " pod="openstack/cinder-db-sync-9tmn2" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.987560 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d994c6b6-3dd2-4231-b80b-b83c88fa860f-combined-ca-bundle\") pod \"barbican-db-sync-s6dqh\" (UID: \"d994c6b6-3dd2-4231-b80b-b83c88fa860f\") " pod="openstack/barbican-db-sync-s6dqh" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.987600 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55cdz\" (UniqueName: \"kubernetes.io/projected/5f870e24-0e35-4ee6-805b-f81617554dc2-kube-api-access-55cdz\") pod \"cinder-db-sync-9tmn2\" (UID: \"5f870e24-0e35-4ee6-805b-f81617554dc2\") " 
pod="openstack/cinder-db-sync-9tmn2" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.987724 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5f870e24-0e35-4ee6-805b-f81617554dc2-etc-machine-id\") pod \"cinder-db-sync-9tmn2\" (UID: \"5f870e24-0e35-4ee6-805b-f81617554dc2\") " pod="openstack/cinder-db-sync-9tmn2" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.987825 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d994c6b6-3dd2-4231-b80b-b83c88fa860f-db-sync-config-data\") pod \"barbican-db-sync-s6dqh\" (UID: \"d994c6b6-3dd2-4231-b80b-b83c88fa860f\") " pod="openstack/barbican-db-sync-s6dqh" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.993141 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5f870e24-0e35-4ee6-805b-f81617554dc2-etc-machine-id\") pod \"cinder-db-sync-9tmn2\" (UID: \"5f870e24-0e35-4ee6-805b-f81617554dc2\") " pod="openstack/cinder-db-sync-9tmn2" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.993498 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d994c6b6-3dd2-4231-b80b-b83c88fa860f-db-sync-config-data\") pod \"barbican-db-sync-s6dqh\" (UID: \"d994c6b6-3dd2-4231-b80b-b83c88fa860f\") " pod="openstack/barbican-db-sync-s6dqh" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.993665 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5f870e24-0e35-4ee6-805b-f81617554dc2-db-sync-config-data\") pod \"cinder-db-sync-9tmn2\" (UID: \"5f870e24-0e35-4ee6-805b-f81617554dc2\") " pod="openstack/cinder-db-sync-9tmn2" Jan 31 07:52:46 crc kubenswrapper[4826]: I0131 07:52:46.993901 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f870e24-0e35-4ee6-805b-f81617554dc2-scripts\") pod \"cinder-db-sync-9tmn2\" (UID: \"5f870e24-0e35-4ee6-805b-f81617554dc2\") " pod="openstack/cinder-db-sync-9tmn2" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:46.996269 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f870e24-0e35-4ee6-805b-f81617554dc2-combined-ca-bundle\") pod \"cinder-db-sync-9tmn2\" (UID: \"5f870e24-0e35-4ee6-805b-f81617554dc2\") " pod="openstack/cinder-db-sync-9tmn2" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:46.998543 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d994c6b6-3dd2-4231-b80b-b83c88fa860f-combined-ca-bundle\") pod \"barbican-db-sync-s6dqh\" (UID: \"d994c6b6-3dd2-4231-b80b-b83c88fa860f\") " pod="openstack/barbican-db-sync-s6dqh" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.004908 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66fbd85b65-h2ghp"] Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.024121 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.024787 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-7wvpt"] Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.025940 4826 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-7wvpt" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.027751 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f870e24-0e35-4ee6-805b-f81617554dc2-config-data\") pod \"cinder-db-sync-9tmn2\" (UID: \"5f870e24-0e35-4ee6-805b-f81617554dc2\") " pod="openstack/cinder-db-sync-9tmn2" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.034481 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.034626 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-hg7vn" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.034835 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.039728 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnldl\" (UniqueName: \"kubernetes.io/projected/d994c6b6-3dd2-4231-b80b-b83c88fa860f-kube-api-access-dnldl\") pod \"barbican-db-sync-s6dqh\" (UID: \"d994c6b6-3dd2-4231-b80b-b83c88fa860f\") " pod="openstack/barbican-db-sync-s6dqh" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.052788 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55cdz\" (UniqueName: \"kubernetes.io/projected/5f870e24-0e35-4ee6-805b-f81617554dc2-kube-api-access-55cdz\") pod \"cinder-db-sync-9tmn2\" (UID: \"5f870e24-0e35-4ee6-805b-f81617554dc2\") " pod="openstack/cinder-db-sync-9tmn2" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.074041 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-7wvpt"] Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.074696 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6d999596cf-44trz" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.083825 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67795cd9-j7zt4"] Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.092296 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67795cd9-j7zt4" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.095279 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa070000-7471-4ca5-be06-fecc9ade01cc-scripts\") pod \"ceilometer-0\" (UID: \"aa070000-7471-4ca5-be06-fecc9ade01cc\") " pod="openstack/ceilometer-0" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.095327 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa070000-7471-4ca5-be06-fecc9ade01cc-run-httpd\") pod \"ceilometer-0\" (UID: \"aa070000-7471-4ca5-be06-fecc9ade01cc\") " pod="openstack/ceilometer-0" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.095378 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa070000-7471-4ca5-be06-fecc9ade01cc-log-httpd\") pod \"ceilometer-0\" (UID: \"aa070000-7471-4ca5-be06-fecc9ade01cc\") " pod="openstack/ceilometer-0" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.095445 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa070000-7471-4ca5-be06-fecc9ade01cc-config-data\") pod \"ceilometer-0\" (UID: \"aa070000-7471-4ca5-be06-fecc9ade01cc\") " pod="openstack/ceilometer-0" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.095505 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa070000-7471-4ca5-be06-fecc9ade01cc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aa070000-7471-4ca5-be06-fecc9ade01cc\") " pod="openstack/ceilometer-0" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.095538 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jhgn\" (UniqueName: \"kubernetes.io/projected/aa070000-7471-4ca5-be06-fecc9ade01cc-kube-api-access-2jhgn\") pod \"ceilometer-0\" (UID: \"aa070000-7471-4ca5-be06-fecc9ade01cc\") " pod="openstack/ceilometer-0" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.095568 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa070000-7471-4ca5-be06-fecc9ade01cc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aa070000-7471-4ca5-be06-fecc9ade01cc\") " pod="openstack/ceilometer-0" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.181457 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-j7zt4"] Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.195360 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-9tmn2" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.196529 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa070000-7471-4ca5-be06-fecc9ade01cc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aa070000-7471-4ca5-be06-fecc9ade01cc\") " pod="openstack/ceilometer-0" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.196581 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a88d711d-a1fe-4114-955e-167684da9ecb-scripts\") pod \"placement-db-sync-7wvpt\" (UID: \"a88d711d-a1fe-4114-955e-167684da9ecb\") " pod="openstack/placement-db-sync-7wvpt" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.196608 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a88d711d-a1fe-4114-955e-167684da9ecb-config-data\") pod \"placement-db-sync-7wvpt\" (UID: \"a88d711d-a1fe-4114-955e-167684da9ecb\") " pod="openstack/placement-db-sync-7wvpt" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.196653 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa070000-7471-4ca5-be06-fecc9ade01cc-scripts\") pod \"ceilometer-0\" (UID: \"aa070000-7471-4ca5-be06-fecc9ade01cc\") " pod="openstack/ceilometer-0" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.196680 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa070000-7471-4ca5-be06-fecc9ade01cc-run-httpd\") pod \"ceilometer-0\" (UID: \"aa070000-7471-4ca5-be06-fecc9ade01cc\") " pod="openstack/ceilometer-0" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.196706 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcp88\" (UniqueName: \"kubernetes.io/projected/a88d711d-a1fe-4114-955e-167684da9ecb-kube-api-access-vcp88\") pod \"placement-db-sync-7wvpt\" (UID: \"a88d711d-a1fe-4114-955e-167684da9ecb\") " pod="openstack/placement-db-sync-7wvpt" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.196739 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e67f314d-5a50-4fb7-b148-fedd4dc08890-ovsdbserver-sb\") pod \"dnsmasq-dns-67795cd9-j7zt4\" (UID: \"e67f314d-5a50-4fb7-b148-fedd4dc08890\") " pod="openstack/dnsmasq-dns-67795cd9-j7zt4" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.196768 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa070000-7471-4ca5-be06-fecc9ade01cc-log-httpd\") pod \"ceilometer-0\" (UID: \"aa070000-7471-4ca5-be06-fecc9ade01cc\") " pod="openstack/ceilometer-0" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.196792 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e67f314d-5a50-4fb7-b148-fedd4dc08890-config\") pod \"dnsmasq-dns-67795cd9-j7zt4\" (UID: \"e67f314d-5a50-4fb7-b148-fedd4dc08890\") " pod="openstack/dnsmasq-dns-67795cd9-j7zt4" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.196817 4826 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a88d711d-a1fe-4114-955e-167684da9ecb-logs\") pod \"placement-db-sync-7wvpt\" (UID: \"a88d711d-a1fe-4114-955e-167684da9ecb\") " pod="openstack/placement-db-sync-7wvpt" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.196850 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a88d711d-a1fe-4114-955e-167684da9ecb-combined-ca-bundle\") pod \"placement-db-sync-7wvpt\" (UID: \"a88d711d-a1fe-4114-955e-167684da9ecb\") " pod="openstack/placement-db-sync-7wvpt" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.196878 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9ss5\" (UniqueName: \"kubernetes.io/projected/e67f314d-5a50-4fb7-b148-fedd4dc08890-kube-api-access-b9ss5\") pod \"dnsmasq-dns-67795cd9-j7zt4\" (UID: \"e67f314d-5a50-4fb7-b148-fedd4dc08890\") " pod="openstack/dnsmasq-dns-67795cd9-j7zt4" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.196908 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa070000-7471-4ca5-be06-fecc9ade01cc-config-data\") pod \"ceilometer-0\" (UID: \"aa070000-7471-4ca5-be06-fecc9ade01cc\") " pod="openstack/ceilometer-0" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.196940 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e67f314d-5a50-4fb7-b148-fedd4dc08890-dns-svc\") pod \"dnsmasq-dns-67795cd9-j7zt4\" (UID: \"e67f314d-5a50-4fb7-b148-fedd4dc08890\") " pod="openstack/dnsmasq-dns-67795cd9-j7zt4" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.196985 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e67f314d-5a50-4fb7-b148-fedd4dc08890-ovsdbserver-nb\") pod \"dnsmasq-dns-67795cd9-j7zt4\" (UID: \"e67f314d-5a50-4fb7-b148-fedd4dc08890\") " pod="openstack/dnsmasq-dns-67795cd9-j7zt4" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.197022 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa070000-7471-4ca5-be06-fecc9ade01cc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aa070000-7471-4ca5-be06-fecc9ade01cc\") " pod="openstack/ceilometer-0" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.197049 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jhgn\" (UniqueName: \"kubernetes.io/projected/aa070000-7471-4ca5-be06-fecc9ade01cc-kube-api-access-2jhgn\") pod \"ceilometer-0\" (UID: \"aa070000-7471-4ca5-be06-fecc9ade01cc\") " pod="openstack/ceilometer-0" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.197629 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa070000-7471-4ca5-be06-fecc9ade01cc-log-httpd\") pod \"ceilometer-0\" (UID: \"aa070000-7471-4ca5-be06-fecc9ade01cc\") " pod="openstack/ceilometer-0" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.198521 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/aa070000-7471-4ca5-be06-fecc9ade01cc-run-httpd\") pod \"ceilometer-0\" (UID: \"aa070000-7471-4ca5-be06-fecc9ade01cc\") " pod="openstack/ceilometer-0" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.202077 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa070000-7471-4ca5-be06-fecc9ade01cc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aa070000-7471-4ca5-be06-fecc9ade01cc\") " pod="openstack/ceilometer-0" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.204153 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-968cdd5c9-h62k6"] Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.208391 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-968cdd5c9-h62k6" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.218091 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-968cdd5c9-h62k6"] Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.218827 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa070000-7471-4ca5-be06-fecc9ade01cc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aa070000-7471-4ca5-be06-fecc9ade01cc\") " pod="openstack/ceilometer-0" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.227950 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa070000-7471-4ca5-be06-fecc9ade01cc-config-data\") pod \"ceilometer-0\" (UID: \"aa070000-7471-4ca5-be06-fecc9ade01cc\") " pod="openstack/ceilometer-0" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.247713 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-j7zt4"] Jan 31 07:52:47 crc kubenswrapper[4826]: E0131 07:52:47.250344 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-b9ss5 ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-67795cd9-j7zt4" podUID="e67f314d-5a50-4fb7-b148-fedd4dc08890" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.250678 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jhgn\" (UniqueName: \"kubernetes.io/projected/aa070000-7471-4ca5-be06-fecc9ade01cc-kube-api-access-2jhgn\") pod \"ceilometer-0\" (UID: \"aa070000-7471-4ca5-be06-fecc9ade01cc\") " pod="openstack/ceilometer-0" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.250859 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa070000-7471-4ca5-be06-fecc9ade01cc-scripts\") pod \"ceilometer-0\" (UID: \"aa070000-7471-4ca5-be06-fecc9ade01cc\") " pod="openstack/ceilometer-0" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.266262 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-s6dqh" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.276435 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-kj4lk"] Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.278526 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-kj4lk" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.297112 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.297869 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcp88\" (UniqueName: \"kubernetes.io/projected/a88d711d-a1fe-4114-955e-167684da9ecb-kube-api-access-vcp88\") pod \"placement-db-sync-7wvpt\" (UID: \"a88d711d-a1fe-4114-955e-167684da9ecb\") " pod="openstack/placement-db-sync-7wvpt" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.297910 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e67f314d-5a50-4fb7-b148-fedd4dc08890-ovsdbserver-sb\") pod \"dnsmasq-dns-67795cd9-j7zt4\" (UID: \"e67f314d-5a50-4fb7-b148-fedd4dc08890\") " pod="openstack/dnsmasq-dns-67795cd9-j7zt4" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.297937 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bb7d47c-ac09-4830-9cfe-d7042b2a5971-config\") pod \"dnsmasq-dns-5b6dbdb6f5-kj4lk\" (UID: \"7bb7d47c-ac09-4830-9cfe-d7042b2a5971\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-kj4lk" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.297951 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7bb7d47c-ac09-4830-9cfe-d7042b2a5971-dns-svc\") pod \"dnsmasq-dns-5b6dbdb6f5-kj4lk\" (UID: \"7bb7d47c-ac09-4830-9cfe-d7042b2a5971\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-kj4lk" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.297984 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e67f314d-5a50-4fb7-b148-fedd4dc08890-config\") pod \"dnsmasq-dns-67795cd9-j7zt4\" (UID: \"e67f314d-5a50-4fb7-b148-fedd4dc08890\") " pod="openstack/dnsmasq-dns-67795cd9-j7zt4" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.298001 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjthn\" (UniqueName: \"kubernetes.io/projected/e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004-kube-api-access-qjthn\") pod \"horizon-968cdd5c9-h62k6\" (UID: \"e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004\") " pod="openstack/horizon-968cdd5c9-h62k6" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.298021 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq579\" (UniqueName: \"kubernetes.io/projected/7bb7d47c-ac09-4830-9cfe-d7042b2a5971-kube-api-access-zq579\") pod \"dnsmasq-dns-5b6dbdb6f5-kj4lk\" (UID: \"7bb7d47c-ac09-4830-9cfe-d7042b2a5971\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-kj4lk" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.298038 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a88d711d-a1fe-4114-955e-167684da9ecb-logs\") pod \"placement-db-sync-7wvpt\" (UID: \"a88d711d-a1fe-4114-955e-167684da9ecb\") " pod="openstack/placement-db-sync-7wvpt" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.298060 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004-scripts\") pod \"horizon-968cdd5c9-h62k6\" (UID: \"e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004\") " pod="openstack/horizon-968cdd5c9-h62k6" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.298077 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a88d711d-a1fe-4114-955e-167684da9ecb-combined-ca-bundle\") pod \"placement-db-sync-7wvpt\" (UID: \"a88d711d-a1fe-4114-955e-167684da9ecb\") " pod="openstack/placement-db-sync-7wvpt" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.298093 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004-logs\") pod \"horizon-968cdd5c9-h62k6\" (UID: \"e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004\") " pod="openstack/horizon-968cdd5c9-h62k6" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.298107 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004-horizon-secret-key\") pod \"horizon-968cdd5c9-h62k6\" (UID: \"e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004\") " pod="openstack/horizon-968cdd5c9-h62k6" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.298127 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9ss5\" (UniqueName: \"kubernetes.io/projected/e67f314d-5a50-4fb7-b148-fedd4dc08890-kube-api-access-b9ss5\") pod \"dnsmasq-dns-67795cd9-j7zt4\" (UID: \"e67f314d-5a50-4fb7-b148-fedd4dc08890\") " pod="openstack/dnsmasq-dns-67795cd9-j7zt4" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.298152 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004-config-data\") pod \"horizon-968cdd5c9-h62k6\" (UID: \"e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004\") " pod="openstack/horizon-968cdd5c9-h62k6" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.298169 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7bb7d47c-ac09-4830-9cfe-d7042b2a5971-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6dbdb6f5-kj4lk\" (UID: \"7bb7d47c-ac09-4830-9cfe-d7042b2a5971\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-kj4lk" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.298185 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e67f314d-5a50-4fb7-b148-fedd4dc08890-dns-svc\") pod \"dnsmasq-dns-67795cd9-j7zt4\" (UID: \"e67f314d-5a50-4fb7-b148-fedd4dc08890\") " pod="openstack/dnsmasq-dns-67795cd9-j7zt4" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.298202 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7bb7d47c-ac09-4830-9cfe-d7042b2a5971-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6dbdb6f5-kj4lk\" (UID: \"7bb7d47c-ac09-4830-9cfe-d7042b2a5971\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-kj4lk" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.298219 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/e67f314d-5a50-4fb7-b148-fedd4dc08890-ovsdbserver-nb\") pod \"dnsmasq-dns-67795cd9-j7zt4\" (UID: \"e67f314d-5a50-4fb7-b148-fedd4dc08890\") " pod="openstack/dnsmasq-dns-67795cd9-j7zt4" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.298259 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a88d711d-a1fe-4114-955e-167684da9ecb-scripts\") pod \"placement-db-sync-7wvpt\" (UID: \"a88d711d-a1fe-4114-955e-167684da9ecb\") " pod="openstack/placement-db-sync-7wvpt" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.298273 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a88d711d-a1fe-4114-955e-167684da9ecb-config-data\") pod \"placement-db-sync-7wvpt\" (UID: \"a88d711d-a1fe-4114-955e-167684da9ecb\") " pod="openstack/placement-db-sync-7wvpt" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.301152 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a88d711d-a1fe-4114-955e-167684da9ecb-logs\") pod \"placement-db-sync-7wvpt\" (UID: \"a88d711d-a1fe-4114-955e-167684da9ecb\") " pod="openstack/placement-db-sync-7wvpt" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.301693 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e67f314d-5a50-4fb7-b148-fedd4dc08890-config\") pod \"dnsmasq-dns-67795cd9-j7zt4\" (UID: \"e67f314d-5a50-4fb7-b148-fedd4dc08890\") " pod="openstack/dnsmasq-dns-67795cd9-j7zt4" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.306417 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a88d711d-a1fe-4114-955e-167684da9ecb-scripts\") pod \"placement-db-sync-7wvpt\" (UID: \"a88d711d-a1fe-4114-955e-167684da9ecb\") " pod="openstack/placement-db-sync-7wvpt" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.306425 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a88d711d-a1fe-4114-955e-167684da9ecb-combined-ca-bundle\") pod \"placement-db-sync-7wvpt\" (UID: \"a88d711d-a1fe-4114-955e-167684da9ecb\") " pod="openstack/placement-db-sync-7wvpt" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.309236 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e67f314d-5a50-4fb7-b148-fedd4dc08890-ovsdbserver-sb\") pod \"dnsmasq-dns-67795cd9-j7zt4\" (UID: \"e67f314d-5a50-4fb7-b148-fedd4dc08890\") " pod="openstack/dnsmasq-dns-67795cd9-j7zt4" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.323124 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e67f314d-5a50-4fb7-b148-fedd4dc08890-dns-svc\") pod \"dnsmasq-dns-67795cd9-j7zt4\" (UID: \"e67f314d-5a50-4fb7-b148-fedd4dc08890\") " pod="openstack/dnsmasq-dns-67795cd9-j7zt4" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.323665 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a88d711d-a1fe-4114-955e-167684da9ecb-config-data\") pod \"placement-db-sync-7wvpt\" (UID: \"a88d711d-a1fe-4114-955e-167684da9ecb\") " pod="openstack/placement-db-sync-7wvpt" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.323753 4826 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e67f314d-5a50-4fb7-b148-fedd4dc08890-ovsdbserver-nb\") pod \"dnsmasq-dns-67795cd9-j7zt4\" (UID: \"e67f314d-5a50-4fb7-b148-fedd4dc08890\") " pod="openstack/dnsmasq-dns-67795cd9-j7zt4" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.337394 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-kj4lk"] Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.342343 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcp88\" (UniqueName: \"kubernetes.io/projected/a88d711d-a1fe-4114-955e-167684da9ecb-kube-api-access-vcp88\") pod \"placement-db-sync-7wvpt\" (UID: \"a88d711d-a1fe-4114-955e-167684da9ecb\") " pod="openstack/placement-db-sync-7wvpt" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.348129 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9ss5\" (UniqueName: \"kubernetes.io/projected/e67f314d-5a50-4fb7-b148-fedd4dc08890-kube-api-access-b9ss5\") pod \"dnsmasq-dns-67795cd9-j7zt4\" (UID: \"e67f314d-5a50-4fb7-b148-fedd4dc08890\") " pod="openstack/dnsmasq-dns-67795cd9-j7zt4" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.359331 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-zfz9q"] Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.360677 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-7wvpt" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.360762 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-zfz9q" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.364340 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.364443 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-kxg8h" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.364340 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.370299 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-zfz9q"] Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.404515 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004-logs\") pod \"horizon-968cdd5c9-h62k6\" (UID: \"e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004\") " pod="openstack/horizon-968cdd5c9-h62k6" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.404559 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004-horizon-secret-key\") pod \"horizon-968cdd5c9-h62k6\" (UID: \"e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004\") " pod="openstack/horizon-968cdd5c9-h62k6" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.404591 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/08d490ca-1c3d-4823-8d2f-b4e2fca83778-config\") pod \"neutron-db-sync-zfz9q\" (UID: \"08d490ca-1c3d-4823-8d2f-b4e2fca83778\") " pod="openstack/neutron-db-sync-zfz9q" Jan 31 07:52:47 crc 
kubenswrapper[4826]: I0131 07:52:47.404638 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004-config-data\") pod \"horizon-968cdd5c9-h62k6\" (UID: \"e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004\") " pod="openstack/horizon-968cdd5c9-h62k6" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.404663 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08d490ca-1c3d-4823-8d2f-b4e2fca83778-combined-ca-bundle\") pod \"neutron-db-sync-zfz9q\" (UID: \"08d490ca-1c3d-4823-8d2f-b4e2fca83778\") " pod="openstack/neutron-db-sync-zfz9q" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.404692 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7bb7d47c-ac09-4830-9cfe-d7042b2a5971-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6dbdb6f5-kj4lk\" (UID: \"7bb7d47c-ac09-4830-9cfe-d7042b2a5971\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-kj4lk" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.404719 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7bb7d47c-ac09-4830-9cfe-d7042b2a5971-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6dbdb6f5-kj4lk\" (UID: \"7bb7d47c-ac09-4830-9cfe-d7042b2a5971\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-kj4lk" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.404861 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bb7d47c-ac09-4830-9cfe-d7042b2a5971-config\") pod \"dnsmasq-dns-5b6dbdb6f5-kj4lk\" (UID: \"7bb7d47c-ac09-4830-9cfe-d7042b2a5971\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-kj4lk" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.404882 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7bb7d47c-ac09-4830-9cfe-d7042b2a5971-dns-svc\") pod \"dnsmasq-dns-5b6dbdb6f5-kj4lk\" (UID: \"7bb7d47c-ac09-4830-9cfe-d7042b2a5971\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-kj4lk" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.404905 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmr6z\" (UniqueName: \"kubernetes.io/projected/08d490ca-1c3d-4823-8d2f-b4e2fca83778-kube-api-access-jmr6z\") pod \"neutron-db-sync-zfz9q\" (UID: \"08d490ca-1c3d-4823-8d2f-b4e2fca83778\") " pod="openstack/neutron-db-sync-zfz9q" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.404935 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjthn\" (UniqueName: \"kubernetes.io/projected/e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004-kube-api-access-qjthn\") pod \"horizon-968cdd5c9-h62k6\" (UID: \"e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004\") " pod="openstack/horizon-968cdd5c9-h62k6" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.404985 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zq579\" (UniqueName: \"kubernetes.io/projected/7bb7d47c-ac09-4830-9cfe-d7042b2a5971-kube-api-access-zq579\") pod \"dnsmasq-dns-5b6dbdb6f5-kj4lk\" (UID: \"7bb7d47c-ac09-4830-9cfe-d7042b2a5971\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-kj4lk" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.405027 4826 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004-scripts\") pod \"horizon-968cdd5c9-h62k6\" (UID: \"e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004\") " pod="openstack/horizon-968cdd5c9-h62k6" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.405910 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004-scripts\") pod \"horizon-968cdd5c9-h62k6\" (UID: \"e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004\") " pod="openstack/horizon-968cdd5c9-h62k6" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.406581 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004-logs\") pod \"horizon-968cdd5c9-h62k6\" (UID: \"e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004\") " pod="openstack/horizon-968cdd5c9-h62k6" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.406844 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7bb7d47c-ac09-4830-9cfe-d7042b2a5971-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6dbdb6f5-kj4lk\" (UID: \"7bb7d47c-ac09-4830-9cfe-d7042b2a5971\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-kj4lk" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.406889 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7bb7d47c-ac09-4830-9cfe-d7042b2a5971-dns-svc\") pod \"dnsmasq-dns-5b6dbdb6f5-kj4lk\" (UID: \"7bb7d47c-ac09-4830-9cfe-d7042b2a5971\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-kj4lk" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.407183 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7bb7d47c-ac09-4830-9cfe-d7042b2a5971-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6dbdb6f5-kj4lk\" (UID: \"7bb7d47c-ac09-4830-9cfe-d7042b2a5971\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-kj4lk" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.407242 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bb7d47c-ac09-4830-9cfe-d7042b2a5971-config\") pod \"dnsmasq-dns-5b6dbdb6f5-kj4lk\" (UID: \"7bb7d47c-ac09-4830-9cfe-d7042b2a5971\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-kj4lk" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.407401 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004-config-data\") pod \"horizon-968cdd5c9-h62k6\" (UID: \"e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004\") " pod="openstack/horizon-968cdd5c9-h62k6" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.410903 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004-horizon-secret-key\") pod \"horizon-968cdd5c9-h62k6\" (UID: \"e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004\") " pod="openstack/horizon-968cdd5c9-h62k6" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.429375 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zq579\" (UniqueName: \"kubernetes.io/projected/7bb7d47c-ac09-4830-9cfe-d7042b2a5971-kube-api-access-zq579\") pod \"dnsmasq-dns-5b6dbdb6f5-kj4lk\" (UID: 
\"7bb7d47c-ac09-4830-9cfe-d7042b2a5971\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-kj4lk" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.429829 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjthn\" (UniqueName: \"kubernetes.io/projected/e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004-kube-api-access-qjthn\") pod \"horizon-968cdd5c9-h62k6\" (UID: \"e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004\") " pod="openstack/horizon-968cdd5c9-h62k6" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.506833 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmr6z\" (UniqueName: \"kubernetes.io/projected/08d490ca-1c3d-4823-8d2f-b4e2fca83778-kube-api-access-jmr6z\") pod \"neutron-db-sync-zfz9q\" (UID: \"08d490ca-1c3d-4823-8d2f-b4e2fca83778\") " pod="openstack/neutron-db-sync-zfz9q" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.506902 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/08d490ca-1c3d-4823-8d2f-b4e2fca83778-config\") pod \"neutron-db-sync-zfz9q\" (UID: \"08d490ca-1c3d-4823-8d2f-b4e2fca83778\") " pod="openstack/neutron-db-sync-zfz9q" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.506939 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08d490ca-1c3d-4823-8d2f-b4e2fca83778-combined-ca-bundle\") pod \"neutron-db-sync-zfz9q\" (UID: \"08d490ca-1c3d-4823-8d2f-b4e2fca83778\") " pod="openstack/neutron-db-sync-zfz9q" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.510797 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/08d490ca-1c3d-4823-8d2f-b4e2fca83778-config\") pod \"neutron-db-sync-zfz9q\" (UID: \"08d490ca-1c3d-4823-8d2f-b4e2fca83778\") " pod="openstack/neutron-db-sync-zfz9q" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.515141 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08d490ca-1c3d-4823-8d2f-b4e2fca83778-combined-ca-bundle\") pod \"neutron-db-sync-zfz9q\" (UID: \"08d490ca-1c3d-4823-8d2f-b4e2fca83778\") " pod="openstack/neutron-db-sync-zfz9q" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.534932 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmr6z\" (UniqueName: \"kubernetes.io/projected/08d490ca-1c3d-4823-8d2f-b4e2fca83778-kube-api-access-jmr6z\") pod \"neutron-db-sync-zfz9q\" (UID: \"08d490ca-1c3d-4823-8d2f-b4e2fca83778\") " pod="openstack/neutron-db-sync-zfz9q" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.542705 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-968cdd5c9-h62k6" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.648332 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-kj4lk" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.679122 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66fbd85b65-h2ghp"] Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.699731 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-9vwb9"] Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.701776 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-zfz9q" Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.894562 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6d999596cf-44trz"] Jan 31 07:52:47 crc kubenswrapper[4826]: I0131 07:52:47.998597 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-s6dqh"] Jan 31 07:52:48 crc kubenswrapper[4826]: I0131 07:52:48.010504 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:52:48 crc kubenswrapper[4826]: W0131 07:52:48.011296 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa070000_7471_4ca5_be06_fecc9ade01cc.slice/crio-9db15d96000174118790939aeeb0319d12dae90660f2da384d425aa9b3bb03c9 WatchSource:0}: Error finding container 9db15d96000174118790939aeeb0319d12dae90660f2da384d425aa9b3bb03c9: Status 404 returned error can't find the container with id 9db15d96000174118790939aeeb0319d12dae90660f2da384d425aa9b3bb03c9 Jan 31 07:52:48 crc kubenswrapper[4826]: I0131 07:52:48.023814 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-9tmn2"] Jan 31 07:52:48 crc kubenswrapper[4826]: W0131 07:52:48.031712 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda88d711d_a1fe_4114_955e_167684da9ecb.slice/crio-7dbccca5688a3f27b052f06b29b1b83c3f399b24e29b07d4b1d3bb2f6a28ac72 WatchSource:0}: Error finding container 7dbccca5688a3f27b052f06b29b1b83c3f399b24e29b07d4b1d3bb2f6a28ac72: Status 404 returned error can't find the container with id 7dbccca5688a3f27b052f06b29b1b83c3f399b24e29b07d4b1d3bb2f6a28ac72 Jan 31 07:52:48 crc kubenswrapper[4826]: I0131 07:52:48.034872 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-7wvpt"] Jan 31 07:52:48 crc kubenswrapper[4826]: I0131 07:52:48.243475 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-968cdd5c9-h62k6"] Jan 31 07:52:48 crc kubenswrapper[4826]: W0131 07:52:48.245018 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4df6e7f_5bcb_4d3b_8dd8_0e68c2407004.slice/crio-12a8579deebe89c7002b14bac4ca9d126487eead879b597d7545e60a0ee43960 WatchSource:0}: Error finding container 12a8579deebe89c7002b14bac4ca9d126487eead879b597d7545e60a0ee43960: Status 404 returned error can't find the container with id 12a8579deebe89c7002b14bac4ca9d126487eead879b597d7545e60a0ee43960 Jan 31 07:52:48 crc kubenswrapper[4826]: I0131 07:52:48.253282 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-kj4lk"] Jan 31 07:52:48 crc kubenswrapper[4826]: I0131 07:52:48.343592 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d999596cf-44trz" event={"ID":"aba8088a-cbd7-4797-aef1-cd84d8c28ff8","Type":"ContainerStarted","Data":"93bc18960a68a724e615d57e10666fc1135b9666ecd92287130ebcb58e2926fd"} Jan 31 07:52:48 crc kubenswrapper[4826]: I0131 07:52:48.345912 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-968cdd5c9-h62k6" event={"ID":"e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004","Type":"ContainerStarted","Data":"12a8579deebe89c7002b14bac4ca9d126487eead879b597d7545e60a0ee43960"} Jan 31 07:52:48 crc kubenswrapper[4826]: I0131 07:52:48.346870 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/neutron-db-sync-zfz9q"] Jan 31 07:52:48 crc kubenswrapper[4826]: I0131 07:52:48.347383 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7wvpt" event={"ID":"a88d711d-a1fe-4114-955e-167684da9ecb","Type":"ContainerStarted","Data":"7dbccca5688a3f27b052f06b29b1b83c3f399b24e29b07d4b1d3bb2f6a28ac72"} Jan 31 07:52:48 crc kubenswrapper[4826]: I0131 07:52:48.352551 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-9vwb9" event={"ID":"66bcca9e-8fca-48e3-83e3-0338e35a1f8b","Type":"ContainerStarted","Data":"a802aac58031d43e1c9314c1a3f7d5d12fce71b3509027cef511a6a34e13c71b"} Jan 31 07:52:48 crc kubenswrapper[4826]: I0131 07:52:48.352595 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-9vwb9" event={"ID":"66bcca9e-8fca-48e3-83e3-0338e35a1f8b","Type":"ContainerStarted","Data":"afd3b6768d6bbbda2d1f8135cb46d3a089b710a9e0852ae283d31031a1dd0ed9"} Jan 31 07:52:48 crc kubenswrapper[4826]: I0131 07:52:48.354038 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-kj4lk" event={"ID":"7bb7d47c-ac09-4830-9cfe-d7042b2a5971","Type":"ContainerStarted","Data":"de900a3f751fd09f7faf943fef86e6653ea1abcadc540deac0f694ae487bcdfd"} Jan 31 07:52:48 crc kubenswrapper[4826]: I0131 07:52:48.355731 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa070000-7471-4ca5-be06-fecc9ade01cc","Type":"ContainerStarted","Data":"9db15d96000174118790939aeeb0319d12dae90660f2da384d425aa9b3bb03c9"} Jan 31 07:52:48 crc kubenswrapper[4826]: I0131 07:52:48.356692 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-9tmn2" event={"ID":"5f870e24-0e35-4ee6-805b-f81617554dc2","Type":"ContainerStarted","Data":"79e9122fbb9950ca807b58594ba6afea87e8bed2970ae1d926f339f44793c582"} Jan 31 07:52:48 crc kubenswrapper[4826]: I0131 07:52:48.361774 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-s6dqh" event={"ID":"d994c6b6-3dd2-4231-b80b-b83c88fa860f","Type":"ContainerStarted","Data":"b2aa3f88bd99909ffaa4ff692e58d150ef0d01a4ec35c6510892b7c1e0964d3f"} Jan 31 07:52:48 crc kubenswrapper[4826]: I0131 07:52:48.364982 4826 generic.go:334] "Generic (PLEG): container finished" podID="df631416-65f7-4eb7-b482-031091b9d3bc" containerID="d4a9c8b16e4d142fa7e39d3ffcc41947367e855ca5044570811654d007007d1c" exitCode=0 Jan 31 07:52:48 crc kubenswrapper[4826]: I0131 07:52:48.365238 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66fbd85b65-h2ghp" event={"ID":"df631416-65f7-4eb7-b482-031091b9d3bc","Type":"ContainerDied","Data":"d4a9c8b16e4d142fa7e39d3ffcc41947367e855ca5044570811654d007007d1c"} Jan 31 07:52:48 crc kubenswrapper[4826]: I0131 07:52:48.365319 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66fbd85b65-h2ghp" event={"ID":"df631416-65f7-4eb7-b482-031091b9d3bc","Type":"ContainerStarted","Data":"3cd1b67ab69535bd1fa2b02be9794b46a75934415d6dffe1705b33b5a1bf0a02"} Jan 31 07:52:48 crc kubenswrapper[4826]: I0131 07:52:48.367517 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67795cd9-j7zt4" Jan 31 07:52:48 crc kubenswrapper[4826]: I0131 07:52:48.379633 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-9vwb9" podStartSLOduration=2.379613737 podStartE2EDuration="2.379613737s" podCreationTimestamp="2026-01-31 07:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:52:48.373137225 +0000 UTC m=+1000.227023604" watchObservedRunningTime="2026-01-31 07:52:48.379613737 +0000 UTC m=+1000.233500096" Jan 31 07:52:48 crc kubenswrapper[4826]: I0131 07:52:48.384111 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67795cd9-j7zt4" Jan 31 07:52:48 crc kubenswrapper[4826]: I0131 07:52:48.537738 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e67f314d-5a50-4fb7-b148-fedd4dc08890-config\") pod \"e67f314d-5a50-4fb7-b148-fedd4dc08890\" (UID: \"e67f314d-5a50-4fb7-b148-fedd4dc08890\") " Jan 31 07:52:48 crc kubenswrapper[4826]: I0131 07:52:48.537858 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e67f314d-5a50-4fb7-b148-fedd4dc08890-ovsdbserver-sb\") pod \"e67f314d-5a50-4fb7-b148-fedd4dc08890\" (UID: \"e67f314d-5a50-4fb7-b148-fedd4dc08890\") " Jan 31 07:52:48 crc kubenswrapper[4826]: I0131 07:52:48.537914 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e67f314d-5a50-4fb7-b148-fedd4dc08890-ovsdbserver-nb\") pod \"e67f314d-5a50-4fb7-b148-fedd4dc08890\" (UID: \"e67f314d-5a50-4fb7-b148-fedd4dc08890\") " Jan 31 07:52:48 crc kubenswrapper[4826]: I0131 07:52:48.538100 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9ss5\" (UniqueName: \"kubernetes.io/projected/e67f314d-5a50-4fb7-b148-fedd4dc08890-kube-api-access-b9ss5\") pod \"e67f314d-5a50-4fb7-b148-fedd4dc08890\" (UID: \"e67f314d-5a50-4fb7-b148-fedd4dc08890\") " Jan 31 07:52:48 crc kubenswrapper[4826]: I0131 07:52:48.538182 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e67f314d-5a50-4fb7-b148-fedd4dc08890-dns-svc\") pod \"e67f314d-5a50-4fb7-b148-fedd4dc08890\" (UID: \"e67f314d-5a50-4fb7-b148-fedd4dc08890\") " Jan 31 07:52:48 crc kubenswrapper[4826]: I0131 07:52:48.538863 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e67f314d-5a50-4fb7-b148-fedd4dc08890-config" (OuterVolumeSpecName: "config") pod "e67f314d-5a50-4fb7-b148-fedd4dc08890" (UID: "e67f314d-5a50-4fb7-b148-fedd4dc08890"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:52:48 crc kubenswrapper[4826]: I0131 07:52:48.539702 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e67f314d-5a50-4fb7-b148-fedd4dc08890-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e67f314d-5a50-4fb7-b148-fedd4dc08890" (UID: "e67f314d-5a50-4fb7-b148-fedd4dc08890"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:52:48 crc kubenswrapper[4826]: I0131 07:52:48.539721 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e67f314d-5a50-4fb7-b148-fedd4dc08890-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e67f314d-5a50-4fb7-b148-fedd4dc08890" (UID: "e67f314d-5a50-4fb7-b148-fedd4dc08890"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:52:48 crc kubenswrapper[4826]: I0131 07:52:48.540506 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e67f314d-5a50-4fb7-b148-fedd4dc08890-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e67f314d-5a50-4fb7-b148-fedd4dc08890" (UID: "e67f314d-5a50-4fb7-b148-fedd4dc08890"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:52:48 crc kubenswrapper[4826]: I0131 07:52:48.569182 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e67f314d-5a50-4fb7-b148-fedd4dc08890-kube-api-access-b9ss5" (OuterVolumeSpecName: "kube-api-access-b9ss5") pod "e67f314d-5a50-4fb7-b148-fedd4dc08890" (UID: "e67f314d-5a50-4fb7-b148-fedd4dc08890"). InnerVolumeSpecName "kube-api-access-b9ss5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:52:48 crc kubenswrapper[4826]: I0131 07:52:48.644531 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e67f314d-5a50-4fb7-b148-fedd4dc08890-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:48 crc kubenswrapper[4826]: I0131 07:52:48.644561 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e67f314d-5a50-4fb7-b148-fedd4dc08890-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:48 crc kubenswrapper[4826]: I0131 07:52:48.644571 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e67f314d-5a50-4fb7-b148-fedd4dc08890-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:48 crc kubenswrapper[4826]: I0131 07:52:48.644581 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9ss5\" (UniqueName: \"kubernetes.io/projected/e67f314d-5a50-4fb7-b148-fedd4dc08890-kube-api-access-b9ss5\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:48 crc kubenswrapper[4826]: I0131 07:52:48.644590 4826 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e67f314d-5a50-4fb7-b148-fedd4dc08890-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:48 crc kubenswrapper[4826]: I0131 07:52:48.755556 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6d999596cf-44trz"] Jan 31 07:52:48 crc kubenswrapper[4826]: I0131 07:52:48.779601 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-66cf98846c-msp76"] Jan 31 07:52:48 crc kubenswrapper[4826]: I0131 07:52:48.782986 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-66cf98846c-msp76" Jan 31 07:52:48 crc kubenswrapper[4826]: I0131 07:52:48.802957 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-66cf98846c-msp76"] Jan 31 07:52:48 crc kubenswrapper[4826]: I0131 07:52:48.956078 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d6849a0-3041-4a33-8ed4-3c2e1cfe910b-scripts\") pod \"horizon-66cf98846c-msp76\" (UID: \"6d6849a0-3041-4a33-8ed4-3c2e1cfe910b\") " pod="openstack/horizon-66cf98846c-msp76" Jan 31 07:52:48 crc kubenswrapper[4826]: I0131 07:52:48.956131 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c464p\" (UniqueName: \"kubernetes.io/projected/6d6849a0-3041-4a33-8ed4-3c2e1cfe910b-kube-api-access-c464p\") pod \"horizon-66cf98846c-msp76\" (UID: \"6d6849a0-3041-4a33-8ed4-3c2e1cfe910b\") " pod="openstack/horizon-66cf98846c-msp76" Jan 31 07:52:48 crc kubenswrapper[4826]: I0131 07:52:48.956169 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6d6849a0-3041-4a33-8ed4-3c2e1cfe910b-config-data\") pod \"horizon-66cf98846c-msp76\" (UID: \"6d6849a0-3041-4a33-8ed4-3c2e1cfe910b\") " pod="openstack/horizon-66cf98846c-msp76" Jan 31 07:52:48 crc kubenswrapper[4826]: I0131 07:52:48.956220 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6d6849a0-3041-4a33-8ed4-3c2e1cfe910b-horizon-secret-key\") pod \"horizon-66cf98846c-msp76\" (UID: \"6d6849a0-3041-4a33-8ed4-3c2e1cfe910b\") " pod="openstack/horizon-66cf98846c-msp76" Jan 31 07:52:48 crc kubenswrapper[4826]: I0131 07:52:48.956288 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d6849a0-3041-4a33-8ed4-3c2e1cfe910b-logs\") pod \"horizon-66cf98846c-msp76\" (UID: \"6d6849a0-3041-4a33-8ed4-3c2e1cfe910b\") " pod="openstack/horizon-66cf98846c-msp76" Jan 31 07:52:48 crc kubenswrapper[4826]: I0131 07:52:48.957285 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:52:48 crc kubenswrapper[4826]: I0131 07:52:48.979098 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66fbd85b65-h2ghp" Jan 31 07:52:49 crc kubenswrapper[4826]: I0131 07:52:49.061036 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6d6849a0-3041-4a33-8ed4-3c2e1cfe910b-horizon-secret-key\") pod \"horizon-66cf98846c-msp76\" (UID: \"6d6849a0-3041-4a33-8ed4-3c2e1cfe910b\") " pod="openstack/horizon-66cf98846c-msp76" Jan 31 07:52:49 crc kubenswrapper[4826]: I0131 07:52:49.061161 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d6849a0-3041-4a33-8ed4-3c2e1cfe910b-logs\") pod \"horizon-66cf98846c-msp76\" (UID: \"6d6849a0-3041-4a33-8ed4-3c2e1cfe910b\") " pod="openstack/horizon-66cf98846c-msp76" Jan 31 07:52:49 crc kubenswrapper[4826]: I0131 07:52:49.061194 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d6849a0-3041-4a33-8ed4-3c2e1cfe910b-scripts\") pod \"horizon-66cf98846c-msp76\" (UID: \"6d6849a0-3041-4a33-8ed4-3c2e1cfe910b\") " pod="openstack/horizon-66cf98846c-msp76" Jan 31 07:52:49 crc kubenswrapper[4826]: I0131 07:52:49.061239 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c464p\" (UniqueName: \"kubernetes.io/projected/6d6849a0-3041-4a33-8ed4-3c2e1cfe910b-kube-api-access-c464p\") pod \"horizon-66cf98846c-msp76\" (UID: \"6d6849a0-3041-4a33-8ed4-3c2e1cfe910b\") " pod="openstack/horizon-66cf98846c-msp76" Jan 31 07:52:49 crc kubenswrapper[4826]: I0131 07:52:49.061289 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6d6849a0-3041-4a33-8ed4-3c2e1cfe910b-config-data\") pod \"horizon-66cf98846c-msp76\" (UID: \"6d6849a0-3041-4a33-8ed4-3c2e1cfe910b\") " pod="openstack/horizon-66cf98846c-msp76" Jan 31 07:52:49 crc kubenswrapper[4826]: I0131 07:52:49.063219 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6d6849a0-3041-4a33-8ed4-3c2e1cfe910b-config-data\") pod \"horizon-66cf98846c-msp76\" (UID: \"6d6849a0-3041-4a33-8ed4-3c2e1cfe910b\") " pod="openstack/horizon-66cf98846c-msp76" Jan 31 07:52:49 crc kubenswrapper[4826]: I0131 07:52:49.063312 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d6849a0-3041-4a33-8ed4-3c2e1cfe910b-logs\") pod \"horizon-66cf98846c-msp76\" (UID: \"6d6849a0-3041-4a33-8ed4-3c2e1cfe910b\") " pod="openstack/horizon-66cf98846c-msp76" Jan 31 07:52:49 crc kubenswrapper[4826]: I0131 07:52:49.065745 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d6849a0-3041-4a33-8ed4-3c2e1cfe910b-scripts\") pod \"horizon-66cf98846c-msp76\" (UID: \"6d6849a0-3041-4a33-8ed4-3c2e1cfe910b\") " pod="openstack/horizon-66cf98846c-msp76" Jan 31 07:52:49 crc kubenswrapper[4826]: I0131 07:52:49.131234 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c464p\" (UniqueName: \"kubernetes.io/projected/6d6849a0-3041-4a33-8ed4-3c2e1cfe910b-kube-api-access-c464p\") pod \"horizon-66cf98846c-msp76\" (UID: \"6d6849a0-3041-4a33-8ed4-3c2e1cfe910b\") " pod="openstack/horizon-66cf98846c-msp76" Jan 31 07:52:49 crc kubenswrapper[4826]: I0131 07:52:49.131344 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6d6849a0-3041-4a33-8ed4-3c2e1cfe910b-horizon-secret-key\") pod \"horizon-66cf98846c-msp76\" (UID: \"6d6849a0-3041-4a33-8ed4-3c2e1cfe910b\") " pod="openstack/horizon-66cf98846c-msp76" Jan 31 07:52:49 crc kubenswrapper[4826]: I0131 07:52:49.162611 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df631416-65f7-4eb7-b482-031091b9d3bc-dns-svc\") pod \"df631416-65f7-4eb7-b482-031091b9d3bc\" (UID: \"df631416-65f7-4eb7-b482-031091b9d3bc\") " Jan 31 07:52:49 crc kubenswrapper[4826]: I0131 07:52:49.162655 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df631416-65f7-4eb7-b482-031091b9d3bc-ovsdbserver-sb\") pod \"df631416-65f7-4eb7-b482-031091b9d3bc\" (UID: \"df631416-65f7-4eb7-b482-031091b9d3bc\") " Jan 31 07:52:49 crc kubenswrapper[4826]: I0131 07:52:49.162699 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df631416-65f7-4eb7-b482-031091b9d3bc-ovsdbserver-nb\") pod \"df631416-65f7-4eb7-b482-031091b9d3bc\" (UID: \"df631416-65f7-4eb7-b482-031091b9d3bc\") " Jan 31 07:52:49 crc kubenswrapper[4826]: I0131 07:52:49.162846 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvmv6\" (UniqueName: \"kubernetes.io/projected/df631416-65f7-4eb7-b482-031091b9d3bc-kube-api-access-pvmv6\") pod \"df631416-65f7-4eb7-b482-031091b9d3bc\" (UID: \"df631416-65f7-4eb7-b482-031091b9d3bc\") " Jan 31 07:52:49 crc kubenswrapper[4826]: I0131 07:52:49.162906 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df631416-65f7-4eb7-b482-031091b9d3bc-config\") pod \"df631416-65f7-4eb7-b482-031091b9d3bc\" (UID: \"df631416-65f7-4eb7-b482-031091b9d3bc\") " Jan 31 07:52:49 crc kubenswrapper[4826]: I0131 07:52:49.167888 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df631416-65f7-4eb7-b482-031091b9d3bc-kube-api-access-pvmv6" (OuterVolumeSpecName: "kube-api-access-pvmv6") pod "df631416-65f7-4eb7-b482-031091b9d3bc" (UID: "df631416-65f7-4eb7-b482-031091b9d3bc"). InnerVolumeSpecName "kube-api-access-pvmv6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:52:49 crc kubenswrapper[4826]: I0131 07:52:49.198346 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df631416-65f7-4eb7-b482-031091b9d3bc-config" (OuterVolumeSpecName: "config") pod "df631416-65f7-4eb7-b482-031091b9d3bc" (UID: "df631416-65f7-4eb7-b482-031091b9d3bc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:52:49 crc kubenswrapper[4826]: I0131 07:52:49.200472 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df631416-65f7-4eb7-b482-031091b9d3bc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "df631416-65f7-4eb7-b482-031091b9d3bc" (UID: "df631416-65f7-4eb7-b482-031091b9d3bc"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:52:49 crc kubenswrapper[4826]: I0131 07:52:49.233367 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df631416-65f7-4eb7-b482-031091b9d3bc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "df631416-65f7-4eb7-b482-031091b9d3bc" (UID: "df631416-65f7-4eb7-b482-031091b9d3bc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:52:49 crc kubenswrapper[4826]: I0131 07:52:49.234883 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df631416-65f7-4eb7-b482-031091b9d3bc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "df631416-65f7-4eb7-b482-031091b9d3bc" (UID: "df631416-65f7-4eb7-b482-031091b9d3bc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:52:49 crc kubenswrapper[4826]: I0131 07:52:49.264990 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvmv6\" (UniqueName: \"kubernetes.io/projected/df631416-65f7-4eb7-b482-031091b9d3bc-kube-api-access-pvmv6\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:49 crc kubenswrapper[4826]: I0131 07:52:49.265021 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df631416-65f7-4eb7-b482-031091b9d3bc-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:49 crc kubenswrapper[4826]: I0131 07:52:49.265032 4826 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df631416-65f7-4eb7-b482-031091b9d3bc-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:49 crc kubenswrapper[4826]: I0131 07:52:49.265044 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df631416-65f7-4eb7-b482-031091b9d3bc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:49 crc kubenswrapper[4826]: I0131 07:52:49.265052 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df631416-65f7-4eb7-b482-031091b9d3bc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 07:52:49 crc kubenswrapper[4826]: I0131 07:52:49.276587 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66cf98846c-msp76" Jan 31 07:52:49 crc kubenswrapper[4826]: I0131 07:52:49.379924 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66fbd85b65-h2ghp" event={"ID":"df631416-65f7-4eb7-b482-031091b9d3bc","Type":"ContainerDied","Data":"3cd1b67ab69535bd1fa2b02be9794b46a75934415d6dffe1705b33b5a1bf0a02"} Jan 31 07:52:49 crc kubenswrapper[4826]: I0131 07:52:49.379948 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66fbd85b65-h2ghp" Jan 31 07:52:49 crc kubenswrapper[4826]: I0131 07:52:49.379999 4826 scope.go:117] "RemoveContainer" containerID="d4a9c8b16e4d142fa7e39d3ffcc41947367e855ca5044570811654d007007d1c" Jan 31 07:52:49 crc kubenswrapper[4826]: I0131 07:52:49.391784 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-zfz9q" event={"ID":"08d490ca-1c3d-4823-8d2f-b4e2fca83778","Type":"ContainerStarted","Data":"6b98c2944e3fdc39da50df7f79ca2ff48ee1e3069a0e2528c32706cf8f89b727"} Jan 31 07:52:49 crc kubenswrapper[4826]: I0131 07:52:49.391826 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-zfz9q" event={"ID":"08d490ca-1c3d-4823-8d2f-b4e2fca83778","Type":"ContainerStarted","Data":"4227c0d98d14e08af73c8f968f16106244dbf9055abf03406d9558dce2bf726a"} Jan 31 07:52:49 crc kubenswrapper[4826]: I0131 07:52:49.395941 4826 generic.go:334] "Generic (PLEG): container finished" podID="7bb7d47c-ac09-4830-9cfe-d7042b2a5971" containerID="265ddb6c06c8d4329e071fe2bdd6537d27f52edd6b6f51a09ea0e1790c47e5e1" exitCode=0 Jan 31 07:52:49 crc kubenswrapper[4826]: I0131 07:52:49.396232 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-kj4lk" event={"ID":"7bb7d47c-ac09-4830-9cfe-d7042b2a5971","Type":"ContainerDied","Data":"265ddb6c06c8d4329e071fe2bdd6537d27f52edd6b6f51a09ea0e1790c47e5e1"} Jan 31 07:52:49 crc kubenswrapper[4826]: I0131 07:52:49.397745 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67795cd9-j7zt4" Jan 31 07:52:49 crc kubenswrapper[4826]: I0131 07:52:49.418682 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-zfz9q" podStartSLOduration=2.418663383 podStartE2EDuration="2.418663383s" podCreationTimestamp="2026-01-31 07:52:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:52:49.416071179 +0000 UTC m=+1001.269957549" watchObservedRunningTime="2026-01-31 07:52:49.418663383 +0000 UTC m=+1001.272549732" Jan 31 07:52:49 crc kubenswrapper[4826]: I0131 07:52:49.530916 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66fbd85b65-h2ghp"] Jan 31 07:52:49 crc kubenswrapper[4826]: I0131 07:52:49.538237 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-66fbd85b65-h2ghp"] Jan 31 07:52:49 crc kubenswrapper[4826]: I0131 07:52:49.558013 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-j7zt4"] Jan 31 07:52:49 crc kubenswrapper[4826]: I0131 07:52:49.568230 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-j7zt4"] Jan 31 07:52:49 crc kubenswrapper[4826]: I0131 07:52:49.791891 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-66cf98846c-msp76"] Jan 31 07:52:49 crc kubenswrapper[4826]: W0131 07:52:49.800947 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d6849a0_3041_4a33_8ed4_3c2e1cfe910b.slice/crio-1daad003c830381b9a3273e8be52b8ca4424a4a2c2b2ef31fcbf974b45469f31 WatchSource:0}: Error finding container 1daad003c830381b9a3273e8be52b8ca4424a4a2c2b2ef31fcbf974b45469f31: Status 404 returned error can't find the container with id 1daad003c830381b9a3273e8be52b8ca4424a4a2c2b2ef31fcbf974b45469f31 Jan 31 07:52:50 crc 
kubenswrapper[4826]: I0131 07:52:50.408170 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66cf98846c-msp76" event={"ID":"6d6849a0-3041-4a33-8ed4-3c2e1cfe910b","Type":"ContainerStarted","Data":"1daad003c830381b9a3273e8be52b8ca4424a4a2c2b2ef31fcbf974b45469f31"} Jan 31 07:52:50 crc kubenswrapper[4826]: I0131 07:52:50.410789 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-kj4lk" event={"ID":"7bb7d47c-ac09-4830-9cfe-d7042b2a5971","Type":"ContainerStarted","Data":"43e9078d02a053ded7991e041ff4a9e5716edf6586f9e136da7e0989f2369d97"} Jan 31 07:52:50 crc kubenswrapper[4826]: I0131 07:52:50.411013 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b6dbdb6f5-kj4lk" Jan 31 07:52:50 crc kubenswrapper[4826]: I0131 07:52:50.441935 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b6dbdb6f5-kj4lk" podStartSLOduration=3.441915773 podStartE2EDuration="3.441915773s" podCreationTimestamp="2026-01-31 07:52:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:52:50.434657198 +0000 UTC m=+1002.288543577" watchObservedRunningTime="2026-01-31 07:52:50.441915773 +0000 UTC m=+1002.295802122" Jan 31 07:52:50 crc kubenswrapper[4826]: I0131 07:52:50.836287 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df631416-65f7-4eb7-b482-031091b9d3bc" path="/var/lib/kubelet/pods/df631416-65f7-4eb7-b482-031091b9d3bc/volumes" Jan 31 07:52:50 crc kubenswrapper[4826]: I0131 07:52:50.836905 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e67f314d-5a50-4fb7-b148-fedd4dc08890" path="/var/lib/kubelet/pods/e67f314d-5a50-4fb7-b148-fedd4dc08890/volumes" Jan 31 07:52:57 crc kubenswrapper[4826]: I0131 07:52:57.376781 4826 patch_prober.go:28] interesting pod/machine-config-daemon-8v6ng container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 07:52:57 crc kubenswrapper[4826]: I0131 07:52:57.377374 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 07:52:57 crc kubenswrapper[4826]: I0131 07:52:57.377418 4826 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" Jan 31 07:52:57 crc kubenswrapper[4826]: I0131 07:52:57.378104 4826 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c93a1dd8075ab41380245ff46508ce96eef07210ffe502888ff235cf0e5e7fc4"} pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 07:52:57 crc kubenswrapper[4826]: I0131 07:52:57.378150 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" 
containerID="cri-o://c93a1dd8075ab41380245ff46508ce96eef07210ffe502888ff235cf0e5e7fc4" gracePeriod=600 Jan 31 07:52:57 crc kubenswrapper[4826]: I0131 07:52:57.505603 4826 generic.go:334] "Generic (PLEG): container finished" podID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerID="c93a1dd8075ab41380245ff46508ce96eef07210ffe502888ff235cf0e5e7fc4" exitCode=0 Jan 31 07:52:57 crc kubenswrapper[4826]: I0131 07:52:57.505653 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" event={"ID":"ed10f53b-565a-4d14-a1d8-feabc15f08ea","Type":"ContainerDied","Data":"c93a1dd8075ab41380245ff46508ce96eef07210ffe502888ff235cf0e5e7fc4"} Jan 31 07:52:57 crc kubenswrapper[4826]: I0131 07:52:57.505691 4826 scope.go:117] "RemoveContainer" containerID="9f90fb78fab9497ee3e1bd264894acb4bbed634bf52113ea4ba5640cbade7719" Jan 31 07:52:57 crc kubenswrapper[4826]: I0131 07:52:57.651403 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b6dbdb6f5-kj4lk" Jan 31 07:52:57 crc kubenswrapper[4826]: I0131 07:52:57.748227 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-48nx2"] Jan 31 07:52:57 crc kubenswrapper[4826]: I0131 07:52:57.748474 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-48nx2" podUID="f4a66b21-7551-4e7f-87dc-81bbb08f3c3e" containerName="dnsmasq-dns" containerID="cri-o://49c43005e37ee6e1066f09aa88a17e23bf4055b3c6cab8884b1861455bd0db7e" gracePeriod=10 Jan 31 07:52:58 crc kubenswrapper[4826]: I0131 07:52:58.514573 4826 generic.go:334] "Generic (PLEG): container finished" podID="66bcca9e-8fca-48e3-83e3-0338e35a1f8b" containerID="a802aac58031d43e1c9314c1a3f7d5d12fce71b3509027cef511a6a34e13c71b" exitCode=0 Jan 31 07:52:58 crc kubenswrapper[4826]: I0131 07:52:58.514757 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-9vwb9" event={"ID":"66bcca9e-8fca-48e3-83e3-0338e35a1f8b","Type":"ContainerDied","Data":"a802aac58031d43e1c9314c1a3f7d5d12fce71b3509027cef511a6a34e13c71b"} Jan 31 07:52:58 crc kubenswrapper[4826]: I0131 07:52:58.518402 4826 generic.go:334] "Generic (PLEG): container finished" podID="f4a66b21-7551-4e7f-87dc-81bbb08f3c3e" containerID="49c43005e37ee6e1066f09aa88a17e23bf4055b3c6cab8884b1861455bd0db7e" exitCode=0 Jan 31 07:52:58 crc kubenswrapper[4826]: I0131 07:52:58.518495 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-48nx2" event={"ID":"f4a66b21-7551-4e7f-87dc-81bbb08f3c3e","Type":"ContainerDied","Data":"49c43005e37ee6e1066f09aa88a17e23bf4055b3c6cab8884b1861455bd0db7e"} Jan 31 07:52:59 crc kubenswrapper[4826]: I0131 07:52:59.676363 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-968cdd5c9-h62k6"] Jan 31 07:52:59 crc kubenswrapper[4826]: I0131 07:52:59.723895 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-65776456b6-g6gkf"] Jan 31 07:52:59 crc kubenswrapper[4826]: E0131 07:52:59.724350 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df631416-65f7-4eb7-b482-031091b9d3bc" containerName="init" Jan 31 07:52:59 crc kubenswrapper[4826]: I0131 07:52:59.724371 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="df631416-65f7-4eb7-b482-031091b9d3bc" containerName="init" Jan 31 07:52:59 crc kubenswrapper[4826]: I0131 07:52:59.724519 4826 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="df631416-65f7-4eb7-b482-031091b9d3bc" containerName="init" Jan 31 07:52:59 crc kubenswrapper[4826]: I0131 07:52:59.725663 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-65776456b6-g6gkf" Jan 31 07:52:59 crc kubenswrapper[4826]: I0131 07:52:59.734412 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 31 07:52:59 crc kubenswrapper[4826]: I0131 07:52:59.747902 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-65776456b6-g6gkf"] Jan 31 07:52:59 crc kubenswrapper[4826]: I0131 07:52:59.775915 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-66cf98846c-msp76"] Jan 31 07:52:59 crc kubenswrapper[4826]: I0131 07:52:59.795406 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-97cdc8cb-tdpkc"] Jan 31 07:52:59 crc kubenswrapper[4826]: I0131 07:52:59.796920 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-97cdc8cb-tdpkc" Jan 31 07:52:59 crc kubenswrapper[4826]: I0131 07:52:59.809217 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-97cdc8cb-tdpkc"] Jan 31 07:52:59 crc kubenswrapper[4826]: I0131 07:52:59.812395 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4282d7e-76a0-493b-b2c6-0d954d4bed7a-logs\") pod \"horizon-65776456b6-g6gkf\" (UID: \"e4282d7e-76a0-493b-b2c6-0d954d4bed7a\") " pod="openstack/horizon-65776456b6-g6gkf" Jan 31 07:52:59 crc kubenswrapper[4826]: I0131 07:52:59.812443 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qwrd\" (UniqueName: \"kubernetes.io/projected/e4282d7e-76a0-493b-b2c6-0d954d4bed7a-kube-api-access-5qwrd\") pod \"horizon-65776456b6-g6gkf\" (UID: \"e4282d7e-76a0-493b-b2c6-0d954d4bed7a\") " pod="openstack/horizon-65776456b6-g6gkf" Jan 31 07:52:59 crc kubenswrapper[4826]: I0131 07:52:59.812466 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4282d7e-76a0-493b-b2c6-0d954d4bed7a-horizon-tls-certs\") pod \"horizon-65776456b6-g6gkf\" (UID: \"e4282d7e-76a0-493b-b2c6-0d954d4bed7a\") " pod="openstack/horizon-65776456b6-g6gkf" Jan 31 07:52:59 crc kubenswrapper[4826]: I0131 07:52:59.812502 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0626292-98a9-4e1f-8359-f734ed8a3118-horizon-tls-certs\") pod \"horizon-97cdc8cb-tdpkc\" (UID: \"e0626292-98a9-4e1f-8359-f734ed8a3118\") " pod="openstack/horizon-97cdc8cb-tdpkc" Jan 31 07:52:59 crc kubenswrapper[4826]: I0131 07:52:59.812525 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e4282d7e-76a0-493b-b2c6-0d954d4bed7a-horizon-secret-key\") pod \"horizon-65776456b6-g6gkf\" (UID: \"e4282d7e-76a0-493b-b2c6-0d954d4bed7a\") " pod="openstack/horizon-65776456b6-g6gkf" Jan 31 07:52:59 crc kubenswrapper[4826]: I0131 07:52:59.812542 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0626292-98a9-4e1f-8359-f734ed8a3118-combined-ca-bundle\") pod \"horizon-97cdc8cb-tdpkc\" (UID: 
\"e0626292-98a9-4e1f-8359-f734ed8a3118\") " pod="openstack/horizon-97cdc8cb-tdpkc" Jan 31 07:52:59 crc kubenswrapper[4826]: I0131 07:52:59.812594 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e0626292-98a9-4e1f-8359-f734ed8a3118-config-data\") pod \"horizon-97cdc8cb-tdpkc\" (UID: \"e0626292-98a9-4e1f-8359-f734ed8a3118\") " pod="openstack/horizon-97cdc8cb-tdpkc" Jan 31 07:52:59 crc kubenswrapper[4826]: I0131 07:52:59.812613 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e4282d7e-76a0-493b-b2c6-0d954d4bed7a-config-data\") pod \"horizon-65776456b6-g6gkf\" (UID: \"e4282d7e-76a0-493b-b2c6-0d954d4bed7a\") " pod="openstack/horizon-65776456b6-g6gkf" Jan 31 07:52:59 crc kubenswrapper[4826]: I0131 07:52:59.812643 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4282d7e-76a0-493b-b2c6-0d954d4bed7a-combined-ca-bundle\") pod \"horizon-65776456b6-g6gkf\" (UID: \"e4282d7e-76a0-493b-b2c6-0d954d4bed7a\") " pod="openstack/horizon-65776456b6-g6gkf" Jan 31 07:52:59 crc kubenswrapper[4826]: I0131 07:52:59.812660 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e4282d7e-76a0-493b-b2c6-0d954d4bed7a-scripts\") pod \"horizon-65776456b6-g6gkf\" (UID: \"e4282d7e-76a0-493b-b2c6-0d954d4bed7a\") " pod="openstack/horizon-65776456b6-g6gkf" Jan 31 07:52:59 crc kubenswrapper[4826]: I0131 07:52:59.812699 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e0626292-98a9-4e1f-8359-f734ed8a3118-horizon-secret-key\") pod \"horizon-97cdc8cb-tdpkc\" (UID: \"e0626292-98a9-4e1f-8359-f734ed8a3118\") " pod="openstack/horizon-97cdc8cb-tdpkc" Jan 31 07:52:59 crc kubenswrapper[4826]: I0131 07:52:59.812721 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2f9j\" (UniqueName: \"kubernetes.io/projected/e0626292-98a9-4e1f-8359-f734ed8a3118-kube-api-access-f2f9j\") pod \"horizon-97cdc8cb-tdpkc\" (UID: \"e0626292-98a9-4e1f-8359-f734ed8a3118\") " pod="openstack/horizon-97cdc8cb-tdpkc" Jan 31 07:52:59 crc kubenswrapper[4826]: I0131 07:52:59.812753 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e0626292-98a9-4e1f-8359-f734ed8a3118-scripts\") pod \"horizon-97cdc8cb-tdpkc\" (UID: \"e0626292-98a9-4e1f-8359-f734ed8a3118\") " pod="openstack/horizon-97cdc8cb-tdpkc" Jan 31 07:52:59 crc kubenswrapper[4826]: I0131 07:52:59.812774 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0626292-98a9-4e1f-8359-f734ed8a3118-logs\") pod \"horizon-97cdc8cb-tdpkc\" (UID: \"e0626292-98a9-4e1f-8359-f734ed8a3118\") " pod="openstack/horizon-97cdc8cb-tdpkc" Jan 31 07:52:59 crc kubenswrapper[4826]: I0131 07:52:59.914795 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e0626292-98a9-4e1f-8359-f734ed8a3118-config-data\") pod \"horizon-97cdc8cb-tdpkc\" (UID: 
\"e0626292-98a9-4e1f-8359-f734ed8a3118\") " pod="openstack/horizon-97cdc8cb-tdpkc" Jan 31 07:52:59 crc kubenswrapper[4826]: I0131 07:52:59.914843 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e4282d7e-76a0-493b-b2c6-0d954d4bed7a-config-data\") pod \"horizon-65776456b6-g6gkf\" (UID: \"e4282d7e-76a0-493b-b2c6-0d954d4bed7a\") " pod="openstack/horizon-65776456b6-g6gkf" Jan 31 07:52:59 crc kubenswrapper[4826]: I0131 07:52:59.914873 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4282d7e-76a0-493b-b2c6-0d954d4bed7a-combined-ca-bundle\") pod \"horizon-65776456b6-g6gkf\" (UID: \"e4282d7e-76a0-493b-b2c6-0d954d4bed7a\") " pod="openstack/horizon-65776456b6-g6gkf" Jan 31 07:52:59 crc kubenswrapper[4826]: I0131 07:52:59.914891 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e4282d7e-76a0-493b-b2c6-0d954d4bed7a-scripts\") pod \"horizon-65776456b6-g6gkf\" (UID: \"e4282d7e-76a0-493b-b2c6-0d954d4bed7a\") " pod="openstack/horizon-65776456b6-g6gkf" Jan 31 07:52:59 crc kubenswrapper[4826]: I0131 07:52:59.914925 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e0626292-98a9-4e1f-8359-f734ed8a3118-horizon-secret-key\") pod \"horizon-97cdc8cb-tdpkc\" (UID: \"e0626292-98a9-4e1f-8359-f734ed8a3118\") " pod="openstack/horizon-97cdc8cb-tdpkc" Jan 31 07:52:59 crc kubenswrapper[4826]: I0131 07:52:59.914945 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2f9j\" (UniqueName: \"kubernetes.io/projected/e0626292-98a9-4e1f-8359-f734ed8a3118-kube-api-access-f2f9j\") pod \"horizon-97cdc8cb-tdpkc\" (UID: \"e0626292-98a9-4e1f-8359-f734ed8a3118\") " pod="openstack/horizon-97cdc8cb-tdpkc" Jan 31 07:52:59 crc kubenswrapper[4826]: I0131 07:52:59.914985 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e0626292-98a9-4e1f-8359-f734ed8a3118-scripts\") pod \"horizon-97cdc8cb-tdpkc\" (UID: \"e0626292-98a9-4e1f-8359-f734ed8a3118\") " pod="openstack/horizon-97cdc8cb-tdpkc" Jan 31 07:52:59 crc kubenswrapper[4826]: I0131 07:52:59.914999 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0626292-98a9-4e1f-8359-f734ed8a3118-logs\") pod \"horizon-97cdc8cb-tdpkc\" (UID: \"e0626292-98a9-4e1f-8359-f734ed8a3118\") " pod="openstack/horizon-97cdc8cb-tdpkc" Jan 31 07:52:59 crc kubenswrapper[4826]: I0131 07:52:59.915030 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4282d7e-76a0-493b-b2c6-0d954d4bed7a-logs\") pod \"horizon-65776456b6-g6gkf\" (UID: \"e4282d7e-76a0-493b-b2c6-0d954d4bed7a\") " pod="openstack/horizon-65776456b6-g6gkf" Jan 31 07:52:59 crc kubenswrapper[4826]: I0131 07:52:59.915045 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qwrd\" (UniqueName: \"kubernetes.io/projected/e4282d7e-76a0-493b-b2c6-0d954d4bed7a-kube-api-access-5qwrd\") pod \"horizon-65776456b6-g6gkf\" (UID: \"e4282d7e-76a0-493b-b2c6-0d954d4bed7a\") " pod="openstack/horizon-65776456b6-g6gkf" Jan 31 07:52:59 crc kubenswrapper[4826]: I0131 07:52:59.915062 4826 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4282d7e-76a0-493b-b2c6-0d954d4bed7a-horizon-tls-certs\") pod \"horizon-65776456b6-g6gkf\" (UID: \"e4282d7e-76a0-493b-b2c6-0d954d4bed7a\") " pod="openstack/horizon-65776456b6-g6gkf" Jan 31 07:52:59 crc kubenswrapper[4826]: I0131 07:52:59.915092 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0626292-98a9-4e1f-8359-f734ed8a3118-horizon-tls-certs\") pod \"horizon-97cdc8cb-tdpkc\" (UID: \"e0626292-98a9-4e1f-8359-f734ed8a3118\") " pod="openstack/horizon-97cdc8cb-tdpkc" Jan 31 07:52:59 crc kubenswrapper[4826]: I0131 07:52:59.915116 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e4282d7e-76a0-493b-b2c6-0d954d4bed7a-horizon-secret-key\") pod \"horizon-65776456b6-g6gkf\" (UID: \"e4282d7e-76a0-493b-b2c6-0d954d4bed7a\") " pod="openstack/horizon-65776456b6-g6gkf" Jan 31 07:52:59 crc kubenswrapper[4826]: I0131 07:52:59.915130 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0626292-98a9-4e1f-8359-f734ed8a3118-combined-ca-bundle\") pod \"horizon-97cdc8cb-tdpkc\" (UID: \"e0626292-98a9-4e1f-8359-f734ed8a3118\") " pod="openstack/horizon-97cdc8cb-tdpkc" Jan 31 07:52:59 crc kubenswrapper[4826]: I0131 07:52:59.916999 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e0626292-98a9-4e1f-8359-f734ed8a3118-scripts\") pod \"horizon-97cdc8cb-tdpkc\" (UID: \"e0626292-98a9-4e1f-8359-f734ed8a3118\") " pod="openstack/horizon-97cdc8cb-tdpkc" Jan 31 07:52:59 crc kubenswrapper[4826]: I0131 07:52:59.918385 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e0626292-98a9-4e1f-8359-f734ed8a3118-config-data\") pod \"horizon-97cdc8cb-tdpkc\" (UID: \"e0626292-98a9-4e1f-8359-f734ed8a3118\") " pod="openstack/horizon-97cdc8cb-tdpkc" Jan 31 07:52:59 crc kubenswrapper[4826]: I0131 07:52:59.919661 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e4282d7e-76a0-493b-b2c6-0d954d4bed7a-config-data\") pod \"horizon-65776456b6-g6gkf\" (UID: \"e4282d7e-76a0-493b-b2c6-0d954d4bed7a\") " pod="openstack/horizon-65776456b6-g6gkf" Jan 31 07:52:59 crc kubenswrapper[4826]: I0131 07:52:59.923059 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4282d7e-76a0-493b-b2c6-0d954d4bed7a-combined-ca-bundle\") pod \"horizon-65776456b6-g6gkf\" (UID: \"e4282d7e-76a0-493b-b2c6-0d954d4bed7a\") " pod="openstack/horizon-65776456b6-g6gkf" Jan 31 07:52:59 crc kubenswrapper[4826]: I0131 07:52:59.923481 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e4282d7e-76a0-493b-b2c6-0d954d4bed7a-scripts\") pod \"horizon-65776456b6-g6gkf\" (UID: \"e4282d7e-76a0-493b-b2c6-0d954d4bed7a\") " pod="openstack/horizon-65776456b6-g6gkf" Jan 31 07:52:59 crc kubenswrapper[4826]: I0131 07:52:59.927881 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e0626292-98a9-4e1f-8359-f734ed8a3118-horizon-secret-key\") pod \"horizon-97cdc8cb-tdpkc\" (UID: 
\"e0626292-98a9-4e1f-8359-f734ed8a3118\") " pod="openstack/horizon-97cdc8cb-tdpkc" Jan 31 07:52:59 crc kubenswrapper[4826]: I0131 07:52:59.928590 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4282d7e-76a0-493b-b2c6-0d954d4bed7a-logs\") pod \"horizon-65776456b6-g6gkf\" (UID: \"e4282d7e-76a0-493b-b2c6-0d954d4bed7a\") " pod="openstack/horizon-65776456b6-g6gkf" Jan 31 07:52:59 crc kubenswrapper[4826]: I0131 07:52:59.929130 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0626292-98a9-4e1f-8359-f734ed8a3118-combined-ca-bundle\") pod \"horizon-97cdc8cb-tdpkc\" (UID: \"e0626292-98a9-4e1f-8359-f734ed8a3118\") " pod="openstack/horizon-97cdc8cb-tdpkc" Jan 31 07:52:59 crc kubenswrapper[4826]: I0131 07:52:59.930727 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0626292-98a9-4e1f-8359-f734ed8a3118-logs\") pod \"horizon-97cdc8cb-tdpkc\" (UID: \"e0626292-98a9-4e1f-8359-f734ed8a3118\") " pod="openstack/horizon-97cdc8cb-tdpkc" Jan 31 07:52:59 crc kubenswrapper[4826]: I0131 07:52:59.936269 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e4282d7e-76a0-493b-b2c6-0d954d4bed7a-horizon-secret-key\") pod \"horizon-65776456b6-g6gkf\" (UID: \"e4282d7e-76a0-493b-b2c6-0d954d4bed7a\") " pod="openstack/horizon-65776456b6-g6gkf" Jan 31 07:52:59 crc kubenswrapper[4826]: I0131 07:52:59.940283 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4282d7e-76a0-493b-b2c6-0d954d4bed7a-horizon-tls-certs\") pod \"horizon-65776456b6-g6gkf\" (UID: \"e4282d7e-76a0-493b-b2c6-0d954d4bed7a\") " pod="openstack/horizon-65776456b6-g6gkf" Jan 31 07:52:59 crc kubenswrapper[4826]: I0131 07:52:59.948475 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qwrd\" (UniqueName: \"kubernetes.io/projected/e4282d7e-76a0-493b-b2c6-0d954d4bed7a-kube-api-access-5qwrd\") pod \"horizon-65776456b6-g6gkf\" (UID: \"e4282d7e-76a0-493b-b2c6-0d954d4bed7a\") " pod="openstack/horizon-65776456b6-g6gkf" Jan 31 07:52:59 crc kubenswrapper[4826]: I0131 07:52:59.951121 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0626292-98a9-4e1f-8359-f734ed8a3118-horizon-tls-certs\") pod \"horizon-97cdc8cb-tdpkc\" (UID: \"e0626292-98a9-4e1f-8359-f734ed8a3118\") " pod="openstack/horizon-97cdc8cb-tdpkc" Jan 31 07:52:59 crc kubenswrapper[4826]: I0131 07:52:59.953508 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2f9j\" (UniqueName: \"kubernetes.io/projected/e0626292-98a9-4e1f-8359-f734ed8a3118-kube-api-access-f2f9j\") pod \"horizon-97cdc8cb-tdpkc\" (UID: \"e0626292-98a9-4e1f-8359-f734ed8a3118\") " pod="openstack/horizon-97cdc8cb-tdpkc" Jan 31 07:53:00 crc kubenswrapper[4826]: I0131 07:53:00.057745 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-65776456b6-g6gkf" Jan 31 07:53:00 crc kubenswrapper[4826]: I0131 07:53:00.134107 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-97cdc8cb-tdpkc" Jan 31 07:53:02 crc kubenswrapper[4826]: I0131 07:53:02.405320 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-48nx2" podUID="f4a66b21-7551-4e7f-87dc-81bbb08f3c3e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: connect: connection refused" Jan 31 07:53:02 crc kubenswrapper[4826]: E0131 07:53:02.667263 4826 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 31 07:53:02 crc kubenswrapper[4826]: E0131 07:53:02.667473 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n57ch95h5d4h68bh587h7h658h66bhcfh5ch597h86h5fdh5f7h679h696h57fh658h5fdhd5hdch87h5bfhbbh5cfh95h5ch54fhb4h6fhffh599q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c464p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-66cf98846c-msp76_openstack(6d6849a0-3041-4a33-8ed4-3c2e1cfe910b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 07:53:02 crc kubenswrapper[4826]: E0131 07:53:02.669747 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-66cf98846c-msp76" podUID="6d6849a0-3041-4a33-8ed4-3c2e1cfe910b" Jan 31 07:53:07 crc kubenswrapper[4826]: I0131 07:53:07.405802 4826 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/dnsmasq-dns-8554648995-48nx2" podUID="f4a66b21-7551-4e7f-87dc-81bbb08f3c3e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: connect: connection refused" Jan 31 07:53:12 crc kubenswrapper[4826]: I0131 07:53:12.405882 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-48nx2" podUID="f4a66b21-7551-4e7f-87dc-81bbb08f3c3e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: connect: connection refused" Jan 31 07:53:12 crc kubenswrapper[4826]: I0131 07:53:12.406851 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-48nx2" Jan 31 07:53:17 crc kubenswrapper[4826]: I0131 07:53:17.404603 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-48nx2" podUID="f4a66b21-7551-4e7f-87dc-81bbb08f3c3e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: connect: connection refused" Jan 31 07:53:17 crc kubenswrapper[4826]: I0131 07:53:17.767688 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66cf98846c-msp76" Jan 31 07:53:17 crc kubenswrapper[4826]: I0131 07:53:17.856555 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d6849a0-3041-4a33-8ed4-3c2e1cfe910b-logs\") pod \"6d6849a0-3041-4a33-8ed4-3c2e1cfe910b\" (UID: \"6d6849a0-3041-4a33-8ed4-3c2e1cfe910b\") " Jan 31 07:53:17 crc kubenswrapper[4826]: I0131 07:53:17.856611 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6d6849a0-3041-4a33-8ed4-3c2e1cfe910b-config-data\") pod \"6d6849a0-3041-4a33-8ed4-3c2e1cfe910b\" (UID: \"6d6849a0-3041-4a33-8ed4-3c2e1cfe910b\") " Jan 31 07:53:17 crc kubenswrapper[4826]: I0131 07:53:17.856634 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6d6849a0-3041-4a33-8ed4-3c2e1cfe910b-horizon-secret-key\") pod \"6d6849a0-3041-4a33-8ed4-3c2e1cfe910b\" (UID: \"6d6849a0-3041-4a33-8ed4-3c2e1cfe910b\") " Jan 31 07:53:17 crc kubenswrapper[4826]: I0131 07:53:17.856686 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d6849a0-3041-4a33-8ed4-3c2e1cfe910b-scripts\") pod \"6d6849a0-3041-4a33-8ed4-3c2e1cfe910b\" (UID: \"6d6849a0-3041-4a33-8ed4-3c2e1cfe910b\") " Jan 31 07:53:17 crc kubenswrapper[4826]: I0131 07:53:17.856730 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c464p\" (UniqueName: \"kubernetes.io/projected/6d6849a0-3041-4a33-8ed4-3c2e1cfe910b-kube-api-access-c464p\") pod \"6d6849a0-3041-4a33-8ed4-3c2e1cfe910b\" (UID: \"6d6849a0-3041-4a33-8ed4-3c2e1cfe910b\") " Jan 31 07:53:17 crc kubenswrapper[4826]: I0131 07:53:17.857415 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d6849a0-3041-4a33-8ed4-3c2e1cfe910b-scripts" (OuterVolumeSpecName: "scripts") pod "6d6849a0-3041-4a33-8ed4-3c2e1cfe910b" (UID: "6d6849a0-3041-4a33-8ed4-3c2e1cfe910b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:53:17 crc kubenswrapper[4826]: I0131 07:53:17.856941 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d6849a0-3041-4a33-8ed4-3c2e1cfe910b-logs" (OuterVolumeSpecName: "logs") pod "6d6849a0-3041-4a33-8ed4-3c2e1cfe910b" (UID: "6d6849a0-3041-4a33-8ed4-3c2e1cfe910b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:53:17 crc kubenswrapper[4826]: I0131 07:53:17.858092 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d6849a0-3041-4a33-8ed4-3c2e1cfe910b-config-data" (OuterVolumeSpecName: "config-data") pod "6d6849a0-3041-4a33-8ed4-3c2e1cfe910b" (UID: "6d6849a0-3041-4a33-8ed4-3c2e1cfe910b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:53:17 crc kubenswrapper[4826]: I0131 07:53:17.862264 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d6849a0-3041-4a33-8ed4-3c2e1cfe910b-kube-api-access-c464p" (OuterVolumeSpecName: "kube-api-access-c464p") pod "6d6849a0-3041-4a33-8ed4-3c2e1cfe910b" (UID: "6d6849a0-3041-4a33-8ed4-3c2e1cfe910b"). InnerVolumeSpecName "kube-api-access-c464p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:53:17 crc kubenswrapper[4826]: I0131 07:53:17.862535 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d6849a0-3041-4a33-8ed4-3c2e1cfe910b-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "6d6849a0-3041-4a33-8ed4-3c2e1cfe910b" (UID: "6d6849a0-3041-4a33-8ed4-3c2e1cfe910b"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:53:17 crc kubenswrapper[4826]: I0131 07:53:17.959234 4826 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d6849a0-3041-4a33-8ed4-3c2e1cfe910b-logs\") on node \"crc\" DevicePath \"\"" Jan 31 07:53:17 crc kubenswrapper[4826]: I0131 07:53:17.959281 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6d6849a0-3041-4a33-8ed4-3c2e1cfe910b-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:53:17 crc kubenswrapper[4826]: I0131 07:53:17.959293 4826 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6d6849a0-3041-4a33-8ed4-3c2e1cfe910b-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 31 07:53:17 crc kubenswrapper[4826]: I0131 07:53:17.959302 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d6849a0-3041-4a33-8ed4-3c2e1cfe910b-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:53:17 crc kubenswrapper[4826]: I0131 07:53:17.959313 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c464p\" (UniqueName: \"kubernetes.io/projected/6d6849a0-3041-4a33-8ed4-3c2e1cfe910b-kube-api-access-c464p\") on node \"crc\" DevicePath \"\"" Jan 31 07:53:18 crc kubenswrapper[4826]: E0131 07:53:18.300271 4826 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 31 07:53:18 crc kubenswrapper[4826]: E0131 07:53:18.300419 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dnldl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-s6dqh_openstack(d994c6b6-3dd2-4231-b80b-b83c88fa860f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 07:53:18 crc kubenswrapper[4826]: E0131 07:53:18.301574 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-s6dqh" podUID="d994c6b6-3dd2-4231-b80b-b83c88fa860f" Jan 31 07:53:18 crc kubenswrapper[4826]: E0131 07:53:18.357313 4826 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 31 07:53:18 crc kubenswrapper[4826]: E0131 07:53:18.357489 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5c8h5dchch9bh658hc8h5dch76h5bfhb4h88h68ch98h569h7dh588hb7h56dhf8h5cbhch84h9bh584h65hb7h9fh564h89h56dh557h5d9q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qjthn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-968cdd5c9-h62k6_openstack(e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 07:53:18 crc kubenswrapper[4826]: E0131 07:53:18.359914 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-968cdd5c9-h62k6" podUID="e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004" Jan 31 07:53:18 crc kubenswrapper[4826]: E0131 07:53:18.417806 4826 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 31 07:53:18 crc kubenswrapper[4826]: E0131 07:53:18.418004 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n549hbfhb4h5d5h68bh56fh589h675h597h5d5h5c9h594hddh6dh598h5f6h598h5ddh5c8h64h588h5b8h56fh574h8fhfbh646h64fhfch5c5h66fhd6q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hzzx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6d999596cf-44trz_openstack(aba8088a-cbd7-4797-aef1-cd84d8c28ff8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 07:53:18 crc kubenswrapper[4826]: E0131 07:53:18.420821 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-6d999596cf-44trz" podUID="aba8088a-cbd7-4797-aef1-cd84d8c28ff8" Jan 31 07:53:18 crc kubenswrapper[4826]: I0131 07:53:18.692651 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-66cf98846c-msp76" Jan 31 07:53:18 crc kubenswrapper[4826]: I0131 07:53:18.692662 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66cf98846c-msp76" event={"ID":"6d6849a0-3041-4a33-8ed4-3c2e1cfe910b","Type":"ContainerDied","Data":"1daad003c830381b9a3273e8be52b8ca4424a4a2c2b2ef31fcbf974b45469f31"} Jan 31 07:53:18 crc kubenswrapper[4826]: E0131 07:53:18.696828 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-s6dqh" podUID="d994c6b6-3dd2-4231-b80b-b83c88fa860f" Jan 31 07:53:18 crc kubenswrapper[4826]: I0131 07:53:18.818990 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-66cf98846c-msp76"] Jan 31 07:53:18 crc kubenswrapper[4826]: I0131 07:53:18.819383 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-66cf98846c-msp76"] Jan 31 07:53:20 crc kubenswrapper[4826]: I0131 07:53:20.821405 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d6849a0-3041-4a33-8ed4-3c2e1cfe910b" path="/var/lib/kubelet/pods/6d6849a0-3041-4a33-8ed4-3c2e1cfe910b/volumes" Jan 31 07:53:22 crc kubenswrapper[4826]: I0131 07:53:22.404683 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-48nx2" podUID="f4a66b21-7551-4e7f-87dc-81bbb08f3c3e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: connect: connection refused" Jan 31 07:53:27 crc kubenswrapper[4826]: E0131 07:53:27.096650 4826 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 31 07:53:27 crc kubenswrapper[4826]: E0131 07:53:27.097626 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-55cdz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-9tmn2_openstack(5f870e24-0e35-4ee6-805b-f81617554dc2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 07:53:27 crc kubenswrapper[4826]: E0131 07:53:27.098876 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-9tmn2" podUID="5f870e24-0e35-4ee6-805b-f81617554dc2" Jan 31 07:53:27 crc kubenswrapper[4826]: I0131 07:53:27.119914 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-968cdd5c9-h62k6" Jan 31 07:53:27 crc kubenswrapper[4826]: I0131 07:53:27.126941 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-9vwb9" Jan 31 07:53:27 crc kubenswrapper[4826]: I0131 07:53:27.215177 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004-scripts\") pod \"e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004\" (UID: \"e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004\") " Jan 31 07:53:27 crc kubenswrapper[4826]: I0131 07:53:27.215219 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004-horizon-secret-key\") pod \"e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004\" (UID: \"e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004\") " Jan 31 07:53:27 crc kubenswrapper[4826]: I0131 07:53:27.215266 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66bcca9e-8fca-48e3-83e3-0338e35a1f8b-config-data\") pod \"66bcca9e-8fca-48e3-83e3-0338e35a1f8b\" (UID: \"66bcca9e-8fca-48e3-83e3-0338e35a1f8b\") " Jan 31 07:53:27 crc kubenswrapper[4826]: I0131 07:53:27.215313 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004-config-data\") pod \"e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004\" (UID: \"e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004\") " Jan 31 07:53:27 crc kubenswrapper[4826]: I0131 07:53:27.215333 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vl5v9\" (UniqueName: \"kubernetes.io/projected/66bcca9e-8fca-48e3-83e3-0338e35a1f8b-kube-api-access-vl5v9\") pod \"66bcca9e-8fca-48e3-83e3-0338e35a1f8b\" (UID: \"66bcca9e-8fca-48e3-83e3-0338e35a1f8b\") " Jan 31 07:53:27 crc kubenswrapper[4826]: I0131 07:53:27.215357 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66bcca9e-8fca-48e3-83e3-0338e35a1f8b-combined-ca-bundle\") pod \"66bcca9e-8fca-48e3-83e3-0338e35a1f8b\" (UID: \"66bcca9e-8fca-48e3-83e3-0338e35a1f8b\") " Jan 31 07:53:27 crc kubenswrapper[4826]: I0131 07:53:27.215378 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/66bcca9e-8fca-48e3-83e3-0338e35a1f8b-fernet-keys\") pod \"66bcca9e-8fca-48e3-83e3-0338e35a1f8b\" (UID: \"66bcca9e-8fca-48e3-83e3-0338e35a1f8b\") " Jan 31 07:53:27 crc kubenswrapper[4826]: I0131 07:53:27.215399 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66bcca9e-8fca-48e3-83e3-0338e35a1f8b-scripts\") pod \"66bcca9e-8fca-48e3-83e3-0338e35a1f8b\" (UID: \"66bcca9e-8fca-48e3-83e3-0338e35a1f8b\") " Jan 31 07:53:27 crc kubenswrapper[4826]: I0131 07:53:27.215423 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004-logs\") pod \"e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004\" (UID: \"e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004\") " Jan 31 07:53:27 crc kubenswrapper[4826]: I0131 07:53:27.215448 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/66bcca9e-8fca-48e3-83e3-0338e35a1f8b-credential-keys\") pod \"66bcca9e-8fca-48e3-83e3-0338e35a1f8b\" (UID: \"66bcca9e-8fca-48e3-83e3-0338e35a1f8b\") " 
Jan 31 07:53:27 crc kubenswrapper[4826]: I0131 07:53:27.215485 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjthn\" (UniqueName: \"kubernetes.io/projected/e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004-kube-api-access-qjthn\") pod \"e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004\" (UID: \"e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004\") " Jan 31 07:53:27 crc kubenswrapper[4826]: I0131 07:53:27.215840 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004-scripts" (OuterVolumeSpecName: "scripts") pod "e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004" (UID: "e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:53:27 crc kubenswrapper[4826]: I0131 07:53:27.215936 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004-config-data" (OuterVolumeSpecName: "config-data") pod "e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004" (UID: "e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:53:27 crc kubenswrapper[4826]: I0131 07:53:27.216348 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004-logs" (OuterVolumeSpecName: "logs") pod "e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004" (UID: "e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:53:27 crc kubenswrapper[4826]: I0131 07:53:27.221204 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66bcca9e-8fca-48e3-83e3-0338e35a1f8b-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "66bcca9e-8fca-48e3-83e3-0338e35a1f8b" (UID: "66bcca9e-8fca-48e3-83e3-0338e35a1f8b"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:53:27 crc kubenswrapper[4826]: I0131 07:53:27.221767 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66bcca9e-8fca-48e3-83e3-0338e35a1f8b-kube-api-access-vl5v9" (OuterVolumeSpecName: "kube-api-access-vl5v9") pod "66bcca9e-8fca-48e3-83e3-0338e35a1f8b" (UID: "66bcca9e-8fca-48e3-83e3-0338e35a1f8b"). InnerVolumeSpecName "kube-api-access-vl5v9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:53:27 crc kubenswrapper[4826]: I0131 07:53:27.225184 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66bcca9e-8fca-48e3-83e3-0338e35a1f8b-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "66bcca9e-8fca-48e3-83e3-0338e35a1f8b" (UID: "66bcca9e-8fca-48e3-83e3-0338e35a1f8b"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:53:27 crc kubenswrapper[4826]: I0131 07:53:27.225877 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004-kube-api-access-qjthn" (OuterVolumeSpecName: "kube-api-access-qjthn") pod "e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004" (UID: "e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004"). InnerVolumeSpecName "kube-api-access-qjthn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:53:27 crc kubenswrapper[4826]: I0131 07:53:27.226391 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004" (UID: "e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:53:27 crc kubenswrapper[4826]: I0131 07:53:27.226418 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66bcca9e-8fca-48e3-83e3-0338e35a1f8b-scripts" (OuterVolumeSpecName: "scripts") pod "66bcca9e-8fca-48e3-83e3-0338e35a1f8b" (UID: "66bcca9e-8fca-48e3-83e3-0338e35a1f8b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:53:27 crc kubenswrapper[4826]: I0131 07:53:27.245214 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66bcca9e-8fca-48e3-83e3-0338e35a1f8b-config-data" (OuterVolumeSpecName: "config-data") pod "66bcca9e-8fca-48e3-83e3-0338e35a1f8b" (UID: "66bcca9e-8fca-48e3-83e3-0338e35a1f8b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:53:27 crc kubenswrapper[4826]: I0131 07:53:27.249245 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66bcca9e-8fca-48e3-83e3-0338e35a1f8b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "66bcca9e-8fca-48e3-83e3-0338e35a1f8b" (UID: "66bcca9e-8fca-48e3-83e3-0338e35a1f8b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:53:27 crc kubenswrapper[4826]: I0131 07:53:27.317561 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:53:27 crc kubenswrapper[4826]: I0131 07:53:27.317602 4826 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 31 07:53:27 crc kubenswrapper[4826]: I0131 07:53:27.317616 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66bcca9e-8fca-48e3-83e3-0338e35a1f8b-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:53:27 crc kubenswrapper[4826]: I0131 07:53:27.317624 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:53:27 crc kubenswrapper[4826]: I0131 07:53:27.317634 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vl5v9\" (UniqueName: \"kubernetes.io/projected/66bcca9e-8fca-48e3-83e3-0338e35a1f8b-kube-api-access-vl5v9\") on node \"crc\" DevicePath \"\"" Jan 31 07:53:27 crc kubenswrapper[4826]: I0131 07:53:27.317643 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66bcca9e-8fca-48e3-83e3-0338e35a1f8b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:53:27 crc kubenswrapper[4826]: I0131 07:53:27.317651 4826 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/66bcca9e-8fca-48e3-83e3-0338e35a1f8b-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 31 07:53:27 crc kubenswrapper[4826]: I0131 07:53:27.317658 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66bcca9e-8fca-48e3-83e3-0338e35a1f8b-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:53:27 crc kubenswrapper[4826]: I0131 07:53:27.317666 4826 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004-logs\") on node \"crc\" DevicePath \"\"" Jan 31 07:53:27 crc kubenswrapper[4826]: I0131 07:53:27.317675 4826 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/66bcca9e-8fca-48e3-83e3-0338e35a1f8b-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 31 07:53:27 crc kubenswrapper[4826]: I0131 07:53:27.317683 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjthn\" (UniqueName: \"kubernetes.io/projected/e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004-kube-api-access-qjthn\") on node \"crc\" DevicePath \"\"" Jan 31 07:53:27 crc kubenswrapper[4826]: I0131 07:53:27.404870 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-48nx2" podUID="f4a66b21-7551-4e7f-87dc-81bbb08f3c3e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: connect: connection refused" Jan 31 07:53:27 crc kubenswrapper[4826]: I0131 07:53:27.770491 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-9vwb9" event={"ID":"66bcca9e-8fca-48e3-83e3-0338e35a1f8b","Type":"ContainerDied","Data":"afd3b6768d6bbbda2d1f8135cb46d3a089b710a9e0852ae283d31031a1dd0ed9"} Jan 31 07:53:27 crc kubenswrapper[4826]: I0131 07:53:27.770548 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afd3b6768d6bbbda2d1f8135cb46d3a089b710a9e0852ae283d31031a1dd0ed9" Jan 31 07:53:27 crc kubenswrapper[4826]: I0131 07:53:27.770517 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-9vwb9" Jan 31 07:53:27 crc kubenswrapper[4826]: I0131 07:53:27.772474 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-968cdd5c9-h62k6" event={"ID":"e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004","Type":"ContainerDied","Data":"12a8579deebe89c7002b14bac4ca9d126487eead879b597d7545e60a0ee43960"} Jan 31 07:53:27 crc kubenswrapper[4826]: I0131 07:53:27.772494 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-968cdd5c9-h62k6" Jan 31 07:53:27 crc kubenswrapper[4826]: E0131 07:53:27.781579 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-9tmn2" podUID="5f870e24-0e35-4ee6-805b-f81617554dc2" Jan 31 07:53:27 crc kubenswrapper[4826]: I0131 07:53:27.838300 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-968cdd5c9-h62k6"] Jan 31 07:53:27 crc kubenswrapper[4826]: I0131 07:53:27.847154 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-968cdd5c9-h62k6"] Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.210692 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-9vwb9"] Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.216937 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-9vwb9"] Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.332770 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-4swtq"] Jan 31 07:53:28 crc kubenswrapper[4826]: E0131 07:53:28.333210 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66bcca9e-8fca-48e3-83e3-0338e35a1f8b" containerName="keystone-bootstrap" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.333230 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="66bcca9e-8fca-48e3-83e3-0338e35a1f8b" containerName="keystone-bootstrap" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.333377 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="66bcca9e-8fca-48e3-83e3-0338e35a1f8b" containerName="keystone-bootstrap" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.333929 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-4swtq" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.337095 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.337571 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-jzds9" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.338040 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.338924 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.339023 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.346957 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-4swtq"] Jan 31 07:53:28 crc kubenswrapper[4826]: E0131 07:53:28.373744 4826 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Jan 31 07:53:28 crc kubenswrapper[4826]: E0131 07:53:28.373894 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vcp88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
placement-db-sync-7wvpt_openstack(a88d711d-a1fe-4114-955e-167684da9ecb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 07:53:28 crc kubenswrapper[4826]: E0131 07:53:28.375521 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-7wvpt" podUID="a88d711d-a1fe-4114-955e-167684da9ecb" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.437047 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w65pb\" (UniqueName: \"kubernetes.io/projected/8cc9cec2-3064-4a80-8621-f84c37994a96-kube-api-access-w65pb\") pod \"keystone-bootstrap-4swtq\" (UID: \"8cc9cec2-3064-4a80-8621-f84c37994a96\") " pod="openstack/keystone-bootstrap-4swtq" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.437494 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cc9cec2-3064-4a80-8621-f84c37994a96-config-data\") pod \"keystone-bootstrap-4swtq\" (UID: \"8cc9cec2-3064-4a80-8621-f84c37994a96\") " pod="openstack/keystone-bootstrap-4swtq" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.437527 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cc9cec2-3064-4a80-8621-f84c37994a96-scripts\") pod \"keystone-bootstrap-4swtq\" (UID: \"8cc9cec2-3064-4a80-8621-f84c37994a96\") " pod="openstack/keystone-bootstrap-4swtq" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.437599 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8cc9cec2-3064-4a80-8621-f84c37994a96-fernet-keys\") pod \"keystone-bootstrap-4swtq\" (UID: \"8cc9cec2-3064-4a80-8621-f84c37994a96\") " pod="openstack/keystone-bootstrap-4swtq" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.437657 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8cc9cec2-3064-4a80-8621-f84c37994a96-credential-keys\") pod \"keystone-bootstrap-4swtq\" (UID: \"8cc9cec2-3064-4a80-8621-f84c37994a96\") " pod="openstack/keystone-bootstrap-4swtq" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.437712 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cc9cec2-3064-4a80-8621-f84c37994a96-combined-ca-bundle\") pod \"keystone-bootstrap-4swtq\" (UID: \"8cc9cec2-3064-4a80-8621-f84c37994a96\") " pod="openstack/keystone-bootstrap-4swtq" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.514419 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6d999596cf-44trz" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.539263 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w65pb\" (UniqueName: \"kubernetes.io/projected/8cc9cec2-3064-4a80-8621-f84c37994a96-kube-api-access-w65pb\") pod \"keystone-bootstrap-4swtq\" (UID: \"8cc9cec2-3064-4a80-8621-f84c37994a96\") " pod="openstack/keystone-bootstrap-4swtq" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.539327 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cc9cec2-3064-4a80-8621-f84c37994a96-config-data\") pod \"keystone-bootstrap-4swtq\" (UID: \"8cc9cec2-3064-4a80-8621-f84c37994a96\") " pod="openstack/keystone-bootstrap-4swtq" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.539350 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cc9cec2-3064-4a80-8621-f84c37994a96-scripts\") pod \"keystone-bootstrap-4swtq\" (UID: \"8cc9cec2-3064-4a80-8621-f84c37994a96\") " pod="openstack/keystone-bootstrap-4swtq" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.539407 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8cc9cec2-3064-4a80-8621-f84c37994a96-fernet-keys\") pod \"keystone-bootstrap-4swtq\" (UID: \"8cc9cec2-3064-4a80-8621-f84c37994a96\") " pod="openstack/keystone-bootstrap-4swtq" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.539447 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8cc9cec2-3064-4a80-8621-f84c37994a96-credential-keys\") pod \"keystone-bootstrap-4swtq\" (UID: \"8cc9cec2-3064-4a80-8621-f84c37994a96\") " pod="openstack/keystone-bootstrap-4swtq" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.539482 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cc9cec2-3064-4a80-8621-f84c37994a96-combined-ca-bundle\") pod \"keystone-bootstrap-4swtq\" (UID: \"8cc9cec2-3064-4a80-8621-f84c37994a96\") " pod="openstack/keystone-bootstrap-4swtq" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.547897 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cc9cec2-3064-4a80-8621-f84c37994a96-config-data\") pod \"keystone-bootstrap-4swtq\" (UID: \"8cc9cec2-3064-4a80-8621-f84c37994a96\") " pod="openstack/keystone-bootstrap-4swtq" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.548718 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8cc9cec2-3064-4a80-8621-f84c37994a96-fernet-keys\") pod \"keystone-bootstrap-4swtq\" (UID: \"8cc9cec2-3064-4a80-8621-f84c37994a96\") " pod="openstack/keystone-bootstrap-4swtq" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.551826 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cc9cec2-3064-4a80-8621-f84c37994a96-combined-ca-bundle\") pod \"keystone-bootstrap-4swtq\" (UID: \"8cc9cec2-3064-4a80-8621-f84c37994a96\") " pod="openstack/keystone-bootstrap-4swtq" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.555029 4826 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8cc9cec2-3064-4a80-8621-f84c37994a96-credential-keys\") pod \"keystone-bootstrap-4swtq\" (UID: \"8cc9cec2-3064-4a80-8621-f84c37994a96\") " pod="openstack/keystone-bootstrap-4swtq" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.556593 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cc9cec2-3064-4a80-8621-f84c37994a96-scripts\") pod \"keystone-bootstrap-4swtq\" (UID: \"8cc9cec2-3064-4a80-8621-f84c37994a96\") " pod="openstack/keystone-bootstrap-4swtq" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.562451 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w65pb\" (UniqueName: \"kubernetes.io/projected/8cc9cec2-3064-4a80-8621-f84c37994a96-kube-api-access-w65pb\") pod \"keystone-bootstrap-4swtq\" (UID: \"8cc9cec2-3064-4a80-8621-f84c37994a96\") " pod="openstack/keystone-bootstrap-4swtq" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.640413 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aba8088a-cbd7-4797-aef1-cd84d8c28ff8-config-data\") pod \"aba8088a-cbd7-4797-aef1-cd84d8c28ff8\" (UID: \"aba8088a-cbd7-4797-aef1-cd84d8c28ff8\") " Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.640518 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/aba8088a-cbd7-4797-aef1-cd84d8c28ff8-horizon-secret-key\") pod \"aba8088a-cbd7-4797-aef1-cd84d8c28ff8\" (UID: \"aba8088a-cbd7-4797-aef1-cd84d8c28ff8\") " Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.640589 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzzx2\" (UniqueName: \"kubernetes.io/projected/aba8088a-cbd7-4797-aef1-cd84d8c28ff8-kube-api-access-hzzx2\") pod \"aba8088a-cbd7-4797-aef1-cd84d8c28ff8\" (UID: \"aba8088a-cbd7-4797-aef1-cd84d8c28ff8\") " Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.640675 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aba8088a-cbd7-4797-aef1-cd84d8c28ff8-logs\") pod \"aba8088a-cbd7-4797-aef1-cd84d8c28ff8\" (UID: \"aba8088a-cbd7-4797-aef1-cd84d8c28ff8\") " Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.640755 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aba8088a-cbd7-4797-aef1-cd84d8c28ff8-scripts\") pod \"aba8088a-cbd7-4797-aef1-cd84d8c28ff8\" (UID: \"aba8088a-cbd7-4797-aef1-cd84d8c28ff8\") " Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.641150 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aba8088a-cbd7-4797-aef1-cd84d8c28ff8-logs" (OuterVolumeSpecName: "logs") pod "aba8088a-cbd7-4797-aef1-cd84d8c28ff8" (UID: "aba8088a-cbd7-4797-aef1-cd84d8c28ff8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.641597 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aba8088a-cbd7-4797-aef1-cd84d8c28ff8-scripts" (OuterVolumeSpecName: "scripts") pod "aba8088a-cbd7-4797-aef1-cd84d8c28ff8" (UID: "aba8088a-cbd7-4797-aef1-cd84d8c28ff8"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.641794 4826 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aba8088a-cbd7-4797-aef1-cd84d8c28ff8-logs\") on node \"crc\" DevicePath \"\"" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.641818 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aba8088a-cbd7-4797-aef1-cd84d8c28ff8-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.642239 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aba8088a-cbd7-4797-aef1-cd84d8c28ff8-config-data" (OuterVolumeSpecName: "config-data") pod "aba8088a-cbd7-4797-aef1-cd84d8c28ff8" (UID: "aba8088a-cbd7-4797-aef1-cd84d8c28ff8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.644181 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aba8088a-cbd7-4797-aef1-cd84d8c28ff8-kube-api-access-hzzx2" (OuterVolumeSpecName: "kube-api-access-hzzx2") pod "aba8088a-cbd7-4797-aef1-cd84d8c28ff8" (UID: "aba8088a-cbd7-4797-aef1-cd84d8c28ff8"). InnerVolumeSpecName "kube-api-access-hzzx2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.644360 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aba8088a-cbd7-4797-aef1-cd84d8c28ff8-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "aba8088a-cbd7-4797-aef1-cd84d8c28ff8" (UID: "aba8088a-cbd7-4797-aef1-cd84d8c28ff8"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.656234 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4swtq" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.681543 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-48nx2" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.748127 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aba8088a-cbd7-4797-aef1-cd84d8c28ff8-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.748150 4826 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/aba8088a-cbd7-4797-aef1-cd84d8c28ff8-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.748164 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzzx2\" (UniqueName: \"kubernetes.io/projected/aba8088a-cbd7-4797-aef1-cd84d8c28ff8-kube-api-access-hzzx2\") on node \"crc\" DevicePath \"\"" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.803286 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" event={"ID":"ed10f53b-565a-4d14-a1d8-feabc15f08ea","Type":"ContainerStarted","Data":"316b3d553b5671d98b8431682183e3a4f3aaa9a7a42b0254a3052bbf98543c03"} Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.805277 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6d999596cf-44trz" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.808582 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d999596cf-44trz" event={"ID":"aba8088a-cbd7-4797-aef1-cd84d8c28ff8","Type":"ContainerDied","Data":"93bc18960a68a724e615d57e10666fc1135b9666ecd92287130ebcb58e2926fd"} Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.839483 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66bcca9e-8fca-48e3-83e3-0338e35a1f8b" path="/var/lib/kubelet/pods/66bcca9e-8fca-48e3-83e3-0338e35a1f8b/volumes" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.840157 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004" path="/var/lib/kubelet/pods/e4df6e7f-5bcb-4d3b-8dd8-0e68c2407004/volumes" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.844806 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-48nx2" event={"ID":"f4a66b21-7551-4e7f-87dc-81bbb08f3c3e","Type":"ContainerDied","Data":"d382a56a9ebdc633f1e8b7689aec4d71c16daed13c3b79e10b797fdc0aeefbc8"} Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.845081 4826 scope.go:117] "RemoveContainer" containerID="49c43005e37ee6e1066f09aa88a17e23bf4055b3c6cab8884b1861455bd0db7e" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.849128 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-48nx2" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.850052 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4a66b21-7551-4e7f-87dc-81bbb08f3c3e-ovsdbserver-nb\") pod \"f4a66b21-7551-4e7f-87dc-81bbb08f3c3e\" (UID: \"f4a66b21-7551-4e7f-87dc-81bbb08f3c3e\") " Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.850136 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mww4x\" (UniqueName: \"kubernetes.io/projected/f4a66b21-7551-4e7f-87dc-81bbb08f3c3e-kube-api-access-mww4x\") pod \"f4a66b21-7551-4e7f-87dc-81bbb08f3c3e\" (UID: \"f4a66b21-7551-4e7f-87dc-81bbb08f3c3e\") " Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.850206 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4a66b21-7551-4e7f-87dc-81bbb08f3c3e-dns-svc\") pod \"f4a66b21-7551-4e7f-87dc-81bbb08f3c3e\" (UID: \"f4a66b21-7551-4e7f-87dc-81bbb08f3c3e\") " Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.850231 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4a66b21-7551-4e7f-87dc-81bbb08f3c3e-ovsdbserver-sb\") pod \"f4a66b21-7551-4e7f-87dc-81bbb08f3c3e\" (UID: \"f4a66b21-7551-4e7f-87dc-81bbb08f3c3e\") " Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.850383 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4a66b21-7551-4e7f-87dc-81bbb08f3c3e-config\") pod \"f4a66b21-7551-4e7f-87dc-81bbb08f3c3e\" (UID: \"f4a66b21-7551-4e7f-87dc-81bbb08f3c3e\") " Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.862213 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4a66b21-7551-4e7f-87dc-81bbb08f3c3e-kube-api-access-mww4x" 
(OuterVolumeSpecName: "kube-api-access-mww4x") pod "f4a66b21-7551-4e7f-87dc-81bbb08f3c3e" (UID: "f4a66b21-7551-4e7f-87dc-81bbb08f3c3e"). InnerVolumeSpecName "kube-api-access-mww4x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.866514 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa070000-7471-4ca5-be06-fecc9ade01cc","Type":"ContainerStarted","Data":"d5cd64b5e2b6e6e4bc49e7cec4d47b1dd0f9d4639b8c152986c303fc79a0bb22"} Jan 31 07:53:28 crc kubenswrapper[4826]: E0131 07:53:28.868162 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-7wvpt" podUID="a88d711d-a1fe-4114-955e-167684da9ecb" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.888315 4826 scope.go:117] "RemoveContainer" containerID="3a37e83328b74a9d37b99f4969e9b50b4162ebc76fc5fdf14b8fabcfd6a2d426" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.912568 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4a66b21-7551-4e7f-87dc-81bbb08f3c3e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f4a66b21-7551-4e7f-87dc-81bbb08f3c3e" (UID: "f4a66b21-7551-4e7f-87dc-81bbb08f3c3e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.930313 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4a66b21-7551-4e7f-87dc-81bbb08f3c3e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f4a66b21-7551-4e7f-87dc-81bbb08f3c3e" (UID: "f4a66b21-7551-4e7f-87dc-81bbb08f3c3e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.931073 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4a66b21-7551-4e7f-87dc-81bbb08f3c3e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f4a66b21-7551-4e7f-87dc-81bbb08f3c3e" (UID: "f4a66b21-7551-4e7f-87dc-81bbb08f3c3e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.937224 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4a66b21-7551-4e7f-87dc-81bbb08f3c3e-config" (OuterVolumeSpecName: "config") pod "f4a66b21-7551-4e7f-87dc-81bbb08f3c3e" (UID: "f4a66b21-7551-4e7f-87dc-81bbb08f3c3e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.943004 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-65776456b6-g6gkf"] Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.951651 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6d999596cf-44trz"] Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.953889 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4a66b21-7551-4e7f-87dc-81bbb08f3c3e-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.953910 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4a66b21-7551-4e7f-87dc-81bbb08f3c3e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.953925 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mww4x\" (UniqueName: \"kubernetes.io/projected/f4a66b21-7551-4e7f-87dc-81bbb08f3c3e-kube-api-access-mww4x\") on node \"crc\" DevicePath \"\"" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.953937 4826 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4a66b21-7551-4e7f-87dc-81bbb08f3c3e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.953949 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4a66b21-7551-4e7f-87dc-81bbb08f3c3e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.959024 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6d999596cf-44trz"] Jan 31 07:53:28 crc kubenswrapper[4826]: I0131 07:53:28.966720 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-97cdc8cb-tdpkc"] Jan 31 07:53:29 crc kubenswrapper[4826]: I0131 07:53:29.197462 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-48nx2"] Jan 31 07:53:29 crc kubenswrapper[4826]: I0131 07:53:29.204520 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-48nx2"] Jan 31 07:53:29 crc kubenswrapper[4826]: W0131 07:53:29.234942 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8cc9cec2_3064_4a80_8621_f84c37994a96.slice/crio-c9494f3fd1185484577d8051eb3dad7a21b22d2384d5257450d54d0655f28f10 WatchSource:0}: Error finding container c9494f3fd1185484577d8051eb3dad7a21b22d2384d5257450d54d0655f28f10: Status 404 returned error can't find the container with id c9494f3fd1185484577d8051eb3dad7a21b22d2384d5257450d54d0655f28f10 Jan 31 07:53:29 crc kubenswrapper[4826]: I0131 07:53:29.236294 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-4swtq"] Jan 31 07:53:29 crc kubenswrapper[4826]: I0131 07:53:29.891180 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4swtq" event={"ID":"8cc9cec2-3064-4a80-8621-f84c37994a96","Type":"ContainerStarted","Data":"e399ecb8bb0c54125335d07b85223e893ce5cb5bd269a44baa2826ddae9db5dc"} Jan 31 07:53:29 crc kubenswrapper[4826]: I0131 07:53:29.891656 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4swtq" 
event={"ID":"8cc9cec2-3064-4a80-8621-f84c37994a96","Type":"ContainerStarted","Data":"c9494f3fd1185484577d8051eb3dad7a21b22d2384d5257450d54d0655f28f10"} Jan 31 07:53:29 crc kubenswrapper[4826]: I0131 07:53:29.893682 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-97cdc8cb-tdpkc" event={"ID":"e0626292-98a9-4e1f-8359-f734ed8a3118","Type":"ContainerStarted","Data":"7d7e7d754bd352ed6d8426f6bac745e6492b2e8a676948324b46c2316257dd4c"} Jan 31 07:53:29 crc kubenswrapper[4826]: I0131 07:53:29.893706 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-97cdc8cb-tdpkc" event={"ID":"e0626292-98a9-4e1f-8359-f734ed8a3118","Type":"ContainerStarted","Data":"0b2d7251ea78e7791d61b9c0261b63243dd5c1d5a8fcda276aad860465638585"} Jan 31 07:53:29 crc kubenswrapper[4826]: I0131 07:53:29.895435 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-65776456b6-g6gkf" event={"ID":"e4282d7e-76a0-493b-b2c6-0d954d4bed7a","Type":"ContainerStarted","Data":"f2158499c51fafb01f7ae88f36a47962060ab8785eed124c0b4a9a9cc6254f3c"} Jan 31 07:53:29 crc kubenswrapper[4826]: I0131 07:53:29.911118 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-4swtq" podStartSLOduration=1.91110019 podStartE2EDuration="1.91110019s" podCreationTimestamp="2026-01-31 07:53:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:53:29.905673317 +0000 UTC m=+1041.759559676" watchObservedRunningTime="2026-01-31 07:53:29.91110019 +0000 UTC m=+1041.764986559" Jan 31 07:53:30 crc kubenswrapper[4826]: I0131 07:53:30.821684 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aba8088a-cbd7-4797-aef1-cd84d8c28ff8" path="/var/lib/kubelet/pods/aba8088a-cbd7-4797-aef1-cd84d8c28ff8/volumes" Jan 31 07:53:30 crc kubenswrapper[4826]: I0131 07:53:30.822934 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4a66b21-7551-4e7f-87dc-81bbb08f3c3e" path="/var/lib/kubelet/pods/f4a66b21-7551-4e7f-87dc-81bbb08f3c3e/volumes" Jan 31 07:53:30 crc kubenswrapper[4826]: I0131 07:53:30.908517 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa070000-7471-4ca5-be06-fecc9ade01cc","Type":"ContainerStarted","Data":"17674081ae36580c2b2b7856a9e11a4918d86dfc10b600f73f0a93fc2e589d0c"} Jan 31 07:53:30 crc kubenswrapper[4826]: I0131 07:53:30.910680 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-65776456b6-g6gkf" event={"ID":"e4282d7e-76a0-493b-b2c6-0d954d4bed7a","Type":"ContainerStarted","Data":"dddfa3da3163de69516aa5a255f94e47556a43a50db12dac45b8f563367d6e90"} Jan 31 07:53:30 crc kubenswrapper[4826]: I0131 07:53:30.910711 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-65776456b6-g6gkf" event={"ID":"e4282d7e-76a0-493b-b2c6-0d954d4bed7a","Type":"ContainerStarted","Data":"9464619301db5a0a09586180fbf9f2778935545892902aad640d046b21fb8b23"} Jan 31 07:53:30 crc kubenswrapper[4826]: I0131 07:53:30.916339 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-s6dqh" event={"ID":"d994c6b6-3dd2-4231-b80b-b83c88fa860f","Type":"ContainerStarted","Data":"88f75ce2f7fe85a623b4c0ac927e45c9c8cebe12719557f5d69e1d9ddd4aeb7f"} Jan 31 07:53:30 crc kubenswrapper[4826]: I0131 07:53:30.921118 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-97cdc8cb-tdpkc" 
event={"ID":"e0626292-98a9-4e1f-8359-f734ed8a3118","Type":"ContainerStarted","Data":"4cfca45b39efab550f7e3f43edbbd9216b229d200febd57f6a1eb8b96f243aa7"} Jan 31 07:53:30 crc kubenswrapper[4826]: I0131 07:53:30.946305 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-65776456b6-g6gkf" podStartSLOduration=31.418462681 podStartE2EDuration="31.946279517s" podCreationTimestamp="2026-01-31 07:52:59 +0000 UTC" firstStartedPulling="2026-01-31 07:53:28.942563654 +0000 UTC m=+1040.796450013" lastFinishedPulling="2026-01-31 07:53:29.47038049 +0000 UTC m=+1041.324266849" observedRunningTime="2026-01-31 07:53:30.942003266 +0000 UTC m=+1042.795889645" watchObservedRunningTime="2026-01-31 07:53:30.946279517 +0000 UTC m=+1042.800165876" Jan 31 07:53:30 crc kubenswrapper[4826]: I0131 07:53:30.969382 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-97cdc8cb-tdpkc" podStartSLOduration=31.447811227 podStartE2EDuration="31.969361077s" podCreationTimestamp="2026-01-31 07:52:59 +0000 UTC" firstStartedPulling="2026-01-31 07:53:28.946153015 +0000 UTC m=+1040.800039374" lastFinishedPulling="2026-01-31 07:53:29.467702865 +0000 UTC m=+1041.321589224" observedRunningTime="2026-01-31 07:53:30.963857762 +0000 UTC m=+1042.817744121" watchObservedRunningTime="2026-01-31 07:53:30.969361077 +0000 UTC m=+1042.823247436" Jan 31 07:53:35 crc kubenswrapper[4826]: I0131 07:53:35.970316 4826 generic.go:334] "Generic (PLEG): container finished" podID="8cc9cec2-3064-4a80-8621-f84c37994a96" containerID="e399ecb8bb0c54125335d07b85223e893ce5cb5bd269a44baa2826ddae9db5dc" exitCode=0 Jan 31 07:53:35 crc kubenswrapper[4826]: I0131 07:53:35.970789 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4swtq" event={"ID":"8cc9cec2-3064-4a80-8621-f84c37994a96","Type":"ContainerDied","Data":"e399ecb8bb0c54125335d07b85223e893ce5cb5bd269a44baa2826ddae9db5dc"} Jan 31 07:53:35 crc kubenswrapper[4826]: I0131 07:53:35.990338 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-s6dqh" podStartSLOduration=7.464086608 podStartE2EDuration="49.990321837s" podCreationTimestamp="2026-01-31 07:52:46 +0000 UTC" firstStartedPulling="2026-01-31 07:52:48.003575072 +0000 UTC m=+999.857461431" lastFinishedPulling="2026-01-31 07:53:30.529810301 +0000 UTC m=+1042.383696660" observedRunningTime="2026-01-31 07:53:30.986796909 +0000 UTC m=+1042.840683288" watchObservedRunningTime="2026-01-31 07:53:35.990321837 +0000 UTC m=+1047.844208196" Jan 31 07:53:37 crc kubenswrapper[4826]: I0131 07:53:37.341500 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-4swtq" Jan 31 07:53:37 crc kubenswrapper[4826]: I0131 07:53:37.440511 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w65pb\" (UniqueName: \"kubernetes.io/projected/8cc9cec2-3064-4a80-8621-f84c37994a96-kube-api-access-w65pb\") pod \"8cc9cec2-3064-4a80-8621-f84c37994a96\" (UID: \"8cc9cec2-3064-4a80-8621-f84c37994a96\") " Jan 31 07:53:37 crc kubenswrapper[4826]: I0131 07:53:37.440584 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cc9cec2-3064-4a80-8621-f84c37994a96-combined-ca-bundle\") pod \"8cc9cec2-3064-4a80-8621-f84c37994a96\" (UID: \"8cc9cec2-3064-4a80-8621-f84c37994a96\") " Jan 31 07:53:37 crc kubenswrapper[4826]: I0131 07:53:37.440680 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8cc9cec2-3064-4a80-8621-f84c37994a96-fernet-keys\") pod \"8cc9cec2-3064-4a80-8621-f84c37994a96\" (UID: \"8cc9cec2-3064-4a80-8621-f84c37994a96\") " Jan 31 07:53:37 crc kubenswrapper[4826]: I0131 07:53:37.440820 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8cc9cec2-3064-4a80-8621-f84c37994a96-credential-keys\") pod \"8cc9cec2-3064-4a80-8621-f84c37994a96\" (UID: \"8cc9cec2-3064-4a80-8621-f84c37994a96\") " Jan 31 07:53:37 crc kubenswrapper[4826]: I0131 07:53:37.440874 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cc9cec2-3064-4a80-8621-f84c37994a96-config-data\") pod \"8cc9cec2-3064-4a80-8621-f84c37994a96\" (UID: \"8cc9cec2-3064-4a80-8621-f84c37994a96\") " Jan 31 07:53:37 crc kubenswrapper[4826]: I0131 07:53:37.440979 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cc9cec2-3064-4a80-8621-f84c37994a96-scripts\") pod \"8cc9cec2-3064-4a80-8621-f84c37994a96\" (UID: \"8cc9cec2-3064-4a80-8621-f84c37994a96\") " Jan 31 07:53:37 crc kubenswrapper[4826]: I0131 07:53:37.445463 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cc9cec2-3064-4a80-8621-f84c37994a96-kube-api-access-w65pb" (OuterVolumeSpecName: "kube-api-access-w65pb") pod "8cc9cec2-3064-4a80-8621-f84c37994a96" (UID: "8cc9cec2-3064-4a80-8621-f84c37994a96"). InnerVolumeSpecName "kube-api-access-w65pb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:53:37 crc kubenswrapper[4826]: I0131 07:53:37.446099 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cc9cec2-3064-4a80-8621-f84c37994a96-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "8cc9cec2-3064-4a80-8621-f84c37994a96" (UID: "8cc9cec2-3064-4a80-8621-f84c37994a96"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:53:37 crc kubenswrapper[4826]: I0131 07:53:37.463465 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cc9cec2-3064-4a80-8621-f84c37994a96-scripts" (OuterVolumeSpecName: "scripts") pod "8cc9cec2-3064-4a80-8621-f84c37994a96" (UID: "8cc9cec2-3064-4a80-8621-f84c37994a96"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:53:37 crc kubenswrapper[4826]: I0131 07:53:37.463851 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cc9cec2-3064-4a80-8621-f84c37994a96-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "8cc9cec2-3064-4a80-8621-f84c37994a96" (UID: "8cc9cec2-3064-4a80-8621-f84c37994a96"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:53:37 crc kubenswrapper[4826]: I0131 07:53:37.473695 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cc9cec2-3064-4a80-8621-f84c37994a96-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8cc9cec2-3064-4a80-8621-f84c37994a96" (UID: "8cc9cec2-3064-4a80-8621-f84c37994a96"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:53:37 crc kubenswrapper[4826]: I0131 07:53:37.474931 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cc9cec2-3064-4a80-8621-f84c37994a96-config-data" (OuterVolumeSpecName: "config-data") pod "8cc9cec2-3064-4a80-8621-f84c37994a96" (UID: "8cc9cec2-3064-4a80-8621-f84c37994a96"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:53:37 crc kubenswrapper[4826]: I0131 07:53:37.543181 4826 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8cc9cec2-3064-4a80-8621-f84c37994a96-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 31 07:53:37 crc kubenswrapper[4826]: I0131 07:53:37.543225 4826 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8cc9cec2-3064-4a80-8621-f84c37994a96-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 31 07:53:37 crc kubenswrapper[4826]: I0131 07:53:37.543245 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cc9cec2-3064-4a80-8621-f84c37994a96-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:53:37 crc kubenswrapper[4826]: I0131 07:53:37.543262 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cc9cec2-3064-4a80-8621-f84c37994a96-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:53:37 crc kubenswrapper[4826]: I0131 07:53:37.543279 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w65pb\" (UniqueName: \"kubernetes.io/projected/8cc9cec2-3064-4a80-8621-f84c37994a96-kube-api-access-w65pb\") on node \"crc\" DevicePath \"\"" Jan 31 07:53:37 crc kubenswrapper[4826]: I0131 07:53:37.543294 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cc9cec2-3064-4a80-8621-f84c37994a96-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:53:38 crc kubenswrapper[4826]: I0131 07:53:38.014327 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-4swtq" Jan 31 07:53:38 crc kubenswrapper[4826]: I0131 07:53:38.014353 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4swtq" event={"ID":"8cc9cec2-3064-4a80-8621-f84c37994a96","Type":"ContainerDied","Data":"c9494f3fd1185484577d8051eb3dad7a21b22d2384d5257450d54d0655f28f10"} Jan 31 07:53:38 crc kubenswrapper[4826]: I0131 07:53:38.015283 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9494f3fd1185484577d8051eb3dad7a21b22d2384d5257450d54d0655f28f10" Jan 31 07:53:38 crc kubenswrapper[4826]: I0131 07:53:38.016712 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa070000-7471-4ca5-be06-fecc9ade01cc","Type":"ContainerStarted","Data":"f9951f160e424196ebfaad13dcf7673cb65b037f73fbf4a7ae5c323cff2298fb"} Jan 31 07:53:38 crc kubenswrapper[4826]: I0131 07:53:38.092701 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7676c745b9-d7652"] Jan 31 07:53:38 crc kubenswrapper[4826]: E0131 07:53:38.093220 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4a66b21-7551-4e7f-87dc-81bbb08f3c3e" containerName="init" Jan 31 07:53:38 crc kubenswrapper[4826]: I0131 07:53:38.093238 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4a66b21-7551-4e7f-87dc-81bbb08f3c3e" containerName="init" Jan 31 07:53:38 crc kubenswrapper[4826]: E0131 07:53:38.093252 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4a66b21-7551-4e7f-87dc-81bbb08f3c3e" containerName="dnsmasq-dns" Jan 31 07:53:38 crc kubenswrapper[4826]: I0131 07:53:38.093260 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4a66b21-7551-4e7f-87dc-81bbb08f3c3e" containerName="dnsmasq-dns" Jan 31 07:53:38 crc kubenswrapper[4826]: E0131 07:53:38.093282 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cc9cec2-3064-4a80-8621-f84c37994a96" containerName="keystone-bootstrap" Jan 31 07:53:38 crc kubenswrapper[4826]: I0131 07:53:38.093291 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cc9cec2-3064-4a80-8621-f84c37994a96" containerName="keystone-bootstrap" Jan 31 07:53:38 crc kubenswrapper[4826]: I0131 07:53:38.093513 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cc9cec2-3064-4a80-8621-f84c37994a96" containerName="keystone-bootstrap" Jan 31 07:53:38 crc kubenswrapper[4826]: I0131 07:53:38.093547 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4a66b21-7551-4e7f-87dc-81bbb08f3c3e" containerName="dnsmasq-dns" Jan 31 07:53:38 crc kubenswrapper[4826]: I0131 07:53:38.094290 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7676c745b9-d7652" Jan 31 07:53:38 crc kubenswrapper[4826]: I0131 07:53:38.100303 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7676c745b9-d7652"] Jan 31 07:53:38 crc kubenswrapper[4826]: I0131 07:53:38.100607 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 31 07:53:38 crc kubenswrapper[4826]: I0131 07:53:38.100862 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 31 07:53:38 crc kubenswrapper[4826]: I0131 07:53:38.104094 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 31 07:53:38 crc kubenswrapper[4826]: I0131 07:53:38.104660 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 31 07:53:38 crc kubenswrapper[4826]: I0131 07:53:38.104988 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 31 07:53:38 crc kubenswrapper[4826]: I0131 07:53:38.105065 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-jzds9" Jan 31 07:53:38 crc kubenswrapper[4826]: I0131 07:53:38.257106 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0abfb696-f207-4a48-983f-bf8b62f453d0-internal-tls-certs\") pod \"keystone-7676c745b9-d7652\" (UID: \"0abfb696-f207-4a48-983f-bf8b62f453d0\") " pod="openstack/keystone-7676c745b9-d7652" Jan 31 07:53:38 crc kubenswrapper[4826]: I0131 07:53:38.257166 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0abfb696-f207-4a48-983f-bf8b62f453d0-combined-ca-bundle\") pod \"keystone-7676c745b9-d7652\" (UID: \"0abfb696-f207-4a48-983f-bf8b62f453d0\") " pod="openstack/keystone-7676c745b9-d7652" Jan 31 07:53:38 crc kubenswrapper[4826]: I0131 07:53:38.257199 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0abfb696-f207-4a48-983f-bf8b62f453d0-public-tls-certs\") pod \"keystone-7676c745b9-d7652\" (UID: \"0abfb696-f207-4a48-983f-bf8b62f453d0\") " pod="openstack/keystone-7676c745b9-d7652" Jan 31 07:53:38 crc kubenswrapper[4826]: I0131 07:53:38.257233 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0abfb696-f207-4a48-983f-bf8b62f453d0-config-data\") pod \"keystone-7676c745b9-d7652\" (UID: \"0abfb696-f207-4a48-983f-bf8b62f453d0\") " pod="openstack/keystone-7676c745b9-d7652" Jan 31 07:53:38 crc kubenswrapper[4826]: I0131 07:53:38.257259 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75wfl\" (UniqueName: \"kubernetes.io/projected/0abfb696-f207-4a48-983f-bf8b62f453d0-kube-api-access-75wfl\") pod \"keystone-7676c745b9-d7652\" (UID: \"0abfb696-f207-4a48-983f-bf8b62f453d0\") " pod="openstack/keystone-7676c745b9-d7652" Jan 31 07:53:38 crc kubenswrapper[4826]: I0131 07:53:38.257325 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0abfb696-f207-4a48-983f-bf8b62f453d0-fernet-keys\") pod \"keystone-7676c745b9-d7652\" 
(UID: \"0abfb696-f207-4a48-983f-bf8b62f453d0\") " pod="openstack/keystone-7676c745b9-d7652" Jan 31 07:53:38 crc kubenswrapper[4826]: I0131 07:53:38.257397 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0abfb696-f207-4a48-983f-bf8b62f453d0-scripts\") pod \"keystone-7676c745b9-d7652\" (UID: \"0abfb696-f207-4a48-983f-bf8b62f453d0\") " pod="openstack/keystone-7676c745b9-d7652" Jan 31 07:53:38 crc kubenswrapper[4826]: I0131 07:53:38.257423 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0abfb696-f207-4a48-983f-bf8b62f453d0-credential-keys\") pod \"keystone-7676c745b9-d7652\" (UID: \"0abfb696-f207-4a48-983f-bf8b62f453d0\") " pod="openstack/keystone-7676c745b9-d7652" Jan 31 07:53:38 crc kubenswrapper[4826]: I0131 07:53:38.358816 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0abfb696-f207-4a48-983f-bf8b62f453d0-scripts\") pod \"keystone-7676c745b9-d7652\" (UID: \"0abfb696-f207-4a48-983f-bf8b62f453d0\") " pod="openstack/keystone-7676c745b9-d7652" Jan 31 07:53:38 crc kubenswrapper[4826]: I0131 07:53:38.358869 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0abfb696-f207-4a48-983f-bf8b62f453d0-credential-keys\") pod \"keystone-7676c745b9-d7652\" (UID: \"0abfb696-f207-4a48-983f-bf8b62f453d0\") " pod="openstack/keystone-7676c745b9-d7652" Jan 31 07:53:38 crc kubenswrapper[4826]: I0131 07:53:38.358948 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0abfb696-f207-4a48-983f-bf8b62f453d0-internal-tls-certs\") pod \"keystone-7676c745b9-d7652\" (UID: \"0abfb696-f207-4a48-983f-bf8b62f453d0\") " pod="openstack/keystone-7676c745b9-d7652" Jan 31 07:53:38 crc kubenswrapper[4826]: I0131 07:53:38.358989 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0abfb696-f207-4a48-983f-bf8b62f453d0-combined-ca-bundle\") pod \"keystone-7676c745b9-d7652\" (UID: \"0abfb696-f207-4a48-983f-bf8b62f453d0\") " pod="openstack/keystone-7676c745b9-d7652" Jan 31 07:53:38 crc kubenswrapper[4826]: I0131 07:53:38.359020 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0abfb696-f207-4a48-983f-bf8b62f453d0-public-tls-certs\") pod \"keystone-7676c745b9-d7652\" (UID: \"0abfb696-f207-4a48-983f-bf8b62f453d0\") " pod="openstack/keystone-7676c745b9-d7652" Jan 31 07:53:38 crc kubenswrapper[4826]: I0131 07:53:38.359051 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0abfb696-f207-4a48-983f-bf8b62f453d0-config-data\") pod \"keystone-7676c745b9-d7652\" (UID: \"0abfb696-f207-4a48-983f-bf8b62f453d0\") " pod="openstack/keystone-7676c745b9-d7652" Jan 31 07:53:38 crc kubenswrapper[4826]: I0131 07:53:38.359078 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75wfl\" (UniqueName: \"kubernetes.io/projected/0abfb696-f207-4a48-983f-bf8b62f453d0-kube-api-access-75wfl\") pod \"keystone-7676c745b9-d7652\" (UID: \"0abfb696-f207-4a48-983f-bf8b62f453d0\") " 
pod="openstack/keystone-7676c745b9-d7652" Jan 31 07:53:38 crc kubenswrapper[4826]: I0131 07:53:38.359150 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0abfb696-f207-4a48-983f-bf8b62f453d0-fernet-keys\") pod \"keystone-7676c745b9-d7652\" (UID: \"0abfb696-f207-4a48-983f-bf8b62f453d0\") " pod="openstack/keystone-7676c745b9-d7652" Jan 31 07:53:38 crc kubenswrapper[4826]: I0131 07:53:38.364560 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0abfb696-f207-4a48-983f-bf8b62f453d0-internal-tls-certs\") pod \"keystone-7676c745b9-d7652\" (UID: \"0abfb696-f207-4a48-983f-bf8b62f453d0\") " pod="openstack/keystone-7676c745b9-d7652" Jan 31 07:53:38 crc kubenswrapper[4826]: I0131 07:53:38.364582 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0abfb696-f207-4a48-983f-bf8b62f453d0-public-tls-certs\") pod \"keystone-7676c745b9-d7652\" (UID: \"0abfb696-f207-4a48-983f-bf8b62f453d0\") " pod="openstack/keystone-7676c745b9-d7652" Jan 31 07:53:38 crc kubenswrapper[4826]: I0131 07:53:38.365338 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0abfb696-f207-4a48-983f-bf8b62f453d0-combined-ca-bundle\") pod \"keystone-7676c745b9-d7652\" (UID: \"0abfb696-f207-4a48-983f-bf8b62f453d0\") " pod="openstack/keystone-7676c745b9-d7652" Jan 31 07:53:38 crc kubenswrapper[4826]: I0131 07:53:38.366633 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0abfb696-f207-4a48-983f-bf8b62f453d0-config-data\") pod \"keystone-7676c745b9-d7652\" (UID: \"0abfb696-f207-4a48-983f-bf8b62f453d0\") " pod="openstack/keystone-7676c745b9-d7652" Jan 31 07:53:38 crc kubenswrapper[4826]: I0131 07:53:38.369465 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0abfb696-f207-4a48-983f-bf8b62f453d0-credential-keys\") pod \"keystone-7676c745b9-d7652\" (UID: \"0abfb696-f207-4a48-983f-bf8b62f453d0\") " pod="openstack/keystone-7676c745b9-d7652" Jan 31 07:53:38 crc kubenswrapper[4826]: I0131 07:53:38.372517 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0abfb696-f207-4a48-983f-bf8b62f453d0-scripts\") pod \"keystone-7676c745b9-d7652\" (UID: \"0abfb696-f207-4a48-983f-bf8b62f453d0\") " pod="openstack/keystone-7676c745b9-d7652" Jan 31 07:53:38 crc kubenswrapper[4826]: I0131 07:53:38.381803 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75wfl\" (UniqueName: \"kubernetes.io/projected/0abfb696-f207-4a48-983f-bf8b62f453d0-kube-api-access-75wfl\") pod \"keystone-7676c745b9-d7652\" (UID: \"0abfb696-f207-4a48-983f-bf8b62f453d0\") " pod="openstack/keystone-7676c745b9-d7652" Jan 31 07:53:38 crc kubenswrapper[4826]: I0131 07:53:38.393706 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0abfb696-f207-4a48-983f-bf8b62f453d0-fernet-keys\") pod \"keystone-7676c745b9-d7652\" (UID: \"0abfb696-f207-4a48-983f-bf8b62f453d0\") " pod="openstack/keystone-7676c745b9-d7652" Jan 31 07:53:38 crc kubenswrapper[4826]: I0131 07:53:38.411250 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7676c745b9-d7652" Jan 31 07:53:38 crc kubenswrapper[4826]: I0131 07:53:38.879184 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7676c745b9-d7652"] Jan 31 07:53:38 crc kubenswrapper[4826]: W0131 07:53:38.890647 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0abfb696_f207_4a48_983f_bf8b62f453d0.slice/crio-12c805814136b56dbd01cef05d2a5f0a10dc571112fcee26b2eb7cbe2c42a971 WatchSource:0}: Error finding container 12c805814136b56dbd01cef05d2a5f0a10dc571112fcee26b2eb7cbe2c42a971: Status 404 returned error can't find the container with id 12c805814136b56dbd01cef05d2a5f0a10dc571112fcee26b2eb7cbe2c42a971 Jan 31 07:53:39 crc kubenswrapper[4826]: I0131 07:53:39.030742 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7676c745b9-d7652" event={"ID":"0abfb696-f207-4a48-983f-bf8b62f453d0","Type":"ContainerStarted","Data":"12c805814136b56dbd01cef05d2a5f0a10dc571112fcee26b2eb7cbe2c42a971"} Jan 31 07:53:39 crc kubenswrapper[4826]: I0131 07:53:39.034432 4826 generic.go:334] "Generic (PLEG): container finished" podID="d994c6b6-3dd2-4231-b80b-b83c88fa860f" containerID="88f75ce2f7fe85a623b4c0ac927e45c9c8cebe12719557f5d69e1d9ddd4aeb7f" exitCode=0 Jan 31 07:53:39 crc kubenswrapper[4826]: I0131 07:53:39.034583 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-s6dqh" event={"ID":"d994c6b6-3dd2-4231-b80b-b83c88fa860f","Type":"ContainerDied","Data":"88f75ce2f7fe85a623b4c0ac927e45c9c8cebe12719557f5d69e1d9ddd4aeb7f"} Jan 31 07:53:40 crc kubenswrapper[4826]: I0131 07:53:40.043364 4826 generic.go:334] "Generic (PLEG): container finished" podID="08d490ca-1c3d-4823-8d2f-b4e2fca83778" containerID="6b98c2944e3fdc39da50df7f79ca2ff48ee1e3069a0e2528c32706cf8f89b727" exitCode=0 Jan 31 07:53:40 crc kubenswrapper[4826]: I0131 07:53:40.043433 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-zfz9q" event={"ID":"08d490ca-1c3d-4823-8d2f-b4e2fca83778","Type":"ContainerDied","Data":"6b98c2944e3fdc39da50df7f79ca2ff48ee1e3069a0e2528c32706cf8f89b727"} Jan 31 07:53:40 crc kubenswrapper[4826]: I0131 07:53:40.045315 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7676c745b9-d7652" event={"ID":"0abfb696-f207-4a48-983f-bf8b62f453d0","Type":"ContainerStarted","Data":"f158c6be720027be30236ad6c0a97d3bb2a489866947a93df7469fe832ba95b6"} Jan 31 07:53:40 crc kubenswrapper[4826]: I0131 07:53:40.065659 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-65776456b6-g6gkf" Jan 31 07:53:40 crc kubenswrapper[4826]: I0131 07:53:40.068246 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-65776456b6-g6gkf" Jan 31 07:53:40 crc kubenswrapper[4826]: I0131 07:53:40.069879 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-65776456b6-g6gkf" podUID="e4282d7e-76a0-493b-b2c6-0d954d4bed7a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.142:8443: connect: connection refused" Jan 31 07:53:40 crc kubenswrapper[4826]: I0131 07:53:40.093418 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7676c745b9-d7652" podStartSLOduration=2.09339785 podStartE2EDuration="2.09339785s" podCreationTimestamp="2026-01-31 07:53:38 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:53:40.085178968 +0000 UTC m=+1051.939065327" watchObservedRunningTime="2026-01-31 07:53:40.09339785 +0000 UTC m=+1051.947284229" Jan 31 07:53:40 crc kubenswrapper[4826]: I0131 07:53:40.134822 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-97cdc8cb-tdpkc" Jan 31 07:53:40 crc kubenswrapper[4826]: I0131 07:53:40.134857 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-97cdc8cb-tdpkc" Jan 31 07:53:40 crc kubenswrapper[4826]: I0131 07:53:40.136762 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-97cdc8cb-tdpkc" podUID="e0626292-98a9-4e1f-8359-f734ed8a3118" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.143:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.143:8443: connect: connection refused" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:40.463162 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-s6dqh" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:40.620118 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnldl\" (UniqueName: \"kubernetes.io/projected/d994c6b6-3dd2-4231-b80b-b83c88fa860f-kube-api-access-dnldl\") pod \"d994c6b6-3dd2-4231-b80b-b83c88fa860f\" (UID: \"d994c6b6-3dd2-4231-b80b-b83c88fa860f\") " Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:40.620303 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d994c6b6-3dd2-4231-b80b-b83c88fa860f-combined-ca-bundle\") pod \"d994c6b6-3dd2-4231-b80b-b83c88fa860f\" (UID: \"d994c6b6-3dd2-4231-b80b-b83c88fa860f\") " Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:40.620350 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d994c6b6-3dd2-4231-b80b-b83c88fa860f-db-sync-config-data\") pod \"d994c6b6-3dd2-4231-b80b-b83c88fa860f\" (UID: \"d994c6b6-3dd2-4231-b80b-b83c88fa860f\") " Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:40.624725 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d994c6b6-3dd2-4231-b80b-b83c88fa860f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d994c6b6-3dd2-4231-b80b-b83c88fa860f" (UID: "d994c6b6-3dd2-4231-b80b-b83c88fa860f"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:40.626326 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d994c6b6-3dd2-4231-b80b-b83c88fa860f-kube-api-access-dnldl" (OuterVolumeSpecName: "kube-api-access-dnldl") pod "d994c6b6-3dd2-4231-b80b-b83c88fa860f" (UID: "d994c6b6-3dd2-4231-b80b-b83c88fa860f"). InnerVolumeSpecName "kube-api-access-dnldl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:40.644753 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d994c6b6-3dd2-4231-b80b-b83c88fa860f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d994c6b6-3dd2-4231-b80b-b83c88fa860f" (UID: "d994c6b6-3dd2-4231-b80b-b83c88fa860f"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:40.722835 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnldl\" (UniqueName: \"kubernetes.io/projected/d994c6b6-3dd2-4231-b80b-b83c88fa860f-kube-api-access-dnldl\") on node \"crc\" DevicePath \"\"" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:40.722885 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d994c6b6-3dd2-4231-b80b-b83c88fa860f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:40.722895 4826 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d994c6b6-3dd2-4231-b80b-b83c88fa860f-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.055941 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-s6dqh" event={"ID":"d994c6b6-3dd2-4231-b80b-b83c88fa860f","Type":"ContainerDied","Data":"b2aa3f88bd99909ffaa4ff692e58d150ef0d01a4ec35c6510892b7c1e0964d3f"} Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.056327 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2aa3f88bd99909ffaa4ff692e58d150ef0d01a4ec35c6510892b7c1e0964d3f" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.056484 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7676c745b9-d7652" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.057385 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-s6dqh" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.319405 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-98b594959-ps2jz"] Jan 31 07:53:43 crc kubenswrapper[4826]: E0131 07:53:41.319792 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d994c6b6-3dd2-4231-b80b-b83c88fa860f" containerName="barbican-db-sync" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.319811 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="d994c6b6-3dd2-4231-b80b-b83c88fa860f" containerName="barbican-db-sync" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.320097 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="d994c6b6-3dd2-4231-b80b-b83c88fa860f" containerName="barbican-db-sync" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.321188 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-98b594959-ps2jz" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.323376 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-s2tgd" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.324050 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.324324 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.355760 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-98b594959-ps2jz"] Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.398086 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-568c588dfd-kgqtq"] Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.399956 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-568c588dfd-kgqtq" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.404434 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.410357 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-568c588dfd-kgqtq"] Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.441433 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a2e144b-d873-4386-9328-24f745d25df7-logs\") pod \"barbican-worker-98b594959-ps2jz\" (UID: \"8a2e144b-d873-4386-9328-24f745d25df7\") " pod="openstack/barbican-worker-98b594959-ps2jz" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.441496 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a2e144b-d873-4386-9328-24f745d25df7-config-data-custom\") pod \"barbican-worker-98b594959-ps2jz\" (UID: \"8a2e144b-d873-4386-9328-24f745d25df7\") " pod="openstack/barbican-worker-98b594959-ps2jz" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.441529 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnm7j\" (UniqueName: \"kubernetes.io/projected/8a2e144b-d873-4386-9328-24f745d25df7-kube-api-access-qnm7j\") pod \"barbican-worker-98b594959-ps2jz\" (UID: \"8a2e144b-d873-4386-9328-24f745d25df7\") " pod="openstack/barbican-worker-98b594959-ps2jz" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.441683 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a2e144b-d873-4386-9328-24f745d25df7-combined-ca-bundle\") pod \"barbican-worker-98b594959-ps2jz\" (UID: \"8a2e144b-d873-4386-9328-24f745d25df7\") " pod="openstack/barbican-worker-98b594959-ps2jz" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.441889 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a2e144b-d873-4386-9328-24f745d25df7-config-data\") pod \"barbican-worker-98b594959-ps2jz\" (UID: \"8a2e144b-d873-4386-9328-24f745d25df7\") " 
pod="openstack/barbican-worker-98b594959-ps2jz" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.458931 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f46f79845-cr627"] Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.460542 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f46f79845-cr627" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.494752 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f46f79845-cr627"] Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.533508 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-cc9fbb874-77xn5"] Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.534956 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-cc9fbb874-77xn5" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.536688 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.547236 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2aea6e8c-06e5-4b80-8133-e8fe6ca42ed2-logs\") pod \"barbican-keystone-listener-568c588dfd-kgqtq\" (UID: \"2aea6e8c-06e5-4b80-8133-e8fe6ca42ed2\") " pod="openstack/barbican-keystone-listener-568c588dfd-kgqtq" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.547333 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a2e144b-d873-4386-9328-24f745d25df7-logs\") pod \"barbican-worker-98b594959-ps2jz\" (UID: \"8a2e144b-d873-4386-9328-24f745d25df7\") " pod="openstack/barbican-worker-98b594959-ps2jz" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.547365 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aea6e8c-06e5-4b80-8133-e8fe6ca42ed2-combined-ca-bundle\") pod \"barbican-keystone-listener-568c588dfd-kgqtq\" (UID: \"2aea6e8c-06e5-4b80-8133-e8fe6ca42ed2\") " pod="openstack/barbican-keystone-listener-568c588dfd-kgqtq" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.547406 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2aea6e8c-06e5-4b80-8133-e8fe6ca42ed2-config-data-custom\") pod \"barbican-keystone-listener-568c588dfd-kgqtq\" (UID: \"2aea6e8c-06e5-4b80-8133-e8fe6ca42ed2\") " pod="openstack/barbican-keystone-listener-568c588dfd-kgqtq" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.547457 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a2e144b-d873-4386-9328-24f745d25df7-config-data-custom\") pod \"barbican-worker-98b594959-ps2jz\" (UID: \"8a2e144b-d873-4386-9328-24f745d25df7\") " pod="openstack/barbican-worker-98b594959-ps2jz" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.547485 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnm7j\" (UniqueName: \"kubernetes.io/projected/8a2e144b-d873-4386-9328-24f745d25df7-kube-api-access-qnm7j\") pod \"barbican-worker-98b594959-ps2jz\" (UID: \"8a2e144b-d873-4386-9328-24f745d25df7\") " 
pod="openstack/barbican-worker-98b594959-ps2jz" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.547532 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a2e144b-d873-4386-9328-24f745d25df7-combined-ca-bundle\") pod \"barbican-worker-98b594959-ps2jz\" (UID: \"8a2e144b-d873-4386-9328-24f745d25df7\") " pod="openstack/barbican-worker-98b594959-ps2jz" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.547584 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wbtn\" (UniqueName: \"kubernetes.io/projected/2aea6e8c-06e5-4b80-8133-e8fe6ca42ed2-kube-api-access-9wbtn\") pod \"barbican-keystone-listener-568c588dfd-kgqtq\" (UID: \"2aea6e8c-06e5-4b80-8133-e8fe6ca42ed2\") " pod="openstack/barbican-keystone-listener-568c588dfd-kgqtq" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.547610 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2aea6e8c-06e5-4b80-8133-e8fe6ca42ed2-config-data\") pod \"barbican-keystone-listener-568c588dfd-kgqtq\" (UID: \"2aea6e8c-06e5-4b80-8133-e8fe6ca42ed2\") " pod="openstack/barbican-keystone-listener-568c588dfd-kgqtq" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.547652 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a2e144b-d873-4386-9328-24f745d25df7-config-data\") pod \"barbican-worker-98b594959-ps2jz\" (UID: \"8a2e144b-d873-4386-9328-24f745d25df7\") " pod="openstack/barbican-worker-98b594959-ps2jz" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.554191 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a2e144b-d873-4386-9328-24f745d25df7-config-data-custom\") pod \"barbican-worker-98b594959-ps2jz\" (UID: \"8a2e144b-d873-4386-9328-24f745d25df7\") " pod="openstack/barbican-worker-98b594959-ps2jz" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.554696 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a2e144b-d873-4386-9328-24f745d25df7-logs\") pod \"barbican-worker-98b594959-ps2jz\" (UID: \"8a2e144b-d873-4386-9328-24f745d25df7\") " pod="openstack/barbican-worker-98b594959-ps2jz" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.555741 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a2e144b-d873-4386-9328-24f745d25df7-config-data\") pod \"barbican-worker-98b594959-ps2jz\" (UID: \"8a2e144b-d873-4386-9328-24f745d25df7\") " pod="openstack/barbican-worker-98b594959-ps2jz" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.557034 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a2e144b-d873-4386-9328-24f745d25df7-combined-ca-bundle\") pod \"barbican-worker-98b594959-ps2jz\" (UID: \"8a2e144b-d873-4386-9328-24f745d25df7\") " pod="openstack/barbican-worker-98b594959-ps2jz" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.588860 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnm7j\" (UniqueName: \"kubernetes.io/projected/8a2e144b-d873-4386-9328-24f745d25df7-kube-api-access-qnm7j\") pod \"barbican-worker-98b594959-ps2jz\" (UID: 
\"8a2e144b-d873-4386-9328-24f745d25df7\") " pod="openstack/barbican-worker-98b594959-ps2jz" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.603333 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-cc9fbb874-77xn5"] Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.649238 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wbtn\" (UniqueName: \"kubernetes.io/projected/2aea6e8c-06e5-4b80-8133-e8fe6ca42ed2-kube-api-access-9wbtn\") pod \"barbican-keystone-listener-568c588dfd-kgqtq\" (UID: \"2aea6e8c-06e5-4b80-8133-e8fe6ca42ed2\") " pod="openstack/barbican-keystone-listener-568c588dfd-kgqtq" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.649282 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb06446e-3fa7-4cda-95a2-073de531249a-config\") pod \"dnsmasq-dns-7f46f79845-cr627\" (UID: \"cb06446e-3fa7-4cda-95a2-073de531249a\") " pod="openstack/dnsmasq-dns-7f46f79845-cr627" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.649308 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2aea6e8c-06e5-4b80-8133-e8fe6ca42ed2-config-data\") pod \"barbican-keystone-listener-568c588dfd-kgqtq\" (UID: \"2aea6e8c-06e5-4b80-8133-e8fe6ca42ed2\") " pod="openstack/barbican-keystone-listener-568c588dfd-kgqtq" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.649337 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb06446e-3fa7-4cda-95a2-073de531249a-dns-svc\") pod \"dnsmasq-dns-7f46f79845-cr627\" (UID: \"cb06446e-3fa7-4cda-95a2-073de531249a\") " pod="openstack/dnsmasq-dns-7f46f79845-cr627" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.649374 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb6bt\" (UniqueName: \"kubernetes.io/projected/cb06446e-3fa7-4cda-95a2-073de531249a-kube-api-access-cb6bt\") pod \"dnsmasq-dns-7f46f79845-cr627\" (UID: \"cb06446e-3fa7-4cda-95a2-073de531249a\") " pod="openstack/dnsmasq-dns-7f46f79845-cr627" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.649399 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2aea6e8c-06e5-4b80-8133-e8fe6ca42ed2-logs\") pod \"barbican-keystone-listener-568c588dfd-kgqtq\" (UID: \"2aea6e8c-06e5-4b80-8133-e8fe6ca42ed2\") " pod="openstack/barbican-keystone-listener-568c588dfd-kgqtq" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.649440 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aea6e8c-06e5-4b80-8133-e8fe6ca42ed2-combined-ca-bundle\") pod \"barbican-keystone-listener-568c588dfd-kgqtq\" (UID: \"2aea6e8c-06e5-4b80-8133-e8fe6ca42ed2\") " pod="openstack/barbican-keystone-listener-568c588dfd-kgqtq" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.649463 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb06446e-3fa7-4cda-95a2-073de531249a-ovsdbserver-nb\") pod \"dnsmasq-dns-7f46f79845-cr627\" (UID: \"cb06446e-3fa7-4cda-95a2-073de531249a\") " pod="openstack/dnsmasq-dns-7f46f79845-cr627" Jan 31 07:53:43 
crc kubenswrapper[4826]: I0131 07:53:41.649478 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2aea6e8c-06e5-4b80-8133-e8fe6ca42ed2-config-data-custom\") pod \"barbican-keystone-listener-568c588dfd-kgqtq\" (UID: \"2aea6e8c-06e5-4b80-8133-e8fe6ca42ed2\") " pod="openstack/barbican-keystone-listener-568c588dfd-kgqtq" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.649496 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b0711e4-1e31-482b-b555-5504ee5f62a7-combined-ca-bundle\") pod \"barbican-api-cc9fbb874-77xn5\" (UID: \"5b0711e4-1e31-482b-b555-5504ee5f62a7\") " pod="openstack/barbican-api-cc9fbb874-77xn5" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.649513 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4dxq\" (UniqueName: \"kubernetes.io/projected/5b0711e4-1e31-482b-b555-5504ee5f62a7-kube-api-access-s4dxq\") pod \"barbican-api-cc9fbb874-77xn5\" (UID: \"5b0711e4-1e31-482b-b555-5504ee5f62a7\") " pod="openstack/barbican-api-cc9fbb874-77xn5" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.649533 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb06446e-3fa7-4cda-95a2-073de531249a-ovsdbserver-sb\") pod \"dnsmasq-dns-7f46f79845-cr627\" (UID: \"cb06446e-3fa7-4cda-95a2-073de531249a\") " pod="openstack/dnsmasq-dns-7f46f79845-cr627" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.649551 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b0711e4-1e31-482b-b555-5504ee5f62a7-config-data\") pod \"barbican-api-cc9fbb874-77xn5\" (UID: \"5b0711e4-1e31-482b-b555-5504ee5f62a7\") " pod="openstack/barbican-api-cc9fbb874-77xn5" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.649567 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b0711e4-1e31-482b-b555-5504ee5f62a7-logs\") pod \"barbican-api-cc9fbb874-77xn5\" (UID: \"5b0711e4-1e31-482b-b555-5504ee5f62a7\") " pod="openstack/barbican-api-cc9fbb874-77xn5" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.649589 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5b0711e4-1e31-482b-b555-5504ee5f62a7-config-data-custom\") pod \"barbican-api-cc9fbb874-77xn5\" (UID: \"5b0711e4-1e31-482b-b555-5504ee5f62a7\") " pod="openstack/barbican-api-cc9fbb874-77xn5" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.650509 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2aea6e8c-06e5-4b80-8133-e8fe6ca42ed2-logs\") pod \"barbican-keystone-listener-568c588dfd-kgqtq\" (UID: \"2aea6e8c-06e5-4b80-8133-e8fe6ca42ed2\") " pod="openstack/barbican-keystone-listener-568c588dfd-kgqtq" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.653133 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2aea6e8c-06e5-4b80-8133-e8fe6ca42ed2-config-data-custom\") pod \"barbican-keystone-listener-568c588dfd-kgqtq\" (UID: 
\"2aea6e8c-06e5-4b80-8133-e8fe6ca42ed2\") " pod="openstack/barbican-keystone-listener-568c588dfd-kgqtq" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.654558 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aea6e8c-06e5-4b80-8133-e8fe6ca42ed2-combined-ca-bundle\") pod \"barbican-keystone-listener-568c588dfd-kgqtq\" (UID: \"2aea6e8c-06e5-4b80-8133-e8fe6ca42ed2\") " pod="openstack/barbican-keystone-listener-568c588dfd-kgqtq" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.655022 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2aea6e8c-06e5-4b80-8133-e8fe6ca42ed2-config-data\") pod \"barbican-keystone-listener-568c588dfd-kgqtq\" (UID: \"2aea6e8c-06e5-4b80-8133-e8fe6ca42ed2\") " pod="openstack/barbican-keystone-listener-568c588dfd-kgqtq" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.663832 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-98b594959-ps2jz" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.665729 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wbtn\" (UniqueName: \"kubernetes.io/projected/2aea6e8c-06e5-4b80-8133-e8fe6ca42ed2-kube-api-access-9wbtn\") pod \"barbican-keystone-listener-568c588dfd-kgqtq\" (UID: \"2aea6e8c-06e5-4b80-8133-e8fe6ca42ed2\") " pod="openstack/barbican-keystone-listener-568c588dfd-kgqtq" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.716712 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-568c588dfd-kgqtq" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.750800 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb06446e-3fa7-4cda-95a2-073de531249a-dns-svc\") pod \"dnsmasq-dns-7f46f79845-cr627\" (UID: \"cb06446e-3fa7-4cda-95a2-073de531249a\") " pod="openstack/dnsmasq-dns-7f46f79845-cr627" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.750860 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb6bt\" (UniqueName: \"kubernetes.io/projected/cb06446e-3fa7-4cda-95a2-073de531249a-kube-api-access-cb6bt\") pod \"dnsmasq-dns-7f46f79845-cr627\" (UID: \"cb06446e-3fa7-4cda-95a2-073de531249a\") " pod="openstack/dnsmasq-dns-7f46f79845-cr627" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.750925 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb06446e-3fa7-4cda-95a2-073de531249a-ovsdbserver-nb\") pod \"dnsmasq-dns-7f46f79845-cr627\" (UID: \"cb06446e-3fa7-4cda-95a2-073de531249a\") " pod="openstack/dnsmasq-dns-7f46f79845-cr627" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.750943 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b0711e4-1e31-482b-b555-5504ee5f62a7-combined-ca-bundle\") pod \"barbican-api-cc9fbb874-77xn5\" (UID: \"5b0711e4-1e31-482b-b555-5504ee5f62a7\") " pod="openstack/barbican-api-cc9fbb874-77xn5" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.750975 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4dxq\" (UniqueName: 
\"kubernetes.io/projected/5b0711e4-1e31-482b-b555-5504ee5f62a7-kube-api-access-s4dxq\") pod \"barbican-api-cc9fbb874-77xn5\" (UID: \"5b0711e4-1e31-482b-b555-5504ee5f62a7\") " pod="openstack/barbican-api-cc9fbb874-77xn5" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.750996 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb06446e-3fa7-4cda-95a2-073de531249a-ovsdbserver-sb\") pod \"dnsmasq-dns-7f46f79845-cr627\" (UID: \"cb06446e-3fa7-4cda-95a2-073de531249a\") " pod="openstack/dnsmasq-dns-7f46f79845-cr627" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.751013 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b0711e4-1e31-482b-b555-5504ee5f62a7-config-data\") pod \"barbican-api-cc9fbb874-77xn5\" (UID: \"5b0711e4-1e31-482b-b555-5504ee5f62a7\") " pod="openstack/barbican-api-cc9fbb874-77xn5" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.751029 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b0711e4-1e31-482b-b555-5504ee5f62a7-logs\") pod \"barbican-api-cc9fbb874-77xn5\" (UID: \"5b0711e4-1e31-482b-b555-5504ee5f62a7\") " pod="openstack/barbican-api-cc9fbb874-77xn5" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.751051 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5b0711e4-1e31-482b-b555-5504ee5f62a7-config-data-custom\") pod \"barbican-api-cc9fbb874-77xn5\" (UID: \"5b0711e4-1e31-482b-b555-5504ee5f62a7\") " pod="openstack/barbican-api-cc9fbb874-77xn5" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.751091 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb06446e-3fa7-4cda-95a2-073de531249a-config\") pod \"dnsmasq-dns-7f46f79845-cr627\" (UID: \"cb06446e-3fa7-4cda-95a2-073de531249a\") " pod="openstack/dnsmasq-dns-7f46f79845-cr627" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.756323 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b0711e4-1e31-482b-b555-5504ee5f62a7-combined-ca-bundle\") pod \"barbican-api-cc9fbb874-77xn5\" (UID: \"5b0711e4-1e31-482b-b555-5504ee5f62a7\") " pod="openstack/barbican-api-cc9fbb874-77xn5" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.854146 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b0711e4-1e31-482b-b555-5504ee5f62a7-logs\") pod \"barbican-api-cc9fbb874-77xn5\" (UID: \"5b0711e4-1e31-482b-b555-5504ee5f62a7\") " pod="openstack/barbican-api-cc9fbb874-77xn5" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.854203 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb06446e-3fa7-4cda-95a2-073de531249a-config\") pod \"dnsmasq-dns-7f46f79845-cr627\" (UID: \"cb06446e-3fa7-4cda-95a2-073de531249a\") " pod="openstack/dnsmasq-dns-7f46f79845-cr627" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.855093 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb06446e-3fa7-4cda-95a2-073de531249a-ovsdbserver-sb\") pod \"dnsmasq-dns-7f46f79845-cr627\" (UID: 
\"cb06446e-3fa7-4cda-95a2-073de531249a\") " pod="openstack/dnsmasq-dns-7f46f79845-cr627" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.855114 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb06446e-3fa7-4cda-95a2-073de531249a-ovsdbserver-nb\") pod \"dnsmasq-dns-7f46f79845-cr627\" (UID: \"cb06446e-3fa7-4cda-95a2-073de531249a\") " pod="openstack/dnsmasq-dns-7f46f79845-cr627" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.855164 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb06446e-3fa7-4cda-95a2-073de531249a-dns-svc\") pod \"dnsmasq-dns-7f46f79845-cr627\" (UID: \"cb06446e-3fa7-4cda-95a2-073de531249a\") " pod="openstack/dnsmasq-dns-7f46f79845-cr627" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.858740 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4dxq\" (UniqueName: \"kubernetes.io/projected/5b0711e4-1e31-482b-b555-5504ee5f62a7-kube-api-access-s4dxq\") pod \"barbican-api-cc9fbb874-77xn5\" (UID: \"5b0711e4-1e31-482b-b555-5504ee5f62a7\") " pod="openstack/barbican-api-cc9fbb874-77xn5" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.859155 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b0711e4-1e31-482b-b555-5504ee5f62a7-config-data\") pod \"barbican-api-cc9fbb874-77xn5\" (UID: \"5b0711e4-1e31-482b-b555-5504ee5f62a7\") " pod="openstack/barbican-api-cc9fbb874-77xn5" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.859736 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb6bt\" (UniqueName: \"kubernetes.io/projected/cb06446e-3fa7-4cda-95a2-073de531249a-kube-api-access-cb6bt\") pod \"dnsmasq-dns-7f46f79845-cr627\" (UID: \"cb06446e-3fa7-4cda-95a2-073de531249a\") " pod="openstack/dnsmasq-dns-7f46f79845-cr627" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.863482 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5b0711e4-1e31-482b-b555-5504ee5f62a7-config-data-custom\") pod \"barbican-api-cc9fbb874-77xn5\" (UID: \"5b0711e4-1e31-482b-b555-5504ee5f62a7\") " pod="openstack/barbican-api-cc9fbb874-77xn5" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:41.930532 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-cc9fbb874-77xn5" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:42.084732 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f46f79845-cr627" Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:43.587855 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-98b594959-ps2jz"] Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:43.602898 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-cc9fbb874-77xn5"] Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:43.625059 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f46f79845-cr627"] Jan 31 07:53:43 crc kubenswrapper[4826]: I0131 07:53:43.633233 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-568c588dfd-kgqtq"] Jan 31 07:53:44 crc kubenswrapper[4826]: I0131 07:53:44.198982 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7fbf887dc4-4528v"] Jan 31 07:53:44 crc kubenswrapper[4826]: I0131 07:53:44.215182 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7fbf887dc4-4528v" Jan 31 07:53:44 crc kubenswrapper[4826]: I0131 07:53:44.217272 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 31 07:53:44 crc kubenswrapper[4826]: I0131 07:53:44.218395 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 31 07:53:44 crc kubenswrapper[4826]: I0131 07:53:44.226872 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7fbf887dc4-4528v"] Jan 31 07:53:44 crc kubenswrapper[4826]: I0131 07:53:44.316082 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91ee6dfb-fe40-4e3f-8719-6604432e07f5-combined-ca-bundle\") pod \"barbican-api-7fbf887dc4-4528v\" (UID: \"91ee6dfb-fe40-4e3f-8719-6604432e07f5\") " pod="openstack/barbican-api-7fbf887dc4-4528v" Jan 31 07:53:44 crc kubenswrapper[4826]: I0131 07:53:44.316400 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91ee6dfb-fe40-4e3f-8719-6604432e07f5-public-tls-certs\") pod \"barbican-api-7fbf887dc4-4528v\" (UID: \"91ee6dfb-fe40-4e3f-8719-6604432e07f5\") " pod="openstack/barbican-api-7fbf887dc4-4528v" Jan 31 07:53:44 crc kubenswrapper[4826]: I0131 07:53:44.316431 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91ee6dfb-fe40-4e3f-8719-6604432e07f5-internal-tls-certs\") pod \"barbican-api-7fbf887dc4-4528v\" (UID: \"91ee6dfb-fe40-4e3f-8719-6604432e07f5\") " pod="openstack/barbican-api-7fbf887dc4-4528v" Jan 31 07:53:44 crc kubenswrapper[4826]: I0131 07:53:44.316533 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x8nj\" (UniqueName: \"kubernetes.io/projected/91ee6dfb-fe40-4e3f-8719-6604432e07f5-kube-api-access-6x8nj\") pod \"barbican-api-7fbf887dc4-4528v\" (UID: \"91ee6dfb-fe40-4e3f-8719-6604432e07f5\") " pod="openstack/barbican-api-7fbf887dc4-4528v" Jan 31 07:53:44 crc kubenswrapper[4826]: I0131 07:53:44.316552 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/91ee6dfb-fe40-4e3f-8719-6604432e07f5-config-data-custom\") pod 
\"barbican-api-7fbf887dc4-4528v\" (UID: \"91ee6dfb-fe40-4e3f-8719-6604432e07f5\") " pod="openstack/barbican-api-7fbf887dc4-4528v" Jan 31 07:53:44 crc kubenswrapper[4826]: I0131 07:53:44.316581 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91ee6dfb-fe40-4e3f-8719-6604432e07f5-logs\") pod \"barbican-api-7fbf887dc4-4528v\" (UID: \"91ee6dfb-fe40-4e3f-8719-6604432e07f5\") " pod="openstack/barbican-api-7fbf887dc4-4528v" Jan 31 07:53:44 crc kubenswrapper[4826]: I0131 07:53:44.316605 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91ee6dfb-fe40-4e3f-8719-6604432e07f5-config-data\") pod \"barbican-api-7fbf887dc4-4528v\" (UID: \"91ee6dfb-fe40-4e3f-8719-6604432e07f5\") " pod="openstack/barbican-api-7fbf887dc4-4528v" Jan 31 07:53:44 crc kubenswrapper[4826]: I0131 07:53:44.418509 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x8nj\" (UniqueName: \"kubernetes.io/projected/91ee6dfb-fe40-4e3f-8719-6604432e07f5-kube-api-access-6x8nj\") pod \"barbican-api-7fbf887dc4-4528v\" (UID: \"91ee6dfb-fe40-4e3f-8719-6604432e07f5\") " pod="openstack/barbican-api-7fbf887dc4-4528v" Jan 31 07:53:44 crc kubenswrapper[4826]: I0131 07:53:44.418571 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/91ee6dfb-fe40-4e3f-8719-6604432e07f5-config-data-custom\") pod \"barbican-api-7fbf887dc4-4528v\" (UID: \"91ee6dfb-fe40-4e3f-8719-6604432e07f5\") " pod="openstack/barbican-api-7fbf887dc4-4528v" Jan 31 07:53:44 crc kubenswrapper[4826]: I0131 07:53:44.418606 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91ee6dfb-fe40-4e3f-8719-6604432e07f5-logs\") pod \"barbican-api-7fbf887dc4-4528v\" (UID: \"91ee6dfb-fe40-4e3f-8719-6604432e07f5\") " pod="openstack/barbican-api-7fbf887dc4-4528v" Jan 31 07:53:44 crc kubenswrapper[4826]: I0131 07:53:44.418631 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91ee6dfb-fe40-4e3f-8719-6604432e07f5-config-data\") pod \"barbican-api-7fbf887dc4-4528v\" (UID: \"91ee6dfb-fe40-4e3f-8719-6604432e07f5\") " pod="openstack/barbican-api-7fbf887dc4-4528v" Jan 31 07:53:44 crc kubenswrapper[4826]: I0131 07:53:44.418676 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91ee6dfb-fe40-4e3f-8719-6604432e07f5-combined-ca-bundle\") pod \"barbican-api-7fbf887dc4-4528v\" (UID: \"91ee6dfb-fe40-4e3f-8719-6604432e07f5\") " pod="openstack/barbican-api-7fbf887dc4-4528v" Jan 31 07:53:44 crc kubenswrapper[4826]: I0131 07:53:44.418695 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91ee6dfb-fe40-4e3f-8719-6604432e07f5-public-tls-certs\") pod \"barbican-api-7fbf887dc4-4528v\" (UID: \"91ee6dfb-fe40-4e3f-8719-6604432e07f5\") " pod="openstack/barbican-api-7fbf887dc4-4528v" Jan 31 07:53:44 crc kubenswrapper[4826]: I0131 07:53:44.418717 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91ee6dfb-fe40-4e3f-8719-6604432e07f5-internal-tls-certs\") pod 
\"barbican-api-7fbf887dc4-4528v\" (UID: \"91ee6dfb-fe40-4e3f-8719-6604432e07f5\") " pod="openstack/barbican-api-7fbf887dc4-4528v" Jan 31 07:53:44 crc kubenswrapper[4826]: I0131 07:53:44.419084 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91ee6dfb-fe40-4e3f-8719-6604432e07f5-logs\") pod \"barbican-api-7fbf887dc4-4528v\" (UID: \"91ee6dfb-fe40-4e3f-8719-6604432e07f5\") " pod="openstack/barbican-api-7fbf887dc4-4528v" Jan 31 07:53:44 crc kubenswrapper[4826]: I0131 07:53:44.429299 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91ee6dfb-fe40-4e3f-8719-6604432e07f5-config-data\") pod \"barbican-api-7fbf887dc4-4528v\" (UID: \"91ee6dfb-fe40-4e3f-8719-6604432e07f5\") " pod="openstack/barbican-api-7fbf887dc4-4528v" Jan 31 07:53:44 crc kubenswrapper[4826]: I0131 07:53:44.429898 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91ee6dfb-fe40-4e3f-8719-6604432e07f5-public-tls-certs\") pod \"barbican-api-7fbf887dc4-4528v\" (UID: \"91ee6dfb-fe40-4e3f-8719-6604432e07f5\") " pod="openstack/barbican-api-7fbf887dc4-4528v" Jan 31 07:53:44 crc kubenswrapper[4826]: I0131 07:53:44.432632 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91ee6dfb-fe40-4e3f-8719-6604432e07f5-internal-tls-certs\") pod \"barbican-api-7fbf887dc4-4528v\" (UID: \"91ee6dfb-fe40-4e3f-8719-6604432e07f5\") " pod="openstack/barbican-api-7fbf887dc4-4528v" Jan 31 07:53:44 crc kubenswrapper[4826]: I0131 07:53:44.438622 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/91ee6dfb-fe40-4e3f-8719-6604432e07f5-config-data-custom\") pod \"barbican-api-7fbf887dc4-4528v\" (UID: \"91ee6dfb-fe40-4e3f-8719-6604432e07f5\") " pod="openstack/barbican-api-7fbf887dc4-4528v" Jan 31 07:53:44 crc kubenswrapper[4826]: I0131 07:53:44.442497 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91ee6dfb-fe40-4e3f-8719-6604432e07f5-combined-ca-bundle\") pod \"barbican-api-7fbf887dc4-4528v\" (UID: \"91ee6dfb-fe40-4e3f-8719-6604432e07f5\") " pod="openstack/barbican-api-7fbf887dc4-4528v" Jan 31 07:53:44 crc kubenswrapper[4826]: I0131 07:53:44.447657 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x8nj\" (UniqueName: \"kubernetes.io/projected/91ee6dfb-fe40-4e3f-8719-6604432e07f5-kube-api-access-6x8nj\") pod \"barbican-api-7fbf887dc4-4528v\" (UID: \"91ee6dfb-fe40-4e3f-8719-6604432e07f5\") " pod="openstack/barbican-api-7fbf887dc4-4528v" Jan 31 07:53:44 crc kubenswrapper[4826]: I0131 07:53:44.566914 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7fbf887dc4-4528v" Jan 31 07:53:48 crc kubenswrapper[4826]: I0131 07:53:48.138609 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f46f79845-cr627" event={"ID":"cb06446e-3fa7-4cda-95a2-073de531249a","Type":"ContainerStarted","Data":"db94498e6d357547e4ed5ba028b1772dbd91c92d63ea65916a55c87efbb55e74"} Jan 31 07:53:49 crc kubenswrapper[4826]: W0131 07:53:49.288654 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b0711e4_1e31_482b_b555_5504ee5f62a7.slice/crio-561acef887297b1455aa61384a972bd3753ab7677ae440bd8916affff24fd5ed WatchSource:0}: Error finding container 561acef887297b1455aa61384a972bd3753ab7677ae440bd8916affff24fd5ed: Status 404 returned error can't find the container with id 561acef887297b1455aa61384a972bd3753ab7677ae440bd8916affff24fd5ed Jan 31 07:53:49 crc kubenswrapper[4826]: W0131 07:53:49.296143 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a2e144b_d873_4386_9328_24f745d25df7.slice/crio-a5f29b0589d9eda6a61decf0a7c8f2d3f3012a02ca714af4025e74bcd30c46b1 WatchSource:0}: Error finding container a5f29b0589d9eda6a61decf0a7c8f2d3f3012a02ca714af4025e74bcd30c46b1: Status 404 returned error can't find the container with id a5f29b0589d9eda6a61decf0a7c8f2d3f3012a02ca714af4025e74bcd30c46b1 Jan 31 07:53:49 crc kubenswrapper[4826]: I0131 07:53:49.393480 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-zfz9q" Jan 31 07:53:49 crc kubenswrapper[4826]: I0131 07:53:49.518602 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/08d490ca-1c3d-4823-8d2f-b4e2fca83778-config\") pod \"08d490ca-1c3d-4823-8d2f-b4e2fca83778\" (UID: \"08d490ca-1c3d-4823-8d2f-b4e2fca83778\") " Jan 31 07:53:49 crc kubenswrapper[4826]: I0131 07:53:49.518787 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08d490ca-1c3d-4823-8d2f-b4e2fca83778-combined-ca-bundle\") pod \"08d490ca-1c3d-4823-8d2f-b4e2fca83778\" (UID: \"08d490ca-1c3d-4823-8d2f-b4e2fca83778\") " Jan 31 07:53:49 crc kubenswrapper[4826]: I0131 07:53:49.518884 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmr6z\" (UniqueName: \"kubernetes.io/projected/08d490ca-1c3d-4823-8d2f-b4e2fca83778-kube-api-access-jmr6z\") pod \"08d490ca-1c3d-4823-8d2f-b4e2fca83778\" (UID: \"08d490ca-1c3d-4823-8d2f-b4e2fca83778\") " Jan 31 07:53:49 crc kubenswrapper[4826]: I0131 07:53:49.522877 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08d490ca-1c3d-4823-8d2f-b4e2fca83778-kube-api-access-jmr6z" (OuterVolumeSpecName: "kube-api-access-jmr6z") pod "08d490ca-1c3d-4823-8d2f-b4e2fca83778" (UID: "08d490ca-1c3d-4823-8d2f-b4e2fca83778"). InnerVolumeSpecName "kube-api-access-jmr6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:53:49 crc kubenswrapper[4826]: I0131 07:53:49.542276 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08d490ca-1c3d-4823-8d2f-b4e2fca83778-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "08d490ca-1c3d-4823-8d2f-b4e2fca83778" (UID: "08d490ca-1c3d-4823-8d2f-b4e2fca83778"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:53:49 crc kubenswrapper[4826]: I0131 07:53:49.545954 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08d490ca-1c3d-4823-8d2f-b4e2fca83778-config" (OuterVolumeSpecName: "config") pod "08d490ca-1c3d-4823-8d2f-b4e2fca83778" (UID: "08d490ca-1c3d-4823-8d2f-b4e2fca83778"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:53:49 crc kubenswrapper[4826]: I0131 07:53:49.620887 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08d490ca-1c3d-4823-8d2f-b4e2fca83778-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:53:49 crc kubenswrapper[4826]: I0131 07:53:49.620929 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmr6z\" (UniqueName: \"kubernetes.io/projected/08d490ca-1c3d-4823-8d2f-b4e2fca83778-kube-api-access-jmr6z\") on node \"crc\" DevicePath \"\"" Jan 31 07:53:49 crc kubenswrapper[4826]: I0131 07:53:49.620940 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/08d490ca-1c3d-4823-8d2f-b4e2fca83778-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:53:50 crc kubenswrapper[4826]: I0131 07:53:50.059222 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-65776456b6-g6gkf" podUID="e4282d7e-76a0-493b-b2c6-0d954d4bed7a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.142:8443: connect: connection refused" Jan 31 07:53:50 crc kubenswrapper[4826]: I0131 07:53:50.136123 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-97cdc8cb-tdpkc" podUID="e0626292-98a9-4e1f-8359-f734ed8a3118" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.143:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.143:8443: connect: connection refused" Jan 31 07:53:50 crc kubenswrapper[4826]: I0131 07:53:50.163782 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-zfz9q" Jan 31 07:53:50 crc kubenswrapper[4826]: I0131 07:53:50.163772 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-zfz9q" event={"ID":"08d490ca-1c3d-4823-8d2f-b4e2fca83778","Type":"ContainerDied","Data":"4227c0d98d14e08af73c8f968f16106244dbf9055abf03406d9558dce2bf726a"} Jan 31 07:53:50 crc kubenswrapper[4826]: I0131 07:53:50.163903 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4227c0d98d14e08af73c8f968f16106244dbf9055abf03406d9558dce2bf726a" Jan 31 07:53:50 crc kubenswrapper[4826]: I0131 07:53:50.165881 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-cc9fbb874-77xn5" event={"ID":"5b0711e4-1e31-482b-b555-5504ee5f62a7","Type":"ContainerStarted","Data":"561acef887297b1455aa61384a972bd3753ab7677ae440bd8916affff24fd5ed"} Jan 31 07:53:50 crc kubenswrapper[4826]: I0131 07:53:50.173132 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-98b594959-ps2jz" event={"ID":"8a2e144b-d873-4386-9328-24f745d25df7","Type":"ContainerStarted","Data":"a5f29b0589d9eda6a61decf0a7c8f2d3f3012a02ca714af4025e74bcd30c46b1"} Jan 31 07:53:50 crc kubenswrapper[4826]: I0131 07:53:50.174669 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-568c588dfd-kgqtq" event={"ID":"2aea6e8c-06e5-4b80-8133-e8fe6ca42ed2","Type":"ContainerStarted","Data":"9573e49c9a6f23fa563a37955e1d4c54afffd4b1ebabedd7846f7b95da2e9e21"} Jan 31 07:53:50 crc kubenswrapper[4826]: I0131 07:53:50.306741 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7fbf887dc4-4528v"] Jan 31 07:53:50 crc kubenswrapper[4826]: I0131 07:53:50.734172 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f46f79845-cr627"] Jan 31 07:53:50 crc kubenswrapper[4826]: I0131 07:53:50.775580 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-869f779d85-c5gj8"] Jan 31 07:53:50 crc kubenswrapper[4826]: E0131 07:53:50.776543 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08d490ca-1c3d-4823-8d2f-b4e2fca83778" containerName="neutron-db-sync" Jan 31 07:53:50 crc kubenswrapper[4826]: I0131 07:53:50.776557 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="08d490ca-1c3d-4823-8d2f-b4e2fca83778" containerName="neutron-db-sync" Jan 31 07:53:50 crc kubenswrapper[4826]: I0131 07:53:50.776718 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="08d490ca-1c3d-4823-8d2f-b4e2fca83778" containerName="neutron-db-sync" Jan 31 07:53:50 crc kubenswrapper[4826]: I0131 07:53:50.777632 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-869f779d85-c5gj8" Jan 31 07:53:50 crc kubenswrapper[4826]: I0131 07:53:50.896225 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-c5gj8"] Jan 31 07:53:50 crc kubenswrapper[4826]: I0131 07:53:50.896295 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-68cf7546dd-6tbvn"] Jan 31 07:53:50 crc kubenswrapper[4826]: I0131 07:53:50.898409 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-68cf7546dd-6tbvn"] Jan 31 07:53:50 crc kubenswrapper[4826]: I0131 07:53:50.898515 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-68cf7546dd-6tbvn" Jan 31 07:53:50 crc kubenswrapper[4826]: I0131 07:53:50.902672 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 31 07:53:50 crc kubenswrapper[4826]: I0131 07:53:50.903057 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 31 07:53:50 crc kubenswrapper[4826]: I0131 07:53:50.903205 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 31 07:53:50 crc kubenswrapper[4826]: I0131 07:53:50.903218 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-kxg8h" Jan 31 07:53:50 crc kubenswrapper[4826]: I0131 07:53:50.973166 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e3f57b07-aade-4cd7-9ebf-b374396665b7-ovsdbserver-nb\") pod \"dnsmasq-dns-869f779d85-c5gj8\" (UID: \"e3f57b07-aade-4cd7-9ebf-b374396665b7\") " pod="openstack/dnsmasq-dns-869f779d85-c5gj8" Jan 31 07:53:50 crc kubenswrapper[4826]: I0131 07:53:50.973216 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e3f57b07-aade-4cd7-9ebf-b374396665b7-ovsdbserver-sb\") pod \"dnsmasq-dns-869f779d85-c5gj8\" (UID: \"e3f57b07-aade-4cd7-9ebf-b374396665b7\") " pod="openstack/dnsmasq-dns-869f779d85-c5gj8" Jan 31 07:53:50 crc kubenswrapper[4826]: I0131 07:53:50.973266 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrrmd\" (UniqueName: \"kubernetes.io/projected/e3f57b07-aade-4cd7-9ebf-b374396665b7-kube-api-access-jrrmd\") pod \"dnsmasq-dns-869f779d85-c5gj8\" (UID: \"e3f57b07-aade-4cd7-9ebf-b374396665b7\") " pod="openstack/dnsmasq-dns-869f779d85-c5gj8" Jan 31 07:53:50 crc kubenswrapper[4826]: I0131 07:53:50.973294 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e3f57b07-aade-4cd7-9ebf-b374396665b7-dns-svc\") pod \"dnsmasq-dns-869f779d85-c5gj8\" (UID: \"e3f57b07-aade-4cd7-9ebf-b374396665b7\") " pod="openstack/dnsmasq-dns-869f779d85-c5gj8" Jan 31 07:53:50 crc kubenswrapper[4826]: I0131 07:53:50.973325 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3f57b07-aade-4cd7-9ebf-b374396665b7-config\") pod \"dnsmasq-dns-869f779d85-c5gj8\" (UID: \"e3f57b07-aade-4cd7-9ebf-b374396665b7\") " pod="openstack/dnsmasq-dns-869f779d85-c5gj8" Jan 31 07:53:51 crc kubenswrapper[4826]: I0131 07:53:51.074948 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pspxs\" (UniqueName: \"kubernetes.io/projected/313879a0-5213-4c68-aea1-ba01d960b862-kube-api-access-pspxs\") pod \"neutron-68cf7546dd-6tbvn\" (UID: \"313879a0-5213-4c68-aea1-ba01d960b862\") " pod="openstack/neutron-68cf7546dd-6tbvn" Jan 31 07:53:51 crc kubenswrapper[4826]: I0131 07:53:51.075049 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/313879a0-5213-4c68-aea1-ba01d960b862-httpd-config\") pod \"neutron-68cf7546dd-6tbvn\" (UID: \"313879a0-5213-4c68-aea1-ba01d960b862\") " 
pod="openstack/neutron-68cf7546dd-6tbvn" Jan 31 07:53:51 crc kubenswrapper[4826]: I0131 07:53:51.075132 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/313879a0-5213-4c68-aea1-ba01d960b862-ovndb-tls-certs\") pod \"neutron-68cf7546dd-6tbvn\" (UID: \"313879a0-5213-4c68-aea1-ba01d960b862\") " pod="openstack/neutron-68cf7546dd-6tbvn" Jan 31 07:53:51 crc kubenswrapper[4826]: I0131 07:53:51.075183 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e3f57b07-aade-4cd7-9ebf-b374396665b7-ovsdbserver-nb\") pod \"dnsmasq-dns-869f779d85-c5gj8\" (UID: \"e3f57b07-aade-4cd7-9ebf-b374396665b7\") " pod="openstack/dnsmasq-dns-869f779d85-c5gj8" Jan 31 07:53:51 crc kubenswrapper[4826]: I0131 07:53:51.075211 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/313879a0-5213-4c68-aea1-ba01d960b862-config\") pod \"neutron-68cf7546dd-6tbvn\" (UID: \"313879a0-5213-4c68-aea1-ba01d960b862\") " pod="openstack/neutron-68cf7546dd-6tbvn" Jan 31 07:53:51 crc kubenswrapper[4826]: I0131 07:53:51.075238 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/313879a0-5213-4c68-aea1-ba01d960b862-combined-ca-bundle\") pod \"neutron-68cf7546dd-6tbvn\" (UID: \"313879a0-5213-4c68-aea1-ba01d960b862\") " pod="openstack/neutron-68cf7546dd-6tbvn" Jan 31 07:53:51 crc kubenswrapper[4826]: I0131 07:53:51.075261 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e3f57b07-aade-4cd7-9ebf-b374396665b7-ovsdbserver-sb\") pod \"dnsmasq-dns-869f779d85-c5gj8\" (UID: \"e3f57b07-aade-4cd7-9ebf-b374396665b7\") " pod="openstack/dnsmasq-dns-869f779d85-c5gj8" Jan 31 07:53:51 crc kubenswrapper[4826]: I0131 07:53:51.075339 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrrmd\" (UniqueName: \"kubernetes.io/projected/e3f57b07-aade-4cd7-9ebf-b374396665b7-kube-api-access-jrrmd\") pod \"dnsmasq-dns-869f779d85-c5gj8\" (UID: \"e3f57b07-aade-4cd7-9ebf-b374396665b7\") " pod="openstack/dnsmasq-dns-869f779d85-c5gj8" Jan 31 07:53:51 crc kubenswrapper[4826]: I0131 07:53:51.075374 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e3f57b07-aade-4cd7-9ebf-b374396665b7-dns-svc\") pod \"dnsmasq-dns-869f779d85-c5gj8\" (UID: \"e3f57b07-aade-4cd7-9ebf-b374396665b7\") " pod="openstack/dnsmasq-dns-869f779d85-c5gj8" Jan 31 07:53:51 crc kubenswrapper[4826]: I0131 07:53:51.075406 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3f57b07-aade-4cd7-9ebf-b374396665b7-config\") pod \"dnsmasq-dns-869f779d85-c5gj8\" (UID: \"e3f57b07-aade-4cd7-9ebf-b374396665b7\") " pod="openstack/dnsmasq-dns-869f779d85-c5gj8" Jan 31 07:53:51 crc kubenswrapper[4826]: I0131 07:53:51.076239 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e3f57b07-aade-4cd7-9ebf-b374396665b7-ovsdbserver-nb\") pod \"dnsmasq-dns-869f779d85-c5gj8\" (UID: \"e3f57b07-aade-4cd7-9ebf-b374396665b7\") " pod="openstack/dnsmasq-dns-869f779d85-c5gj8" Jan 31 07:53:51 
crc kubenswrapper[4826]: I0131 07:53:51.076605 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3f57b07-aade-4cd7-9ebf-b374396665b7-config\") pod \"dnsmasq-dns-869f779d85-c5gj8\" (UID: \"e3f57b07-aade-4cd7-9ebf-b374396665b7\") " pod="openstack/dnsmasq-dns-869f779d85-c5gj8" Jan 31 07:53:51 crc kubenswrapper[4826]: I0131 07:53:51.076808 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e3f57b07-aade-4cd7-9ebf-b374396665b7-ovsdbserver-sb\") pod \"dnsmasq-dns-869f779d85-c5gj8\" (UID: \"e3f57b07-aade-4cd7-9ebf-b374396665b7\") " pod="openstack/dnsmasq-dns-869f779d85-c5gj8" Jan 31 07:53:51 crc kubenswrapper[4826]: I0131 07:53:51.077340 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e3f57b07-aade-4cd7-9ebf-b374396665b7-dns-svc\") pod \"dnsmasq-dns-869f779d85-c5gj8\" (UID: \"e3f57b07-aade-4cd7-9ebf-b374396665b7\") " pod="openstack/dnsmasq-dns-869f779d85-c5gj8" Jan 31 07:53:51 crc kubenswrapper[4826]: I0131 07:53:51.096675 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrrmd\" (UniqueName: \"kubernetes.io/projected/e3f57b07-aade-4cd7-9ebf-b374396665b7-kube-api-access-jrrmd\") pod \"dnsmasq-dns-869f779d85-c5gj8\" (UID: \"e3f57b07-aade-4cd7-9ebf-b374396665b7\") " pod="openstack/dnsmasq-dns-869f779d85-c5gj8" Jan 31 07:53:51 crc kubenswrapper[4826]: I0131 07:53:51.136037 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-869f779d85-c5gj8" Jan 31 07:53:51 crc kubenswrapper[4826]: I0131 07:53:51.176639 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pspxs\" (UniqueName: \"kubernetes.io/projected/313879a0-5213-4c68-aea1-ba01d960b862-kube-api-access-pspxs\") pod \"neutron-68cf7546dd-6tbvn\" (UID: \"313879a0-5213-4c68-aea1-ba01d960b862\") " pod="openstack/neutron-68cf7546dd-6tbvn" Jan 31 07:53:51 crc kubenswrapper[4826]: I0131 07:53:51.176724 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/313879a0-5213-4c68-aea1-ba01d960b862-httpd-config\") pod \"neutron-68cf7546dd-6tbvn\" (UID: \"313879a0-5213-4c68-aea1-ba01d960b862\") " pod="openstack/neutron-68cf7546dd-6tbvn" Jan 31 07:53:51 crc kubenswrapper[4826]: I0131 07:53:51.176798 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/313879a0-5213-4c68-aea1-ba01d960b862-ovndb-tls-certs\") pod \"neutron-68cf7546dd-6tbvn\" (UID: \"313879a0-5213-4c68-aea1-ba01d960b862\") " pod="openstack/neutron-68cf7546dd-6tbvn" Jan 31 07:53:51 crc kubenswrapper[4826]: I0131 07:53:51.176830 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/313879a0-5213-4c68-aea1-ba01d960b862-config\") pod \"neutron-68cf7546dd-6tbvn\" (UID: \"313879a0-5213-4c68-aea1-ba01d960b862\") " pod="openstack/neutron-68cf7546dd-6tbvn" Jan 31 07:53:51 crc kubenswrapper[4826]: I0131 07:53:51.176864 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/313879a0-5213-4c68-aea1-ba01d960b862-combined-ca-bundle\") pod \"neutron-68cf7546dd-6tbvn\" (UID: \"313879a0-5213-4c68-aea1-ba01d960b862\") " 
pod="openstack/neutron-68cf7546dd-6tbvn" Jan 31 07:53:51 crc kubenswrapper[4826]: I0131 07:53:51.181705 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/313879a0-5213-4c68-aea1-ba01d960b862-ovndb-tls-certs\") pod \"neutron-68cf7546dd-6tbvn\" (UID: \"313879a0-5213-4c68-aea1-ba01d960b862\") " pod="openstack/neutron-68cf7546dd-6tbvn" Jan 31 07:53:51 crc kubenswrapper[4826]: I0131 07:53:51.183810 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/313879a0-5213-4c68-aea1-ba01d960b862-httpd-config\") pod \"neutron-68cf7546dd-6tbvn\" (UID: \"313879a0-5213-4c68-aea1-ba01d960b862\") " pod="openstack/neutron-68cf7546dd-6tbvn" Jan 31 07:53:51 crc kubenswrapper[4826]: I0131 07:53:51.184132 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/313879a0-5213-4c68-aea1-ba01d960b862-config\") pod \"neutron-68cf7546dd-6tbvn\" (UID: \"313879a0-5213-4c68-aea1-ba01d960b862\") " pod="openstack/neutron-68cf7546dd-6tbvn" Jan 31 07:53:51 crc kubenswrapper[4826]: I0131 07:53:51.187766 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/313879a0-5213-4c68-aea1-ba01d960b862-combined-ca-bundle\") pod \"neutron-68cf7546dd-6tbvn\" (UID: \"313879a0-5213-4c68-aea1-ba01d960b862\") " pod="openstack/neutron-68cf7546dd-6tbvn" Jan 31 07:53:51 crc kubenswrapper[4826]: I0131 07:53:51.193130 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pspxs\" (UniqueName: \"kubernetes.io/projected/313879a0-5213-4c68-aea1-ba01d960b862-kube-api-access-pspxs\") pod \"neutron-68cf7546dd-6tbvn\" (UID: \"313879a0-5213-4c68-aea1-ba01d960b862\") " pod="openstack/neutron-68cf7546dd-6tbvn" Jan 31 07:53:51 crc kubenswrapper[4826]: I0131 07:53:51.201925 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7fbf887dc4-4528v" event={"ID":"91ee6dfb-fe40-4e3f-8719-6604432e07f5","Type":"ContainerStarted","Data":"4696d8949bcce5a9b5436bb9a4d2a639b1beea22e774d8c5287ef7889b4d7da0"} Jan 31 07:53:51 crc kubenswrapper[4826]: I0131 07:53:51.201982 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7fbf887dc4-4528v" event={"ID":"91ee6dfb-fe40-4e3f-8719-6604432e07f5","Type":"ContainerStarted","Data":"c807d2dad0bfc2d48f9d7c976cf5f67df1b65ab87118fb65dd50a81a5154d425"} Jan 31 07:53:51 crc kubenswrapper[4826]: I0131 07:53:51.212136 4826 generic.go:334] "Generic (PLEG): container finished" podID="cb06446e-3fa7-4cda-95a2-073de531249a" containerID="a6e9136843ebe727380a0d57483475daa3a40c7b498dc11e19e528324c4e60e7" exitCode=0 Jan 31 07:53:51 crc kubenswrapper[4826]: I0131 07:53:51.212220 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f46f79845-cr627" event={"ID":"cb06446e-3fa7-4cda-95a2-073de531249a","Type":"ContainerDied","Data":"a6e9136843ebe727380a0d57483475daa3a40c7b498dc11e19e528324c4e60e7"} Jan 31 07:53:51 crc kubenswrapper[4826]: I0131 07:53:51.220475 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-cc9fbb874-77xn5" event={"ID":"5b0711e4-1e31-482b-b555-5504ee5f62a7","Type":"ContainerStarted","Data":"dd57054deec4526da446e743403a3557b4f26b0041a869280e1ad2d6dc27cb11"} Jan 31 07:53:51 crc kubenswrapper[4826]: I0131 07:53:51.231374 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-68cf7546dd-6tbvn" Jan 31 07:53:51 crc kubenswrapper[4826]: I0131 07:53:51.770503 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-c5gj8"] Jan 31 07:53:52 crc kubenswrapper[4826]: W0131 07:53:52.130439 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3f57b07_aade_4cd7_9ebf_b374396665b7.slice/crio-3f95802fa9c2d6351b7c9562ae46e2f18990ac72a387331f70638c6b70692257 WatchSource:0}: Error finding container 3f95802fa9c2d6351b7c9562ae46e2f18990ac72a387331f70638c6b70692257: Status 404 returned error can't find the container with id 3f95802fa9c2d6351b7c9562ae46e2f18990ac72a387331f70638c6b70692257 Jan 31 07:53:52 crc kubenswrapper[4826]: I0131 07:53:52.240024 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-9tmn2" event={"ID":"5f870e24-0e35-4ee6-805b-f81617554dc2","Type":"ContainerStarted","Data":"6430c1a2d7552a478c284e52145067d9f399ba42e28b9104d38881f5089df21f"} Jan 31 07:53:52 crc kubenswrapper[4826]: I0131 07:53:52.246958 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-c5gj8" event={"ID":"e3f57b07-aade-4cd7-9ebf-b374396665b7","Type":"ContainerStarted","Data":"3f95802fa9c2d6351b7c9562ae46e2f18990ac72a387331f70638c6b70692257"} Jan 31 07:53:52 crc kubenswrapper[4826]: I0131 07:53:52.256026 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa070000-7471-4ca5-be06-fecc9ade01cc","Type":"ContainerStarted","Data":"f919b8bf555818e47450e1ebb47ad74a022b57b3566d4435dbf20addb484b748"} Jan 31 07:53:52 crc kubenswrapper[4826]: I0131 07:53:52.256284 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aa070000-7471-4ca5-be06-fecc9ade01cc" containerName="ceilometer-central-agent" containerID="cri-o://d5cd64b5e2b6e6e4bc49e7cec4d47b1dd0f9d4639b8c152986c303fc79a0bb22" gracePeriod=30 Jan 31 07:53:52 crc kubenswrapper[4826]: I0131 07:53:52.256574 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 07:53:52 crc kubenswrapper[4826]: I0131 07:53:52.256635 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aa070000-7471-4ca5-be06-fecc9ade01cc" containerName="proxy-httpd" containerID="cri-o://f919b8bf555818e47450e1ebb47ad74a022b57b3566d4435dbf20addb484b748" gracePeriod=30 Jan 31 07:53:52 crc kubenswrapper[4826]: I0131 07:53:52.256693 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aa070000-7471-4ca5-be06-fecc9ade01cc" containerName="sg-core" containerID="cri-o://f9951f160e424196ebfaad13dcf7673cb65b037f73fbf4a7ae5c323cff2298fb" gracePeriod=30 Jan 31 07:53:52 crc kubenswrapper[4826]: I0131 07:53:52.256738 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aa070000-7471-4ca5-be06-fecc9ade01cc" containerName="ceilometer-notification-agent" containerID="cri-o://17674081ae36580c2b2b7856a9e11a4918d86dfc10b600f73f0a93fc2e589d0c" gracePeriod=30 Jan 31 07:53:52 crc kubenswrapper[4826]: I0131 07:53:52.270768 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-9tmn2" podStartSLOduration=11.140987741 podStartE2EDuration="1m6.270746008s" podCreationTimestamp="2026-01-31 07:52:46 +0000 UTC" 
firstStartedPulling="2026-01-31 07:52:48.016628919 +0000 UTC m=+999.870515278" lastFinishedPulling="2026-01-31 07:53:43.146387186 +0000 UTC m=+1055.000273545" observedRunningTime="2026-01-31 07:53:52.266114927 +0000 UTC m=+1064.120001296" watchObservedRunningTime="2026-01-31 07:53:52.270746008 +0000 UTC m=+1064.124632367" Jan 31 07:53:52 crc kubenswrapper[4826]: I0131 07:53:52.272698 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7wvpt" event={"ID":"a88d711d-a1fe-4114-955e-167684da9ecb","Type":"ContainerStarted","Data":"6bfb7cae2f9574ea8239470c7fa42ec08a47a4f302cb2ddc409a1e7d904a9217"} Jan 31 07:53:52 crc kubenswrapper[4826]: I0131 07:53:52.314701 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.740345525 podStartE2EDuration="1m6.314683357s" podCreationTimestamp="2026-01-31 07:52:46 +0000 UTC" firstStartedPulling="2026-01-31 07:52:48.016110265 +0000 UTC m=+999.869996624" lastFinishedPulling="2026-01-31 07:53:50.590448097 +0000 UTC m=+1062.444334456" observedRunningTime="2026-01-31 07:53:52.304314775 +0000 UTC m=+1064.158201144" watchObservedRunningTime="2026-01-31 07:53:52.314683357 +0000 UTC m=+1064.168569716" Jan 31 07:53:52 crc kubenswrapper[4826]: I0131 07:53:52.326252 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-7wvpt" podStartSLOduration=3.770745513 podStartE2EDuration="1m6.326231313s" podCreationTimestamp="2026-01-31 07:52:46 +0000 UTC" firstStartedPulling="2026-01-31 07:52:48.035823561 +0000 UTC m=+999.889709920" lastFinishedPulling="2026-01-31 07:53:50.591309361 +0000 UTC m=+1062.445195720" observedRunningTime="2026-01-31 07:53:52.323767853 +0000 UTC m=+1064.177654222" watchObservedRunningTime="2026-01-31 07:53:52.326231313 +0000 UTC m=+1064.180117672" Jan 31 07:53:52 crc kubenswrapper[4826]: E0131 07:53:52.447389 4826 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Jan 31 07:53:52 crc kubenswrapper[4826]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/cb06446e-3fa7-4cda-95a2-073de531249a/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 31 07:53:52 crc kubenswrapper[4826]: > podSandboxID="db94498e6d357547e4ed5ba028b1772dbd91c92d63ea65916a55c87efbb55e74" Jan 31 07:53:52 crc kubenswrapper[4826]: E0131 07:53:52.447988 4826 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 31 07:53:52 crc kubenswrapper[4826]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n56fh88h689h75h5f9h595h5c6h5fbhdh54bh5f7h94h89hd9hd7h65bh655h5bdh649hcch594h696h685hf9h54dh5dch57fh556h5ddh57dh556h665q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cb6bt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7f46f79845-cr627_openstack(cb06446e-3fa7-4cda-95a2-073de531249a): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/cb06446e-3fa7-4cda-95a2-073de531249a/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 31 07:53:52 crc kubenswrapper[4826]: > logger="UnhandledError" Jan 31 07:53:52 crc kubenswrapper[4826]: E0131 07:53:52.449116 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/cb06446e-3fa7-4cda-95a2-073de531249a/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-7f46f79845-cr627" podUID="cb06446e-3fa7-4cda-95a2-073de531249a" Jan 31 07:53:52 crc kubenswrapper[4826]: I0131 07:53:52.756465 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-68cf7546dd-6tbvn"] 
Jan 31 07:53:52 crc kubenswrapper[4826]: I0131 07:53:52.830110 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-647f745999-xttjx"] Jan 31 07:53:52 crc kubenswrapper[4826]: I0131 07:53:52.835741 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-647f745999-xttjx" Jan 31 07:53:52 crc kubenswrapper[4826]: I0131 07:53:52.846657 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 31 07:53:52 crc kubenswrapper[4826]: I0131 07:53:52.847649 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 31 07:53:52 crc kubenswrapper[4826]: I0131 07:53:52.847957 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-647f745999-xttjx"] Jan 31 07:53:52 crc kubenswrapper[4826]: I0131 07:53:52.921316 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bed86129-6185-49ee-9e65-0e4767f815fd-internal-tls-certs\") pod \"neutron-647f745999-xttjx\" (UID: \"bed86129-6185-49ee-9e65-0e4767f815fd\") " pod="openstack/neutron-647f745999-xttjx" Jan 31 07:53:52 crc kubenswrapper[4826]: I0131 07:53:52.921461 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bed86129-6185-49ee-9e65-0e4767f815fd-config\") pod \"neutron-647f745999-xttjx\" (UID: \"bed86129-6185-49ee-9e65-0e4767f815fd\") " pod="openstack/neutron-647f745999-xttjx" Jan 31 07:53:52 crc kubenswrapper[4826]: I0131 07:53:52.921559 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bed86129-6185-49ee-9e65-0e4767f815fd-public-tls-certs\") pod \"neutron-647f745999-xttjx\" (UID: \"bed86129-6185-49ee-9e65-0e4767f815fd\") " pod="openstack/neutron-647f745999-xttjx" Jan 31 07:53:52 crc kubenswrapper[4826]: I0131 07:53:52.921695 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bed86129-6185-49ee-9e65-0e4767f815fd-ovndb-tls-certs\") pod \"neutron-647f745999-xttjx\" (UID: \"bed86129-6185-49ee-9e65-0e4767f815fd\") " pod="openstack/neutron-647f745999-xttjx" Jan 31 07:53:52 crc kubenswrapper[4826]: I0131 07:53:52.921820 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bed86129-6185-49ee-9e65-0e4767f815fd-httpd-config\") pod \"neutron-647f745999-xttjx\" (UID: \"bed86129-6185-49ee-9e65-0e4767f815fd\") " pod="openstack/neutron-647f745999-xttjx" Jan 31 07:53:52 crc kubenswrapper[4826]: I0131 07:53:52.921948 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bed86129-6185-49ee-9e65-0e4767f815fd-combined-ca-bundle\") pod \"neutron-647f745999-xttjx\" (UID: \"bed86129-6185-49ee-9e65-0e4767f815fd\") " pod="openstack/neutron-647f745999-xttjx" Jan 31 07:53:52 crc kubenswrapper[4826]: I0131 07:53:52.922110 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8t99\" (UniqueName: \"kubernetes.io/projected/bed86129-6185-49ee-9e65-0e4767f815fd-kube-api-access-q8t99\") pod \"neutron-647f745999-xttjx\" (UID: 
\"bed86129-6185-49ee-9e65-0e4767f815fd\") " pod="openstack/neutron-647f745999-xttjx" Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 07:53:53.022860 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8t99\" (UniqueName: \"kubernetes.io/projected/bed86129-6185-49ee-9e65-0e4767f815fd-kube-api-access-q8t99\") pod \"neutron-647f745999-xttjx\" (UID: \"bed86129-6185-49ee-9e65-0e4767f815fd\") " pod="openstack/neutron-647f745999-xttjx" Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 07:53:53.022963 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bed86129-6185-49ee-9e65-0e4767f815fd-internal-tls-certs\") pod \"neutron-647f745999-xttjx\" (UID: \"bed86129-6185-49ee-9e65-0e4767f815fd\") " pod="openstack/neutron-647f745999-xttjx" Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 07:53:53.023013 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bed86129-6185-49ee-9e65-0e4767f815fd-config\") pod \"neutron-647f745999-xttjx\" (UID: \"bed86129-6185-49ee-9e65-0e4767f815fd\") " pod="openstack/neutron-647f745999-xttjx" Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 07:53:53.023038 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bed86129-6185-49ee-9e65-0e4767f815fd-public-tls-certs\") pod \"neutron-647f745999-xttjx\" (UID: \"bed86129-6185-49ee-9e65-0e4767f815fd\") " pod="openstack/neutron-647f745999-xttjx" Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 07:53:53.023076 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bed86129-6185-49ee-9e65-0e4767f815fd-ovndb-tls-certs\") pod \"neutron-647f745999-xttjx\" (UID: \"bed86129-6185-49ee-9e65-0e4767f815fd\") " pod="openstack/neutron-647f745999-xttjx" Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 07:53:53.023130 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bed86129-6185-49ee-9e65-0e4767f815fd-httpd-config\") pod \"neutron-647f745999-xttjx\" (UID: \"bed86129-6185-49ee-9e65-0e4767f815fd\") " pod="openstack/neutron-647f745999-xttjx" Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 07:53:53.023180 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bed86129-6185-49ee-9e65-0e4767f815fd-combined-ca-bundle\") pod \"neutron-647f745999-xttjx\" (UID: \"bed86129-6185-49ee-9e65-0e4767f815fd\") " pod="openstack/neutron-647f745999-xttjx" Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 07:53:53.027552 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bed86129-6185-49ee-9e65-0e4767f815fd-ovndb-tls-certs\") pod \"neutron-647f745999-xttjx\" (UID: \"bed86129-6185-49ee-9e65-0e4767f815fd\") " pod="openstack/neutron-647f745999-xttjx" Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 07:53:53.029239 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bed86129-6185-49ee-9e65-0e4767f815fd-internal-tls-certs\") pod \"neutron-647f745999-xttjx\" (UID: \"bed86129-6185-49ee-9e65-0e4767f815fd\") " pod="openstack/neutron-647f745999-xttjx" Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 
07:53:53.030386 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bed86129-6185-49ee-9e65-0e4767f815fd-httpd-config\") pod \"neutron-647f745999-xttjx\" (UID: \"bed86129-6185-49ee-9e65-0e4767f815fd\") " pod="openstack/neutron-647f745999-xttjx" Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 07:53:53.031867 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/bed86129-6185-49ee-9e65-0e4767f815fd-config\") pod \"neutron-647f745999-xttjx\" (UID: \"bed86129-6185-49ee-9e65-0e4767f815fd\") " pod="openstack/neutron-647f745999-xttjx" Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 07:53:53.031854 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bed86129-6185-49ee-9e65-0e4767f815fd-public-tls-certs\") pod \"neutron-647f745999-xttjx\" (UID: \"bed86129-6185-49ee-9e65-0e4767f815fd\") " pod="openstack/neutron-647f745999-xttjx" Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 07:53:53.033756 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bed86129-6185-49ee-9e65-0e4767f815fd-combined-ca-bundle\") pod \"neutron-647f745999-xttjx\" (UID: \"bed86129-6185-49ee-9e65-0e4767f815fd\") " pod="openstack/neutron-647f745999-xttjx" Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 07:53:53.042986 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8t99\" (UniqueName: \"kubernetes.io/projected/bed86129-6185-49ee-9e65-0e4767f815fd-kube-api-access-q8t99\") pod \"neutron-647f745999-xttjx\" (UID: \"bed86129-6185-49ee-9e65-0e4767f815fd\") " pod="openstack/neutron-647f745999-xttjx" Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 07:53:53.153019 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-647f745999-xttjx" Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 07:53:53.288584 4826 generic.go:334] "Generic (PLEG): container finished" podID="e3f57b07-aade-4cd7-9ebf-b374396665b7" containerID="841a501e10ace4bfde1565ac372b39983c2a36ac6ecaff89fed50ed556b0bc46" exitCode=0 Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 07:53:53.288871 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-c5gj8" event={"ID":"e3f57b07-aade-4cd7-9ebf-b374396665b7","Type":"ContainerDied","Data":"841a501e10ace4bfde1565ac372b39983c2a36ac6ecaff89fed50ed556b0bc46"} Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 07:53:53.303287 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68cf7546dd-6tbvn" event={"ID":"313879a0-5213-4c68-aea1-ba01d960b862","Type":"ContainerStarted","Data":"1e4614c706b3579d5a422e7d0c3032ddb55e30f30a76a1c0282d344336d32834"} Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 07:53:53.303335 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68cf7546dd-6tbvn" event={"ID":"313879a0-5213-4c68-aea1-ba01d960b862","Type":"ContainerStarted","Data":"9bd9e8f153ab6c9e67db29717d848102a56a73a6967314d527fb428a5d54c076"} Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 07:53:53.303351 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68cf7546dd-6tbvn" event={"ID":"313879a0-5213-4c68-aea1-ba01d960b862","Type":"ContainerStarted","Data":"5be7d321f06e02fcdeb4a44f02f4a91c2af986792f910a68c979f796ba053ff0"} Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 07:53:53.304400 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-68cf7546dd-6tbvn" Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 07:53:53.325180 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-568c588dfd-kgqtq" event={"ID":"2aea6e8c-06e5-4b80-8133-e8fe6ca42ed2","Type":"ContainerStarted","Data":"5ef8f5aa9df8eda056c05cd75c4c2eabc51f52865b27d618afb2935988be05b0"} Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 07:53:53.325225 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-568c588dfd-kgqtq" event={"ID":"2aea6e8c-06e5-4b80-8133-e8fe6ca42ed2","Type":"ContainerStarted","Data":"ce194d167dfbcd5c8496ba16ae2b1f5d1f61eb03fcca819c9156acc40fe685ff"} Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 07:53:53.345314 4826 generic.go:334] "Generic (PLEG): container finished" podID="aa070000-7471-4ca5-be06-fecc9ade01cc" containerID="f919b8bf555818e47450e1ebb47ad74a022b57b3566d4435dbf20addb484b748" exitCode=0 Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 07:53:53.345348 4826 generic.go:334] "Generic (PLEG): container finished" podID="aa070000-7471-4ca5-be06-fecc9ade01cc" containerID="f9951f160e424196ebfaad13dcf7673cb65b037f73fbf4a7ae5c323cff2298fb" exitCode=2 Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 07:53:53.345359 4826 generic.go:334] "Generic (PLEG): container finished" podID="aa070000-7471-4ca5-be06-fecc9ade01cc" containerID="d5cd64b5e2b6e6e4bc49e7cec4d47b1dd0f9d4639b8c152986c303fc79a0bb22" exitCode=0 Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 07:53:53.345441 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa070000-7471-4ca5-be06-fecc9ade01cc","Type":"ContainerDied","Data":"f919b8bf555818e47450e1ebb47ad74a022b57b3566d4435dbf20addb484b748"} Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 07:53:53.345475 4826 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa070000-7471-4ca5-be06-fecc9ade01cc","Type":"ContainerDied","Data":"f9951f160e424196ebfaad13dcf7673cb65b037f73fbf4a7ae5c323cff2298fb"} Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 07:53:53.345488 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa070000-7471-4ca5-be06-fecc9ade01cc","Type":"ContainerDied","Data":"d5cd64b5e2b6e6e4bc49e7cec4d47b1dd0f9d4639b8c152986c303fc79a0bb22"} Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 07:53:53.359437 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-cc9fbb874-77xn5" event={"ID":"5b0711e4-1e31-482b-b555-5504ee5f62a7","Type":"ContainerStarted","Data":"01c0c03487677908f511711267ba7c8af4a1feff1268f2c74475f1328de58aa4"} Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 07:53:53.359579 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-cc9fbb874-77xn5" Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 07:53:53.359607 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-cc9fbb874-77xn5" Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 07:53:53.369843 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-98b594959-ps2jz" event={"ID":"8a2e144b-d873-4386-9328-24f745d25df7","Type":"ContainerStarted","Data":"38627639a6e7b6f59d988952503ddb48f47a5f39da488c4c3dd3732db19472c4"} Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 07:53:53.369887 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-98b594959-ps2jz" event={"ID":"8a2e144b-d873-4386-9328-24f745d25df7","Type":"ContainerStarted","Data":"90a9c8ba212dac4e2ab2621fcd92536fd4e8c44e38a7275e4b45dc51276a8b2d"} Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 07:53:53.378956 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7fbf887dc4-4528v" event={"ID":"91ee6dfb-fe40-4e3f-8719-6604432e07f5","Type":"ContainerStarted","Data":"4bc30f42f04ed662af5dcc2ae72721eb59abe3ba1405ec2e1c15b8de6d51acc2"} Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 07:53:53.379047 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7fbf887dc4-4528v" Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 07:53:53.391584 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-68cf7546dd-6tbvn" podStartSLOduration=3.391562029 podStartE2EDuration="3.391562029s" podCreationTimestamp="2026-01-31 07:53:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:53:53.335214649 +0000 UTC m=+1065.189101008" watchObservedRunningTime="2026-01-31 07:53:53.391562029 +0000 UTC m=+1065.245448388" Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 07:53:53.425694 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-568c588dfd-kgqtq" podStartSLOduration=9.584609542 podStartE2EDuration="12.425673191s" podCreationTimestamp="2026-01-31 07:53:41 +0000 UTC" firstStartedPulling="2026-01-31 07:53:49.30209446 +0000 UTC m=+1061.155980819" lastFinishedPulling="2026-01-31 07:53:52.143158109 +0000 UTC m=+1063.997044468" observedRunningTime="2026-01-31 07:53:53.374898358 +0000 UTC m=+1065.228784717" watchObservedRunningTime="2026-01-31 07:53:53.425673191 +0000 UTC m=+1065.279559550" Jan 31 07:53:53 crc kubenswrapper[4826]: 
I0131 07:53:53.442811 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-cc9fbb874-77xn5" podStartSLOduration=12.442792983 podStartE2EDuration="12.442792983s" podCreationTimestamp="2026-01-31 07:53:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:53:53.400557152 +0000 UTC m=+1065.254443511" watchObservedRunningTime="2026-01-31 07:53:53.442792983 +0000 UTC m=+1065.296679342" Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 07:53:53.463093 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-98b594959-ps2jz" podStartSLOduration=9.301492947 podStartE2EDuration="12.463075125s" podCreationTimestamp="2026-01-31 07:53:41 +0000 UTC" firstStartedPulling="2026-01-31 07:53:49.302477891 +0000 UTC m=+1061.156364250" lastFinishedPulling="2026-01-31 07:53:52.464060069 +0000 UTC m=+1064.317946428" observedRunningTime="2026-01-31 07:53:53.425435504 +0000 UTC m=+1065.279321863" watchObservedRunningTime="2026-01-31 07:53:53.463075125 +0000 UTC m=+1065.316961484" Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 07:53:53.472074 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7fbf887dc4-4528v" podStartSLOduration=9.472059939 podStartE2EDuration="9.472059939s" podCreationTimestamp="2026-01-31 07:53:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:53:53.456573132 +0000 UTC m=+1065.310459501" watchObservedRunningTime="2026-01-31 07:53:53.472059939 +0000 UTC m=+1065.325946298" Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 07:53:53.787314 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-647f745999-xttjx"] Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 07:53:53.793050 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f46f79845-cr627" Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 07:53:53.844726 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb06446e-3fa7-4cda-95a2-073de531249a-ovsdbserver-nb\") pod \"cb06446e-3fa7-4cda-95a2-073de531249a\" (UID: \"cb06446e-3fa7-4cda-95a2-073de531249a\") " Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 07:53:53.844798 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb06446e-3fa7-4cda-95a2-073de531249a-dns-svc\") pod \"cb06446e-3fa7-4cda-95a2-073de531249a\" (UID: \"cb06446e-3fa7-4cda-95a2-073de531249a\") " Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 07:53:53.844821 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb06446e-3fa7-4cda-95a2-073de531249a-config\") pod \"cb06446e-3fa7-4cda-95a2-073de531249a\" (UID: \"cb06446e-3fa7-4cda-95a2-073de531249a\") " Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 07:53:53.844895 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cb6bt\" (UniqueName: \"kubernetes.io/projected/cb06446e-3fa7-4cda-95a2-073de531249a-kube-api-access-cb6bt\") pod \"cb06446e-3fa7-4cda-95a2-073de531249a\" (UID: \"cb06446e-3fa7-4cda-95a2-073de531249a\") " Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 07:53:53.844927 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb06446e-3fa7-4cda-95a2-073de531249a-ovsdbserver-sb\") pod \"cb06446e-3fa7-4cda-95a2-073de531249a\" (UID: \"cb06446e-3fa7-4cda-95a2-073de531249a\") " Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 07:53:53.849062 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb06446e-3fa7-4cda-95a2-073de531249a-kube-api-access-cb6bt" (OuterVolumeSpecName: "kube-api-access-cb6bt") pod "cb06446e-3fa7-4cda-95a2-073de531249a" (UID: "cb06446e-3fa7-4cda-95a2-073de531249a"). InnerVolumeSpecName "kube-api-access-cb6bt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 07:53:53.909225 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb06446e-3fa7-4cda-95a2-073de531249a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cb06446e-3fa7-4cda-95a2-073de531249a" (UID: "cb06446e-3fa7-4cda-95a2-073de531249a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 07:53:53.912547 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb06446e-3fa7-4cda-95a2-073de531249a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cb06446e-3fa7-4cda-95a2-073de531249a" (UID: "cb06446e-3fa7-4cda-95a2-073de531249a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 07:53:53.913584 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb06446e-3fa7-4cda-95a2-073de531249a-config" (OuterVolumeSpecName: "config") pod "cb06446e-3fa7-4cda-95a2-073de531249a" (UID: "cb06446e-3fa7-4cda-95a2-073de531249a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 07:53:53.917001 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb06446e-3fa7-4cda-95a2-073de531249a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cb06446e-3fa7-4cda-95a2-073de531249a" (UID: "cb06446e-3fa7-4cda-95a2-073de531249a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 07:53:53.947268 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cb6bt\" (UniqueName: \"kubernetes.io/projected/cb06446e-3fa7-4cda-95a2-073de531249a-kube-api-access-cb6bt\") on node \"crc\" DevicePath \"\"" Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 07:53:53.947310 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb06446e-3fa7-4cda-95a2-073de531249a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 07:53:53.947325 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb06446e-3fa7-4cda-95a2-073de531249a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 07:53:53.947338 4826 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb06446e-3fa7-4cda-95a2-073de531249a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 07:53:53 crc kubenswrapper[4826]: I0131 07:53:53.947350 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb06446e-3fa7-4cda-95a2-073de531249a-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:53:54 crc kubenswrapper[4826]: I0131 07:53:54.387617 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-c5gj8" event={"ID":"e3f57b07-aade-4cd7-9ebf-b374396665b7","Type":"ContainerStarted","Data":"94b18e8e8d4be8b3db56ba50737115d0728a606a17639b3d9e51a46cd703c362"} Jan 31 07:53:54 crc kubenswrapper[4826]: I0131 07:53:54.467213 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-68cf7546dd-6tbvn_313879a0-5213-4c68-aea1-ba01d960b862/neutron-httpd/0.log" Jan 31 07:53:54 crc kubenswrapper[4826]: I0131 07:53:54.469223 4826 generic.go:334] "Generic (PLEG): container finished" podID="313879a0-5213-4c68-aea1-ba01d960b862" containerID="1e4614c706b3579d5a422e7d0c3032ddb55e30f30a76a1c0282d344336d32834" exitCode=1 Jan 31 07:53:54 crc kubenswrapper[4826]: I0131 07:53:54.469272 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68cf7546dd-6tbvn" event={"ID":"313879a0-5213-4c68-aea1-ba01d960b862","Type":"ContainerDied","Data":"1e4614c706b3579d5a422e7d0c3032ddb55e30f30a76a1c0282d344336d32834"} Jan 31 07:53:54 crc kubenswrapper[4826]: I0131 07:53:54.470110 4826 scope.go:117] "RemoveContainer" containerID="1e4614c706b3579d5a422e7d0c3032ddb55e30f30a76a1c0282d344336d32834" Jan 31 07:53:54 crc kubenswrapper[4826]: I0131 07:53:54.473034 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f46f79845-cr627" event={"ID":"cb06446e-3fa7-4cda-95a2-073de531249a","Type":"ContainerDied","Data":"db94498e6d357547e4ed5ba028b1772dbd91c92d63ea65916a55c87efbb55e74"} Jan 31 07:53:54 crc kubenswrapper[4826]: I0131 07:53:54.473081 4826 scope.go:117] "RemoveContainer" 
containerID="a6e9136843ebe727380a0d57483475daa3a40c7b498dc11e19e528324c4e60e7" Jan 31 07:53:54 crc kubenswrapper[4826]: I0131 07:53:54.473140 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f46f79845-cr627" Jan 31 07:53:54 crc kubenswrapper[4826]: I0131 07:53:54.476291 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-647f745999-xttjx" event={"ID":"bed86129-6185-49ee-9e65-0e4767f815fd","Type":"ContainerStarted","Data":"3ac9312867da6870ee752d29bd5e02297e3ab52406adfb83593925673327214e"} Jan 31 07:53:54 crc kubenswrapper[4826]: I0131 07:53:54.476396 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-647f745999-xttjx" event={"ID":"bed86129-6185-49ee-9e65-0e4767f815fd","Type":"ContainerStarted","Data":"0ef2f28bd818e01084186eaf631aa4b9c299afc224ece5c9da0cbd86406f793a"} Jan 31 07:53:54 crc kubenswrapper[4826]: I0131 07:53:54.477592 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7fbf887dc4-4528v" Jan 31 07:53:54 crc kubenswrapper[4826]: I0131 07:53:54.550412 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f46f79845-cr627"] Jan 31 07:53:54 crc kubenswrapper[4826]: I0131 07:53:54.557707 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f46f79845-cr627"] Jan 31 07:53:54 crc kubenswrapper[4826]: I0131 07:53:54.818795 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb06446e-3fa7-4cda-95a2-073de531249a" path="/var/lib/kubelet/pods/cb06446e-3fa7-4cda-95a2-073de531249a/volumes" Jan 31 07:53:55 crc kubenswrapper[4826]: I0131 07:53:55.486556 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-68cf7546dd-6tbvn_313879a0-5213-4c68-aea1-ba01d960b862/neutron-httpd/1.log" Jan 31 07:53:55 crc kubenswrapper[4826]: I0131 07:53:55.487378 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-68cf7546dd-6tbvn_313879a0-5213-4c68-aea1-ba01d960b862/neutron-httpd/0.log" Jan 31 07:53:55 crc kubenswrapper[4826]: I0131 07:53:55.487740 4826 generic.go:334] "Generic (PLEG): container finished" podID="313879a0-5213-4c68-aea1-ba01d960b862" containerID="c01120e09b8903b374262a0150835f4359857d64714f5cbc119dae4f0735dd62" exitCode=1 Jan 31 07:53:55 crc kubenswrapper[4826]: I0131 07:53:55.487811 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68cf7546dd-6tbvn" event={"ID":"313879a0-5213-4c68-aea1-ba01d960b862","Type":"ContainerDied","Data":"c01120e09b8903b374262a0150835f4359857d64714f5cbc119dae4f0735dd62"} Jan 31 07:53:55 crc kubenswrapper[4826]: I0131 07:53:55.487849 4826 scope.go:117] "RemoveContainer" containerID="1e4614c706b3579d5a422e7d0c3032ddb55e30f30a76a1c0282d344336d32834" Jan 31 07:53:55 crc kubenswrapper[4826]: I0131 07:53:55.489090 4826 scope.go:117] "RemoveContainer" containerID="c01120e09b8903b374262a0150835f4359857d64714f5cbc119dae4f0735dd62" Jan 31 07:53:55 crc kubenswrapper[4826]: E0131 07:53:55.489718 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"neutron-httpd\" with CrashLoopBackOff: \"back-off 10s restarting failed container=neutron-httpd pod=neutron-68cf7546dd-6tbvn_openstack(313879a0-5213-4c68-aea1-ba01d960b862)\"" pod="openstack/neutron-68cf7546dd-6tbvn" podUID="313879a0-5213-4c68-aea1-ba01d960b862" Jan 31 07:53:55 crc kubenswrapper[4826]: I0131 07:53:55.492950 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-647f745999-xttjx" event={"ID":"bed86129-6185-49ee-9e65-0e4767f815fd","Type":"ContainerStarted","Data":"6c3952560ad474dfa99738e0cd56a1d533b50b5e41f53b9504f763fec3476afa"} Jan 31 07:53:55 crc kubenswrapper[4826]: I0131 07:53:55.493941 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-647f745999-xttjx" Jan 31 07:53:55 crc kubenswrapper[4826]: I0131 07:53:55.494091 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-869f779d85-c5gj8" Jan 31 07:53:55 crc kubenswrapper[4826]: I0131 07:53:55.537421 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-869f779d85-c5gj8" podStartSLOduration=5.53740395 podStartE2EDuration="5.53740395s" podCreationTimestamp="2026-01-31 07:53:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:53:55.526875813 +0000 UTC m=+1067.380762192" watchObservedRunningTime="2026-01-31 07:53:55.53740395 +0000 UTC m=+1067.391290309" Jan 31 07:53:55 crc kubenswrapper[4826]: I0131 07:53:55.555908 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-647f745999-xttjx" podStartSLOduration=3.555885531 podStartE2EDuration="3.555885531s" podCreationTimestamp="2026-01-31 07:53:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:53:55.547171325 +0000 UTC m=+1067.401057684" watchObservedRunningTime="2026-01-31 07:53:55.555885531 +0000 UTC m=+1067.409771890" Jan 31 07:53:56 crc kubenswrapper[4826]: I0131 07:53:56.509340 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-68cf7546dd-6tbvn_313879a0-5213-4c68-aea1-ba01d960b862/neutron-httpd/1.log" Jan 31 07:53:56 crc kubenswrapper[4826]: I0131 07:53:56.510860 4826 scope.go:117] "RemoveContainer" containerID="c01120e09b8903b374262a0150835f4359857d64714f5cbc119dae4f0735dd62" Jan 31 07:53:56 crc kubenswrapper[4826]: E0131 07:53:56.511140 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"neutron-httpd\" with CrashLoopBackOff: \"back-off 10s restarting failed container=neutron-httpd pod=neutron-68cf7546dd-6tbvn_openstack(313879a0-5213-4c68-aea1-ba01d960b862)\"" pod="openstack/neutron-68cf7546dd-6tbvn" podUID="313879a0-5213-4c68-aea1-ba01d960b862" Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.372589 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7fbf887dc4-4528v" Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.462565 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.520796 4826 generic.go:334] "Generic (PLEG): container finished" podID="aa070000-7471-4ca5-be06-fecc9ade01cc" containerID="17674081ae36580c2b2b7856a9e11a4918d86dfc10b600f73f0a93fc2e589d0c" exitCode=0 Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.520836 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa070000-7471-4ca5-be06-fecc9ade01cc","Type":"ContainerDied","Data":"17674081ae36580c2b2b7856a9e11a4918d86dfc10b600f73f0a93fc2e589d0c"} Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.520868 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.520882 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa070000-7471-4ca5-be06-fecc9ade01cc","Type":"ContainerDied","Data":"9db15d96000174118790939aeeb0319d12dae90660f2da384d425aa9b3bb03c9"} Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.520902 4826 scope.go:117] "RemoveContainer" containerID="f919b8bf555818e47450e1ebb47ad74a022b57b3566d4435dbf20addb484b748" Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.544758 4826 scope.go:117] "RemoveContainer" containerID="f9951f160e424196ebfaad13dcf7673cb65b037f73fbf4a7ae5c323cff2298fb" Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.607044 4826 scope.go:117] "RemoveContainer" containerID="17674081ae36580c2b2b7856a9e11a4918d86dfc10b600f73f0a93fc2e589d0c" Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.647755 4826 scope.go:117] "RemoveContainer" containerID="d5cd64b5e2b6e6e4bc49e7cec4d47b1dd0f9d4639b8c152986c303fc79a0bb22" Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.647836 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa070000-7471-4ca5-be06-fecc9ade01cc-scripts\") pod \"aa070000-7471-4ca5-be06-fecc9ade01cc\" (UID: \"aa070000-7471-4ca5-be06-fecc9ade01cc\") " Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.648156 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jhgn\" (UniqueName: \"kubernetes.io/projected/aa070000-7471-4ca5-be06-fecc9ade01cc-kube-api-access-2jhgn\") pod \"aa070000-7471-4ca5-be06-fecc9ade01cc\" (UID: \"aa070000-7471-4ca5-be06-fecc9ade01cc\") " Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.648243 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa070000-7471-4ca5-be06-fecc9ade01cc-combined-ca-bundle\") pod \"aa070000-7471-4ca5-be06-fecc9ade01cc\" (UID: \"aa070000-7471-4ca5-be06-fecc9ade01cc\") " Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.648280 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa070000-7471-4ca5-be06-fecc9ade01cc-sg-core-conf-yaml\") pod \"aa070000-7471-4ca5-be06-fecc9ade01cc\" (UID: \"aa070000-7471-4ca5-be06-fecc9ade01cc\") " Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.648531 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa070000-7471-4ca5-be06-fecc9ade01cc-run-httpd\") pod \"aa070000-7471-4ca5-be06-fecc9ade01cc\" (UID: \"aa070000-7471-4ca5-be06-fecc9ade01cc\") " Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.648668 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa070000-7471-4ca5-be06-fecc9ade01cc-log-httpd\") pod \"aa070000-7471-4ca5-be06-fecc9ade01cc\" (UID: \"aa070000-7471-4ca5-be06-fecc9ade01cc\") " Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.648836 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa070000-7471-4ca5-be06-fecc9ade01cc-config-data\") pod \"aa070000-7471-4ca5-be06-fecc9ade01cc\" (UID: \"aa070000-7471-4ca5-be06-fecc9ade01cc\") " Jan 31 07:53:57 crc 
kubenswrapper[4826]: I0131 07:53:57.649649 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa070000-7471-4ca5-be06-fecc9ade01cc-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "aa070000-7471-4ca5-be06-fecc9ade01cc" (UID: "aa070000-7471-4ca5-be06-fecc9ade01cc"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.649736 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa070000-7471-4ca5-be06-fecc9ade01cc-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "aa070000-7471-4ca5-be06-fecc9ade01cc" (UID: "aa070000-7471-4ca5-be06-fecc9ade01cc"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.650629 4826 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa070000-7471-4ca5-be06-fecc9ade01cc-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.651655 4826 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa070000-7471-4ca5-be06-fecc9ade01cc-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.656378 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa070000-7471-4ca5-be06-fecc9ade01cc-kube-api-access-2jhgn" (OuterVolumeSpecName: "kube-api-access-2jhgn") pod "aa070000-7471-4ca5-be06-fecc9ade01cc" (UID: "aa070000-7471-4ca5-be06-fecc9ade01cc"). InnerVolumeSpecName "kube-api-access-2jhgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.657083 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa070000-7471-4ca5-be06-fecc9ade01cc-scripts" (OuterVolumeSpecName: "scripts") pod "aa070000-7471-4ca5-be06-fecc9ade01cc" (UID: "aa070000-7471-4ca5-be06-fecc9ade01cc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.681891 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa070000-7471-4ca5-be06-fecc9ade01cc-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "aa070000-7471-4ca5-be06-fecc9ade01cc" (UID: "aa070000-7471-4ca5-be06-fecc9ade01cc"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.724057 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa070000-7471-4ca5-be06-fecc9ade01cc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aa070000-7471-4ca5-be06-fecc9ade01cc" (UID: "aa070000-7471-4ca5-be06-fecc9ade01cc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.754390 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa070000-7471-4ca5-be06-fecc9ade01cc-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.754425 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jhgn\" (UniqueName: \"kubernetes.io/projected/aa070000-7471-4ca5-be06-fecc9ade01cc-kube-api-access-2jhgn\") on node \"crc\" DevicePath \"\"" Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.754442 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa070000-7471-4ca5-be06-fecc9ade01cc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.754453 4826 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa070000-7471-4ca5-be06-fecc9ade01cc-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.765269 4826 scope.go:117] "RemoveContainer" containerID="f919b8bf555818e47450e1ebb47ad74a022b57b3566d4435dbf20addb484b748" Jan 31 07:53:57 crc kubenswrapper[4826]: E0131 07:53:57.765760 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f919b8bf555818e47450e1ebb47ad74a022b57b3566d4435dbf20addb484b748\": container with ID starting with f919b8bf555818e47450e1ebb47ad74a022b57b3566d4435dbf20addb484b748 not found: ID does not exist" containerID="f919b8bf555818e47450e1ebb47ad74a022b57b3566d4435dbf20addb484b748" Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.765809 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f919b8bf555818e47450e1ebb47ad74a022b57b3566d4435dbf20addb484b748"} err="failed to get container status \"f919b8bf555818e47450e1ebb47ad74a022b57b3566d4435dbf20addb484b748\": rpc error: code = NotFound desc = could not find container \"f919b8bf555818e47450e1ebb47ad74a022b57b3566d4435dbf20addb484b748\": container with ID starting with f919b8bf555818e47450e1ebb47ad74a022b57b3566d4435dbf20addb484b748 not found: ID does not exist" Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.765835 4826 scope.go:117] "RemoveContainer" containerID="f9951f160e424196ebfaad13dcf7673cb65b037f73fbf4a7ae5c323cff2298fb" Jan 31 07:53:57 crc kubenswrapper[4826]: E0131 07:53:57.766212 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9951f160e424196ebfaad13dcf7673cb65b037f73fbf4a7ae5c323cff2298fb\": container with ID starting with f9951f160e424196ebfaad13dcf7673cb65b037f73fbf4a7ae5c323cff2298fb not found: ID does not exist" containerID="f9951f160e424196ebfaad13dcf7673cb65b037f73fbf4a7ae5c323cff2298fb" Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.766245 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9951f160e424196ebfaad13dcf7673cb65b037f73fbf4a7ae5c323cff2298fb"} err="failed to get container status \"f9951f160e424196ebfaad13dcf7673cb65b037f73fbf4a7ae5c323cff2298fb\": rpc error: code = NotFound desc = could not find container \"f9951f160e424196ebfaad13dcf7673cb65b037f73fbf4a7ae5c323cff2298fb\": container with ID starting with 
f9951f160e424196ebfaad13dcf7673cb65b037f73fbf4a7ae5c323cff2298fb not found: ID does not exist" Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.766271 4826 scope.go:117] "RemoveContainer" containerID="17674081ae36580c2b2b7856a9e11a4918d86dfc10b600f73f0a93fc2e589d0c" Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.766396 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa070000-7471-4ca5-be06-fecc9ade01cc-config-data" (OuterVolumeSpecName: "config-data") pod "aa070000-7471-4ca5-be06-fecc9ade01cc" (UID: "aa070000-7471-4ca5-be06-fecc9ade01cc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:53:57 crc kubenswrapper[4826]: E0131 07:53:57.766590 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17674081ae36580c2b2b7856a9e11a4918d86dfc10b600f73f0a93fc2e589d0c\": container with ID starting with 17674081ae36580c2b2b7856a9e11a4918d86dfc10b600f73f0a93fc2e589d0c not found: ID does not exist" containerID="17674081ae36580c2b2b7856a9e11a4918d86dfc10b600f73f0a93fc2e589d0c" Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.766639 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17674081ae36580c2b2b7856a9e11a4918d86dfc10b600f73f0a93fc2e589d0c"} err="failed to get container status \"17674081ae36580c2b2b7856a9e11a4918d86dfc10b600f73f0a93fc2e589d0c\": rpc error: code = NotFound desc = could not find container \"17674081ae36580c2b2b7856a9e11a4918d86dfc10b600f73f0a93fc2e589d0c\": container with ID starting with 17674081ae36580c2b2b7856a9e11a4918d86dfc10b600f73f0a93fc2e589d0c not found: ID does not exist" Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.766670 4826 scope.go:117] "RemoveContainer" containerID="d5cd64b5e2b6e6e4bc49e7cec4d47b1dd0f9d4639b8c152986c303fc79a0bb22" Jan 31 07:53:57 crc kubenswrapper[4826]: E0131 07:53:57.767117 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5cd64b5e2b6e6e4bc49e7cec4d47b1dd0f9d4639b8c152986c303fc79a0bb22\": container with ID starting with d5cd64b5e2b6e6e4bc49e7cec4d47b1dd0f9d4639b8c152986c303fc79a0bb22 not found: ID does not exist" containerID="d5cd64b5e2b6e6e4bc49e7cec4d47b1dd0f9d4639b8c152986c303fc79a0bb22" Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.767151 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5cd64b5e2b6e6e4bc49e7cec4d47b1dd0f9d4639b8c152986c303fc79a0bb22"} err="failed to get container status \"d5cd64b5e2b6e6e4bc49e7cec4d47b1dd0f9d4639b8c152986c303fc79a0bb22\": rpc error: code = NotFound desc = could not find container \"d5cd64b5e2b6e6e4bc49e7cec4d47b1dd0f9d4639b8c152986c303fc79a0bb22\": container with ID starting with d5cd64b5e2b6e6e4bc49e7cec4d47b1dd0f9d4639b8c152986c303fc79a0bb22 not found: ID does not exist" Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.859290 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.860151 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa070000-7471-4ca5-be06-fecc9ade01cc-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.867779 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 
31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.897130 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:53:57 crc kubenswrapper[4826]: E0131 07:53:57.897524 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa070000-7471-4ca5-be06-fecc9ade01cc" containerName="ceilometer-notification-agent" Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.897541 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa070000-7471-4ca5-be06-fecc9ade01cc" containerName="ceilometer-notification-agent" Jan 31 07:53:57 crc kubenswrapper[4826]: E0131 07:53:57.897560 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa070000-7471-4ca5-be06-fecc9ade01cc" containerName="ceilometer-central-agent" Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.897567 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa070000-7471-4ca5-be06-fecc9ade01cc" containerName="ceilometer-central-agent" Jan 31 07:53:57 crc kubenswrapper[4826]: E0131 07:53:57.897575 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb06446e-3fa7-4cda-95a2-073de531249a" containerName="init" Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.897617 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb06446e-3fa7-4cda-95a2-073de531249a" containerName="init" Jan 31 07:53:57 crc kubenswrapper[4826]: E0131 07:53:57.897624 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa070000-7471-4ca5-be06-fecc9ade01cc" containerName="proxy-httpd" Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.897629 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa070000-7471-4ca5-be06-fecc9ade01cc" containerName="proxy-httpd" Jan 31 07:53:57 crc kubenswrapper[4826]: E0131 07:53:57.897640 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa070000-7471-4ca5-be06-fecc9ade01cc" containerName="sg-core" Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.897646 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa070000-7471-4ca5-be06-fecc9ade01cc" containerName="sg-core" Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.897812 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa070000-7471-4ca5-be06-fecc9ade01cc" containerName="proxy-httpd" Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.897821 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa070000-7471-4ca5-be06-fecc9ade01cc" containerName="ceilometer-central-agent" Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.897831 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa070000-7471-4ca5-be06-fecc9ade01cc" containerName="ceilometer-notification-agent" Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.897837 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb06446e-3fa7-4cda-95a2-073de531249a" containerName="init" Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.897855 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa070000-7471-4ca5-be06-fecc9ade01cc" containerName="sg-core" Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.899360 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.901910 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.902944 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.906931 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.961828 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2d75005-df86-4e3e-aeb4-d8d98976f35a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f2d75005-df86-4e3e-aeb4-d8d98976f35a\") " pod="openstack/ceilometer-0" Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.961889 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2d75005-df86-4e3e-aeb4-d8d98976f35a-config-data\") pod \"ceilometer-0\" (UID: \"f2d75005-df86-4e3e-aeb4-d8d98976f35a\") " pod="openstack/ceilometer-0" Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.961922 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2d75005-df86-4e3e-aeb4-d8d98976f35a-log-httpd\") pod \"ceilometer-0\" (UID: \"f2d75005-df86-4e3e-aeb4-d8d98976f35a\") " pod="openstack/ceilometer-0" Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.962135 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltlrw\" (UniqueName: \"kubernetes.io/projected/f2d75005-df86-4e3e-aeb4-d8d98976f35a-kube-api-access-ltlrw\") pod \"ceilometer-0\" (UID: \"f2d75005-df86-4e3e-aeb4-d8d98976f35a\") " pod="openstack/ceilometer-0" Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.962206 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2d75005-df86-4e3e-aeb4-d8d98976f35a-scripts\") pod \"ceilometer-0\" (UID: \"f2d75005-df86-4e3e-aeb4-d8d98976f35a\") " pod="openstack/ceilometer-0" Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.962297 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2d75005-df86-4e3e-aeb4-d8d98976f35a-run-httpd\") pod \"ceilometer-0\" (UID: \"f2d75005-df86-4e3e-aeb4-d8d98976f35a\") " pod="openstack/ceilometer-0" Jan 31 07:53:57 crc kubenswrapper[4826]: I0131 07:53:57.962378 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2d75005-df86-4e3e-aeb4-d8d98976f35a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f2d75005-df86-4e3e-aeb4-d8d98976f35a\") " pod="openstack/ceilometer-0" Jan 31 07:53:58 crc kubenswrapper[4826]: I0131 07:53:58.064135 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2d75005-df86-4e3e-aeb4-d8d98976f35a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f2d75005-df86-4e3e-aeb4-d8d98976f35a\") " pod="openstack/ceilometer-0" Jan 31 07:53:58 crc kubenswrapper[4826]: I0131 
07:53:58.064528 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2d75005-df86-4e3e-aeb4-d8d98976f35a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f2d75005-df86-4e3e-aeb4-d8d98976f35a\") " pod="openstack/ceilometer-0" Jan 31 07:53:58 crc kubenswrapper[4826]: I0131 07:53:58.064568 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2d75005-df86-4e3e-aeb4-d8d98976f35a-config-data\") pod \"ceilometer-0\" (UID: \"f2d75005-df86-4e3e-aeb4-d8d98976f35a\") " pod="openstack/ceilometer-0" Jan 31 07:53:58 crc kubenswrapper[4826]: I0131 07:53:58.064597 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2d75005-df86-4e3e-aeb4-d8d98976f35a-log-httpd\") pod \"ceilometer-0\" (UID: \"f2d75005-df86-4e3e-aeb4-d8d98976f35a\") " pod="openstack/ceilometer-0" Jan 31 07:53:58 crc kubenswrapper[4826]: I0131 07:53:58.064641 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltlrw\" (UniqueName: \"kubernetes.io/projected/f2d75005-df86-4e3e-aeb4-d8d98976f35a-kube-api-access-ltlrw\") pod \"ceilometer-0\" (UID: \"f2d75005-df86-4e3e-aeb4-d8d98976f35a\") " pod="openstack/ceilometer-0" Jan 31 07:53:58 crc kubenswrapper[4826]: I0131 07:53:58.064669 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2d75005-df86-4e3e-aeb4-d8d98976f35a-scripts\") pod \"ceilometer-0\" (UID: \"f2d75005-df86-4e3e-aeb4-d8d98976f35a\") " pod="openstack/ceilometer-0" Jan 31 07:53:58 crc kubenswrapper[4826]: I0131 07:53:58.064699 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2d75005-df86-4e3e-aeb4-d8d98976f35a-run-httpd\") pod \"ceilometer-0\" (UID: \"f2d75005-df86-4e3e-aeb4-d8d98976f35a\") " pod="openstack/ceilometer-0" Jan 31 07:53:58 crc kubenswrapper[4826]: I0131 07:53:58.065385 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2d75005-df86-4e3e-aeb4-d8d98976f35a-run-httpd\") pod \"ceilometer-0\" (UID: \"f2d75005-df86-4e3e-aeb4-d8d98976f35a\") " pod="openstack/ceilometer-0" Jan 31 07:53:58 crc kubenswrapper[4826]: I0131 07:53:58.065398 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2d75005-df86-4e3e-aeb4-d8d98976f35a-log-httpd\") pod \"ceilometer-0\" (UID: \"f2d75005-df86-4e3e-aeb4-d8d98976f35a\") " pod="openstack/ceilometer-0" Jan 31 07:53:58 crc kubenswrapper[4826]: I0131 07:53:58.070067 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2d75005-df86-4e3e-aeb4-d8d98976f35a-config-data\") pod \"ceilometer-0\" (UID: \"f2d75005-df86-4e3e-aeb4-d8d98976f35a\") " pod="openstack/ceilometer-0" Jan 31 07:53:58 crc kubenswrapper[4826]: I0131 07:53:58.076736 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2d75005-df86-4e3e-aeb4-d8d98976f35a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f2d75005-df86-4e3e-aeb4-d8d98976f35a\") " pod="openstack/ceilometer-0" Jan 31 07:53:58 crc kubenswrapper[4826]: I0131 07:53:58.079851 4826 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2d75005-df86-4e3e-aeb4-d8d98976f35a-scripts\") pod \"ceilometer-0\" (UID: \"f2d75005-df86-4e3e-aeb4-d8d98976f35a\") " pod="openstack/ceilometer-0" Jan 31 07:53:58 crc kubenswrapper[4826]: I0131 07:53:58.080991 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2d75005-df86-4e3e-aeb4-d8d98976f35a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f2d75005-df86-4e3e-aeb4-d8d98976f35a\") " pod="openstack/ceilometer-0" Jan 31 07:53:58 crc kubenswrapper[4826]: I0131 07:53:58.081292 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltlrw\" (UniqueName: \"kubernetes.io/projected/f2d75005-df86-4e3e-aeb4-d8d98976f35a-kube-api-access-ltlrw\") pod \"ceilometer-0\" (UID: \"f2d75005-df86-4e3e-aeb4-d8d98976f35a\") " pod="openstack/ceilometer-0" Jan 31 07:53:58 crc kubenswrapper[4826]: I0131 07:53:58.241121 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 07:53:58 crc kubenswrapper[4826]: I0131 07:53:58.445326 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-cc9fbb874-77xn5" Jan 31 07:53:58 crc kubenswrapper[4826]: I0131 07:53:58.668440 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:53:58 crc kubenswrapper[4826]: W0131 07:53:58.675308 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2d75005_df86_4e3e_aeb4_d8d98976f35a.slice/crio-174e92eeacf7db5aa575c0234b9097a30f89eb14334f2a45643d8c852ec0b7d1 WatchSource:0}: Error finding container 174e92eeacf7db5aa575c0234b9097a30f89eb14334f2a45643d8c852ec0b7d1: Status 404 returned error can't find the container with id 174e92eeacf7db5aa575c0234b9097a30f89eb14334f2a45643d8c852ec0b7d1 Jan 31 07:53:58 crc kubenswrapper[4826]: I0131 07:53:58.841156 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa070000-7471-4ca5-be06-fecc9ade01cc" path="/var/lib/kubelet/pods/aa070000-7471-4ca5-be06-fecc9ade01cc/volumes" Jan 31 07:53:58 crc kubenswrapper[4826]: I0131 07:53:58.910595 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7fbf887dc4-4528v" Jan 31 07:53:59 crc kubenswrapper[4826]: I0131 07:53:59.009810 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-cc9fbb874-77xn5"] Jan 31 07:53:59 crc kubenswrapper[4826]: I0131 07:53:59.010239 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-cc9fbb874-77xn5" podUID="5b0711e4-1e31-482b-b555-5504ee5f62a7" containerName="barbican-api" containerID="cri-o://01c0c03487677908f511711267ba7c8af4a1feff1268f2c74475f1328de58aa4" gracePeriod=30 Jan 31 07:53:59 crc kubenswrapper[4826]: I0131 07:53:59.010189 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-cc9fbb874-77xn5" podUID="5b0711e4-1e31-482b-b555-5504ee5f62a7" containerName="barbican-api-log" containerID="cri-o://dd57054deec4526da446e743403a3557b4f26b0041a869280e1ad2d6dc27cb11" gracePeriod=30 Jan 31 07:53:59 crc kubenswrapper[4826]: I0131 07:53:59.025702 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-cc9fbb874-77xn5" podUID="5b0711e4-1e31-482b-b555-5504ee5f62a7" containerName="barbican-api" probeResult="failure" 
output="Get \"http://10.217.0.149:9311/healthcheck\": EOF" Jan 31 07:53:59 crc kubenswrapper[4826]: I0131 07:53:59.538395 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2d75005-df86-4e3e-aeb4-d8d98976f35a","Type":"ContainerStarted","Data":"174e92eeacf7db5aa575c0234b9097a30f89eb14334f2a45643d8c852ec0b7d1"} Jan 31 07:53:59 crc kubenswrapper[4826]: I0131 07:53:59.540557 4826 generic.go:334] "Generic (PLEG): container finished" podID="5b0711e4-1e31-482b-b555-5504ee5f62a7" containerID="dd57054deec4526da446e743403a3557b4f26b0041a869280e1ad2d6dc27cb11" exitCode=143 Jan 31 07:53:59 crc kubenswrapper[4826]: I0131 07:53:59.540594 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-cc9fbb874-77xn5" event={"ID":"5b0711e4-1e31-482b-b555-5504ee5f62a7","Type":"ContainerDied","Data":"dd57054deec4526da446e743403a3557b4f26b0041a869280e1ad2d6dc27cb11"} Jan 31 07:54:01 crc kubenswrapper[4826]: I0131 07:54:01.137156 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-869f779d85-c5gj8" Jan 31 07:54:01 crc kubenswrapper[4826]: I0131 07:54:01.193190 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-kj4lk"] Jan 31 07:54:01 crc kubenswrapper[4826]: I0131 07:54:01.202283 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b6dbdb6f5-kj4lk" podUID="7bb7d47c-ac09-4830-9cfe-d7042b2a5971" containerName="dnsmasq-dns" containerID="cri-o://43e9078d02a053ded7991e041ff4a9e5716edf6586f9e136da7e0989f2369d97" gracePeriod=10 Jan 31 07:54:01 crc kubenswrapper[4826]: I0131 07:54:01.558017 4826 generic.go:334] "Generic (PLEG): container finished" podID="7bb7d47c-ac09-4830-9cfe-d7042b2a5971" containerID="43e9078d02a053ded7991e041ff4a9e5716edf6586f9e136da7e0989f2369d97" exitCode=0 Jan 31 07:54:01 crc kubenswrapper[4826]: I0131 07:54:01.558089 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-kj4lk" event={"ID":"7bb7d47c-ac09-4830-9cfe-d7042b2a5971","Type":"ContainerDied","Data":"43e9078d02a053ded7991e041ff4a9e5716edf6586f9e136da7e0989f2369d97"} Jan 31 07:54:01 crc kubenswrapper[4826]: I0131 07:54:01.832137 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-kj4lk" Jan 31 07:54:01 crc kubenswrapper[4826]: I0131 07:54:01.861380 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zq579\" (UniqueName: \"kubernetes.io/projected/7bb7d47c-ac09-4830-9cfe-d7042b2a5971-kube-api-access-zq579\") pod \"7bb7d47c-ac09-4830-9cfe-d7042b2a5971\" (UID: \"7bb7d47c-ac09-4830-9cfe-d7042b2a5971\") " Jan 31 07:54:01 crc kubenswrapper[4826]: I0131 07:54:01.861606 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7bb7d47c-ac09-4830-9cfe-d7042b2a5971-dns-svc\") pod \"7bb7d47c-ac09-4830-9cfe-d7042b2a5971\" (UID: \"7bb7d47c-ac09-4830-9cfe-d7042b2a5971\") " Jan 31 07:54:01 crc kubenswrapper[4826]: I0131 07:54:01.861638 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7bb7d47c-ac09-4830-9cfe-d7042b2a5971-ovsdbserver-nb\") pod \"7bb7d47c-ac09-4830-9cfe-d7042b2a5971\" (UID: \"7bb7d47c-ac09-4830-9cfe-d7042b2a5971\") " Jan 31 07:54:01 crc kubenswrapper[4826]: I0131 07:54:01.861685 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bb7d47c-ac09-4830-9cfe-d7042b2a5971-config\") pod \"7bb7d47c-ac09-4830-9cfe-d7042b2a5971\" (UID: \"7bb7d47c-ac09-4830-9cfe-d7042b2a5971\") " Jan 31 07:54:01 crc kubenswrapper[4826]: I0131 07:54:01.861716 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7bb7d47c-ac09-4830-9cfe-d7042b2a5971-ovsdbserver-sb\") pod \"7bb7d47c-ac09-4830-9cfe-d7042b2a5971\" (UID: \"7bb7d47c-ac09-4830-9cfe-d7042b2a5971\") " Jan 31 07:54:01 crc kubenswrapper[4826]: I0131 07:54:01.870294 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb7d47c-ac09-4830-9cfe-d7042b2a5971-kube-api-access-zq579" (OuterVolumeSpecName: "kube-api-access-zq579") pod "7bb7d47c-ac09-4830-9cfe-d7042b2a5971" (UID: "7bb7d47c-ac09-4830-9cfe-d7042b2a5971"). InnerVolumeSpecName "kube-api-access-zq579". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:54:01 crc kubenswrapper[4826]: I0131 07:54:01.916937 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb7d47c-ac09-4830-9cfe-d7042b2a5971-config" (OuterVolumeSpecName: "config") pod "7bb7d47c-ac09-4830-9cfe-d7042b2a5971" (UID: "7bb7d47c-ac09-4830-9cfe-d7042b2a5971"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:54:01 crc kubenswrapper[4826]: I0131 07:54:01.921450 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb7d47c-ac09-4830-9cfe-d7042b2a5971-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7bb7d47c-ac09-4830-9cfe-d7042b2a5971" (UID: "7bb7d47c-ac09-4830-9cfe-d7042b2a5971"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:54:01 crc kubenswrapper[4826]: I0131 07:54:01.930265 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb7d47c-ac09-4830-9cfe-d7042b2a5971-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7bb7d47c-ac09-4830-9cfe-d7042b2a5971" (UID: "7bb7d47c-ac09-4830-9cfe-d7042b2a5971"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:54:01 crc kubenswrapper[4826]: I0131 07:54:01.934852 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb7d47c-ac09-4830-9cfe-d7042b2a5971-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7bb7d47c-ac09-4830-9cfe-d7042b2a5971" (UID: "7bb7d47c-ac09-4830-9cfe-d7042b2a5971"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:54:01 crc kubenswrapper[4826]: I0131 07:54:01.964340 4826 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7bb7d47c-ac09-4830-9cfe-d7042b2a5971-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:01 crc kubenswrapper[4826]: I0131 07:54:01.964391 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7bb7d47c-ac09-4830-9cfe-d7042b2a5971-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:01 crc kubenswrapper[4826]: I0131 07:54:01.964406 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bb7d47c-ac09-4830-9cfe-d7042b2a5971-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:01 crc kubenswrapper[4826]: I0131 07:54:01.964418 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7bb7d47c-ac09-4830-9cfe-d7042b2a5971-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:01 crc kubenswrapper[4826]: I0131 07:54:01.964429 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zq579\" (UniqueName: \"kubernetes.io/projected/7bb7d47c-ac09-4830-9cfe-d7042b2a5971-kube-api-access-zq579\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:02 crc kubenswrapper[4826]: I0131 07:54:02.129125 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-97cdc8cb-tdpkc" Jan 31 07:54:02 crc kubenswrapper[4826]: I0131 07:54:02.168906 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-65776456b6-g6gkf" Jan 31 07:54:02 crc kubenswrapper[4826]: I0131 07:54:02.568623 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2d75005-df86-4e3e-aeb4-d8d98976f35a","Type":"ContainerStarted","Data":"1c2032cd17b33f5e9e1acff36aaab4ffbcd111e9d8500c3e0a24c2838b2350c6"} Jan 31 07:54:02 crc kubenswrapper[4826]: I0131 07:54:02.570606 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-kj4lk" event={"ID":"7bb7d47c-ac09-4830-9cfe-d7042b2a5971","Type":"ContainerDied","Data":"de900a3f751fd09f7faf943fef86e6653ea1abcadc540deac0f694ae487bcdfd"} Jan 31 07:54:02 crc kubenswrapper[4826]: I0131 07:54:02.570670 4826 scope.go:117] "RemoveContainer" containerID="43e9078d02a053ded7991e041ff4a9e5716edf6586f9e136da7e0989f2369d97" Jan 31 07:54:02 crc kubenswrapper[4826]: I0131 07:54:02.570694 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-kj4lk" Jan 31 07:54:02 crc kubenswrapper[4826]: I0131 07:54:02.613781 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-kj4lk"] Jan 31 07:54:02 crc kubenswrapper[4826]: I0131 07:54:02.619795 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-kj4lk"] Jan 31 07:54:02 crc kubenswrapper[4826]: I0131 07:54:02.790098 4826 scope.go:117] "RemoveContainer" containerID="265ddb6c06c8d4329e071fe2bdd6537d27f52edd6b6f51a09ea0e1790c47e5e1" Jan 31 07:54:02 crc kubenswrapper[4826]: I0131 07:54:02.818824 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb7d47c-ac09-4830-9cfe-d7042b2a5971" path="/var/lib/kubelet/pods/7bb7d47c-ac09-4830-9cfe-d7042b2a5971/volumes" Jan 31 07:54:03 crc kubenswrapper[4826]: I0131 07:54:03.451580 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-cc9fbb874-77xn5" podUID="5b0711e4-1e31-482b-b555-5504ee5f62a7" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.149:9311/healthcheck\": read tcp 10.217.0.2:35018->10.217.0.149:9311: read: connection reset by peer" Jan 31 07:54:03 crc kubenswrapper[4826]: I0131 07:54:03.451576 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-cc9fbb874-77xn5" podUID="5b0711e4-1e31-482b-b555-5504ee5f62a7" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.149:9311/healthcheck\": read tcp 10.217.0.2:35010->10.217.0.149:9311: read: connection reset by peer" Jan 31 07:54:03 crc kubenswrapper[4826]: I0131 07:54:03.452401 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-cc9fbb874-77xn5" podUID="5b0711e4-1e31-482b-b555-5504ee5f62a7" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.149:9311/healthcheck\": dial tcp 10.217.0.149:9311: connect: connection refused" Jan 31 07:54:03 crc kubenswrapper[4826]: I0131 07:54:03.750376 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-65776456b6-g6gkf" Jan 31 07:54:03 crc kubenswrapper[4826]: I0131 07:54:03.930230 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-97cdc8cb-tdpkc" Jan 31 07:54:04 crc kubenswrapper[4826]: I0131 07:54:04.001827 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-65776456b6-g6gkf"] Jan 31 07:54:04 crc kubenswrapper[4826]: I0131 07:54:04.593273 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-cc9fbb874-77xn5" Jan 31 07:54:04 crc kubenswrapper[4826]: I0131 07:54:04.597260 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2d75005-df86-4e3e-aeb4-d8d98976f35a","Type":"ContainerStarted","Data":"1e6c96aa5ee46f7ad34c73b40bfe5e94ffdae3d77a2bfe92142ebccbf9c0e2d5"} Jan 31 07:54:04 crc kubenswrapper[4826]: I0131 07:54:04.599222 4826 generic.go:334] "Generic (PLEG): container finished" podID="5b0711e4-1e31-482b-b555-5504ee5f62a7" containerID="01c0c03487677908f511711267ba7c8af4a1feff1268f2c74475f1328de58aa4" exitCode=0 Jan 31 07:54:04 crc kubenswrapper[4826]: I0131 07:54:04.599398 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-65776456b6-g6gkf" podUID="e4282d7e-76a0-493b-b2c6-0d954d4bed7a" containerName="horizon-log" containerID="cri-o://9464619301db5a0a09586180fbf9f2778935545892902aad640d046b21fb8b23" gracePeriod=30 Jan 31 07:54:04 crc kubenswrapper[4826]: I0131 07:54:04.599569 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-cc9fbb874-77xn5" Jan 31 07:54:04 crc kubenswrapper[4826]: I0131 07:54:04.599910 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-cc9fbb874-77xn5" event={"ID":"5b0711e4-1e31-482b-b555-5504ee5f62a7","Type":"ContainerDied","Data":"01c0c03487677908f511711267ba7c8af4a1feff1268f2c74475f1328de58aa4"} Jan 31 07:54:04 crc kubenswrapper[4826]: I0131 07:54:04.599933 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-cc9fbb874-77xn5" event={"ID":"5b0711e4-1e31-482b-b555-5504ee5f62a7","Type":"ContainerDied","Data":"561acef887297b1455aa61384a972bd3753ab7677ae440bd8916affff24fd5ed"} Jan 31 07:54:04 crc kubenswrapper[4826]: I0131 07:54:04.599948 4826 scope.go:117] "RemoveContainer" containerID="01c0c03487677908f511711267ba7c8af4a1feff1268f2c74475f1328de58aa4" Jan 31 07:54:04 crc kubenswrapper[4826]: I0131 07:54:04.600057 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-65776456b6-g6gkf" podUID="e4282d7e-76a0-493b-b2c6-0d954d4bed7a" containerName="horizon" containerID="cri-o://dddfa3da3163de69516aa5a255f94e47556a43a50db12dac45b8f563367d6e90" gracePeriod=30 Jan 31 07:54:04 crc kubenswrapper[4826]: I0131 07:54:04.631741 4826 scope.go:117] "RemoveContainer" containerID="dd57054deec4526da446e743403a3557b4f26b0041a869280e1ad2d6dc27cb11" Jan 31 07:54:04 crc kubenswrapper[4826]: I0131 07:54:04.655063 4826 scope.go:117] "RemoveContainer" containerID="01c0c03487677908f511711267ba7c8af4a1feff1268f2c74475f1328de58aa4" Jan 31 07:54:04 crc kubenswrapper[4826]: E0131 07:54:04.657276 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01c0c03487677908f511711267ba7c8af4a1feff1268f2c74475f1328de58aa4\": container with ID starting with 01c0c03487677908f511711267ba7c8af4a1feff1268f2c74475f1328de58aa4 not found: ID does not exist" containerID="01c0c03487677908f511711267ba7c8af4a1feff1268f2c74475f1328de58aa4" Jan 31 07:54:04 crc kubenswrapper[4826]: I0131 07:54:04.657336 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01c0c03487677908f511711267ba7c8af4a1feff1268f2c74475f1328de58aa4"} err="failed to get container status \"01c0c03487677908f511711267ba7c8af4a1feff1268f2c74475f1328de58aa4\": rpc error: code = NotFound desc = could not find container 
\"01c0c03487677908f511711267ba7c8af4a1feff1268f2c74475f1328de58aa4\": container with ID starting with 01c0c03487677908f511711267ba7c8af4a1feff1268f2c74475f1328de58aa4 not found: ID does not exist" Jan 31 07:54:04 crc kubenswrapper[4826]: I0131 07:54:04.657368 4826 scope.go:117] "RemoveContainer" containerID="dd57054deec4526da446e743403a3557b4f26b0041a869280e1ad2d6dc27cb11" Jan 31 07:54:04 crc kubenswrapper[4826]: E0131 07:54:04.660216 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd57054deec4526da446e743403a3557b4f26b0041a869280e1ad2d6dc27cb11\": container with ID starting with dd57054deec4526da446e743403a3557b4f26b0041a869280e1ad2d6dc27cb11 not found: ID does not exist" containerID="dd57054deec4526da446e743403a3557b4f26b0041a869280e1ad2d6dc27cb11" Jan 31 07:54:04 crc kubenswrapper[4826]: I0131 07:54:04.660267 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd57054deec4526da446e743403a3557b4f26b0041a869280e1ad2d6dc27cb11"} err="failed to get container status \"dd57054deec4526da446e743403a3557b4f26b0041a869280e1ad2d6dc27cb11\": rpc error: code = NotFound desc = could not find container \"dd57054deec4526da446e743403a3557b4f26b0041a869280e1ad2d6dc27cb11\": container with ID starting with dd57054deec4526da446e743403a3557b4f26b0041a869280e1ad2d6dc27cb11 not found: ID does not exist" Jan 31 07:54:04 crc kubenswrapper[4826]: I0131 07:54:04.713667 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5b0711e4-1e31-482b-b555-5504ee5f62a7-config-data-custom\") pod \"5b0711e4-1e31-482b-b555-5504ee5f62a7\" (UID: \"5b0711e4-1e31-482b-b555-5504ee5f62a7\") " Jan 31 07:54:04 crc kubenswrapper[4826]: I0131 07:54:04.713945 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b0711e4-1e31-482b-b555-5504ee5f62a7-logs\") pod \"5b0711e4-1e31-482b-b555-5504ee5f62a7\" (UID: \"5b0711e4-1e31-482b-b555-5504ee5f62a7\") " Jan 31 07:54:04 crc kubenswrapper[4826]: I0131 07:54:04.714269 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b0711e4-1e31-482b-b555-5504ee5f62a7-config-data\") pod \"5b0711e4-1e31-482b-b555-5504ee5f62a7\" (UID: \"5b0711e4-1e31-482b-b555-5504ee5f62a7\") " Jan 31 07:54:04 crc kubenswrapper[4826]: I0131 07:54:04.714393 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4dxq\" (UniqueName: \"kubernetes.io/projected/5b0711e4-1e31-482b-b555-5504ee5f62a7-kube-api-access-s4dxq\") pod \"5b0711e4-1e31-482b-b555-5504ee5f62a7\" (UID: \"5b0711e4-1e31-482b-b555-5504ee5f62a7\") " Jan 31 07:54:04 crc kubenswrapper[4826]: I0131 07:54:04.714528 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b0711e4-1e31-482b-b555-5504ee5f62a7-combined-ca-bundle\") pod \"5b0711e4-1e31-482b-b555-5504ee5f62a7\" (UID: \"5b0711e4-1e31-482b-b555-5504ee5f62a7\") " Jan 31 07:54:04 crc kubenswrapper[4826]: I0131 07:54:04.714592 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b0711e4-1e31-482b-b555-5504ee5f62a7-logs" (OuterVolumeSpecName: "logs") pod "5b0711e4-1e31-482b-b555-5504ee5f62a7" (UID: "5b0711e4-1e31-482b-b555-5504ee5f62a7"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:54:04 crc kubenswrapper[4826]: I0131 07:54:04.715129 4826 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b0711e4-1e31-482b-b555-5504ee5f62a7-logs\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:04 crc kubenswrapper[4826]: I0131 07:54:04.720415 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b0711e4-1e31-482b-b555-5504ee5f62a7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5b0711e4-1e31-482b-b555-5504ee5f62a7" (UID: "5b0711e4-1e31-482b-b555-5504ee5f62a7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:54:04 crc kubenswrapper[4826]: I0131 07:54:04.725384 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b0711e4-1e31-482b-b555-5504ee5f62a7-kube-api-access-s4dxq" (OuterVolumeSpecName: "kube-api-access-s4dxq") pod "5b0711e4-1e31-482b-b555-5504ee5f62a7" (UID: "5b0711e4-1e31-482b-b555-5504ee5f62a7"). InnerVolumeSpecName "kube-api-access-s4dxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:54:04 crc kubenswrapper[4826]: I0131 07:54:04.746677 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b0711e4-1e31-482b-b555-5504ee5f62a7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5b0711e4-1e31-482b-b555-5504ee5f62a7" (UID: "5b0711e4-1e31-482b-b555-5504ee5f62a7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:54:04 crc kubenswrapper[4826]: I0131 07:54:04.771989 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b0711e4-1e31-482b-b555-5504ee5f62a7-config-data" (OuterVolumeSpecName: "config-data") pod "5b0711e4-1e31-482b-b555-5504ee5f62a7" (UID: "5b0711e4-1e31-482b-b555-5504ee5f62a7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:54:04 crc kubenswrapper[4826]: I0131 07:54:04.816820 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b0711e4-1e31-482b-b555-5504ee5f62a7-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:04 crc kubenswrapper[4826]: I0131 07:54:04.816876 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4dxq\" (UniqueName: \"kubernetes.io/projected/5b0711e4-1e31-482b-b555-5504ee5f62a7-kube-api-access-s4dxq\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:04 crc kubenswrapper[4826]: I0131 07:54:04.816886 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b0711e4-1e31-482b-b555-5504ee5f62a7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:04 crc kubenswrapper[4826]: I0131 07:54:04.816896 4826 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5b0711e4-1e31-482b-b555-5504ee5f62a7-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:04 crc kubenswrapper[4826]: I0131 07:54:04.937493 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-cc9fbb874-77xn5"] Jan 31 07:54:04 crc kubenswrapper[4826]: I0131 07:54:04.946865 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-cc9fbb874-77xn5"] Jan 31 07:54:06 crc kubenswrapper[4826]: I0131 07:54:06.619752 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2d75005-df86-4e3e-aeb4-d8d98976f35a","Type":"ContainerStarted","Data":"73d71736ed29b84242eede086b224cdfd09bd58f6626b4f9c43354e61314f8ac"} Jan 31 07:54:06 crc kubenswrapper[4826]: I0131 07:54:06.823451 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b0711e4-1e31-482b-b555-5504ee5f62a7" path="/var/lib/kubelet/pods/5b0711e4-1e31-482b-b555-5504ee5f62a7/volumes" Jan 31 07:54:07 crc kubenswrapper[4826]: I0131 07:54:07.634197 4826 generic.go:334] "Generic (PLEG): container finished" podID="a88d711d-a1fe-4114-955e-167684da9ecb" containerID="6bfb7cae2f9574ea8239470c7fa42ec08a47a4f302cb2ddc409a1e7d904a9217" exitCode=0 Jan 31 07:54:07 crc kubenswrapper[4826]: I0131 07:54:07.634547 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7wvpt" event={"ID":"a88d711d-a1fe-4114-955e-167684da9ecb","Type":"ContainerDied","Data":"6bfb7cae2f9574ea8239470c7fa42ec08a47a4f302cb2ddc409a1e7d904a9217"} Jan 31 07:54:08 crc kubenswrapper[4826]: I0131 07:54:08.666057 4826 generic.go:334] "Generic (PLEG): container finished" podID="e4282d7e-76a0-493b-b2c6-0d954d4bed7a" containerID="dddfa3da3163de69516aa5a255f94e47556a43a50db12dac45b8f563367d6e90" exitCode=0 Jan 31 07:54:08 crc kubenswrapper[4826]: I0131 07:54:08.666529 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-65776456b6-g6gkf" event={"ID":"e4282d7e-76a0-493b-b2c6-0d954d4bed7a","Type":"ContainerDied","Data":"dddfa3da3163de69516aa5a255f94e47556a43a50db12dac45b8f563367d6e90"} Jan 31 07:54:08 crc kubenswrapper[4826]: I0131 07:54:08.670669 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2d75005-df86-4e3e-aeb4-d8d98976f35a","Type":"ContainerStarted","Data":"a37f21593adee6b6495319bf6750fe35cb833d6e8f580ddc1cbfe1a14ea0be50"} Jan 31 07:54:08 crc kubenswrapper[4826]: I0131 07:54:08.670856 4826 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 07:54:08 crc kubenswrapper[4826]: I0131 07:54:08.706668 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.882296362 podStartE2EDuration="11.706648943s" podCreationTimestamp="2026-01-31 07:53:57 +0000 UTC" firstStartedPulling="2026-01-31 07:53:58.677832102 +0000 UTC m=+1070.531718461" lastFinishedPulling="2026-01-31 07:54:07.502184673 +0000 UTC m=+1079.356071042" observedRunningTime="2026-01-31 07:54:08.704101992 +0000 UTC m=+1080.557988381" watchObservedRunningTime="2026-01-31 07:54:08.706648943 +0000 UTC m=+1080.560535302" Jan 31 07:54:09 crc kubenswrapper[4826]: I0131 07:54:09.222467 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-7wvpt" Jan 31 07:54:09 crc kubenswrapper[4826]: I0131 07:54:09.300620 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a88d711d-a1fe-4114-955e-167684da9ecb-logs\") pod \"a88d711d-a1fe-4114-955e-167684da9ecb\" (UID: \"a88d711d-a1fe-4114-955e-167684da9ecb\") " Jan 31 07:54:09 crc kubenswrapper[4826]: I0131 07:54:09.300669 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcp88\" (UniqueName: \"kubernetes.io/projected/a88d711d-a1fe-4114-955e-167684da9ecb-kube-api-access-vcp88\") pod \"a88d711d-a1fe-4114-955e-167684da9ecb\" (UID: \"a88d711d-a1fe-4114-955e-167684da9ecb\") " Jan 31 07:54:09 crc kubenswrapper[4826]: I0131 07:54:09.300731 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a88d711d-a1fe-4114-955e-167684da9ecb-scripts\") pod \"a88d711d-a1fe-4114-955e-167684da9ecb\" (UID: \"a88d711d-a1fe-4114-955e-167684da9ecb\") " Jan 31 07:54:09 crc kubenswrapper[4826]: I0131 07:54:09.300946 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a88d711d-a1fe-4114-955e-167684da9ecb-config-data\") pod \"a88d711d-a1fe-4114-955e-167684da9ecb\" (UID: \"a88d711d-a1fe-4114-955e-167684da9ecb\") " Jan 31 07:54:09 crc kubenswrapper[4826]: I0131 07:54:09.300984 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a88d711d-a1fe-4114-955e-167684da9ecb-combined-ca-bundle\") pod \"a88d711d-a1fe-4114-955e-167684da9ecb\" (UID: \"a88d711d-a1fe-4114-955e-167684da9ecb\") " Jan 31 07:54:09 crc kubenswrapper[4826]: I0131 07:54:09.301196 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a88d711d-a1fe-4114-955e-167684da9ecb-logs" (OuterVolumeSpecName: "logs") pod "a88d711d-a1fe-4114-955e-167684da9ecb" (UID: "a88d711d-a1fe-4114-955e-167684da9ecb"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:54:09 crc kubenswrapper[4826]: I0131 07:54:09.301616 4826 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a88d711d-a1fe-4114-955e-167684da9ecb-logs\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:09 crc kubenswrapper[4826]: I0131 07:54:09.306404 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a88d711d-a1fe-4114-955e-167684da9ecb-kube-api-access-vcp88" (OuterVolumeSpecName: "kube-api-access-vcp88") pod "a88d711d-a1fe-4114-955e-167684da9ecb" (UID: "a88d711d-a1fe-4114-955e-167684da9ecb"). InnerVolumeSpecName "kube-api-access-vcp88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:54:09 crc kubenswrapper[4826]: I0131 07:54:09.307636 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a88d711d-a1fe-4114-955e-167684da9ecb-scripts" (OuterVolumeSpecName: "scripts") pod "a88d711d-a1fe-4114-955e-167684da9ecb" (UID: "a88d711d-a1fe-4114-955e-167684da9ecb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:54:09 crc kubenswrapper[4826]: I0131 07:54:09.326194 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a88d711d-a1fe-4114-955e-167684da9ecb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a88d711d-a1fe-4114-955e-167684da9ecb" (UID: "a88d711d-a1fe-4114-955e-167684da9ecb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:54:09 crc kubenswrapper[4826]: I0131 07:54:09.342410 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a88d711d-a1fe-4114-955e-167684da9ecb-config-data" (OuterVolumeSpecName: "config-data") pod "a88d711d-a1fe-4114-955e-167684da9ecb" (UID: "a88d711d-a1fe-4114-955e-167684da9ecb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:54:09 crc kubenswrapper[4826]: I0131 07:54:09.402830 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a88d711d-a1fe-4114-955e-167684da9ecb-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:09 crc kubenswrapper[4826]: I0131 07:54:09.402859 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a88d711d-a1fe-4114-955e-167684da9ecb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:09 crc kubenswrapper[4826]: I0131 07:54:09.402873 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcp88\" (UniqueName: \"kubernetes.io/projected/a88d711d-a1fe-4114-955e-167684da9ecb-kube-api-access-vcp88\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:09 crc kubenswrapper[4826]: I0131 07:54:09.402885 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a88d711d-a1fe-4114-955e-167684da9ecb-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:09 crc kubenswrapper[4826]: I0131 07:54:09.713611 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7wvpt" event={"ID":"a88d711d-a1fe-4114-955e-167684da9ecb","Type":"ContainerDied","Data":"7dbccca5688a3f27b052f06b29b1b83c3f399b24e29b07d4b1d3bb2f6a28ac72"} Jan 31 07:54:09 crc kubenswrapper[4826]: I0131 07:54:09.713665 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7dbccca5688a3f27b052f06b29b1b83c3f399b24e29b07d4b1d3bb2f6a28ac72" Jan 31 07:54:09 crc kubenswrapper[4826]: I0131 07:54:09.713686 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-7wvpt" Jan 31 07:54:09 crc kubenswrapper[4826]: I0131 07:54:09.808776 4826 scope.go:117] "RemoveContainer" containerID="c01120e09b8903b374262a0150835f4359857d64714f5cbc119dae4f0735dd62" Jan 31 07:54:09 crc kubenswrapper[4826]: I0131 07:54:09.857523 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-78574fb98-ztjmr"] Jan 31 07:54:09 crc kubenswrapper[4826]: E0131 07:54:09.858062 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bb7d47c-ac09-4830-9cfe-d7042b2a5971" containerName="dnsmasq-dns" Jan 31 07:54:09 crc kubenswrapper[4826]: I0131 07:54:09.858079 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bb7d47c-ac09-4830-9cfe-d7042b2a5971" containerName="dnsmasq-dns" Jan 31 07:54:09 crc kubenswrapper[4826]: E0131 07:54:09.858096 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a88d711d-a1fe-4114-955e-167684da9ecb" containerName="placement-db-sync" Jan 31 07:54:09 crc kubenswrapper[4826]: I0131 07:54:09.858103 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="a88d711d-a1fe-4114-955e-167684da9ecb" containerName="placement-db-sync" Jan 31 07:54:09 crc kubenswrapper[4826]: E0131 07:54:09.858118 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bb7d47c-ac09-4830-9cfe-d7042b2a5971" containerName="init" Jan 31 07:54:09 crc kubenswrapper[4826]: I0131 07:54:09.858144 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bb7d47c-ac09-4830-9cfe-d7042b2a5971" containerName="init" Jan 31 07:54:09 crc kubenswrapper[4826]: E0131 07:54:09.858165 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b0711e4-1e31-482b-b555-5504ee5f62a7" containerName="barbican-api" Jan 31 07:54:09 crc 
kubenswrapper[4826]: I0131 07:54:09.858172 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b0711e4-1e31-482b-b555-5504ee5f62a7" containerName="barbican-api" Jan 31 07:54:09 crc kubenswrapper[4826]: E0131 07:54:09.858571 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b0711e4-1e31-482b-b555-5504ee5f62a7" containerName="barbican-api-log" Jan 31 07:54:09 crc kubenswrapper[4826]: I0131 07:54:09.858885 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b0711e4-1e31-482b-b555-5504ee5f62a7" containerName="barbican-api-log" Jan 31 07:54:09 crc kubenswrapper[4826]: I0131 07:54:09.859325 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b0711e4-1e31-482b-b555-5504ee5f62a7" containerName="barbican-api" Jan 31 07:54:09 crc kubenswrapper[4826]: I0131 07:54:09.859352 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="a88d711d-a1fe-4114-955e-167684da9ecb" containerName="placement-db-sync" Jan 31 07:54:09 crc kubenswrapper[4826]: I0131 07:54:09.859366 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bb7d47c-ac09-4830-9cfe-d7042b2a5971" containerName="dnsmasq-dns" Jan 31 07:54:09 crc kubenswrapper[4826]: I0131 07:54:09.859384 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b0711e4-1e31-482b-b555-5504ee5f62a7" containerName="barbican-api-log" Jan 31 07:54:09 crc kubenswrapper[4826]: I0131 07:54:09.860406 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-78574fb98-ztjmr" Jan 31 07:54:09 crc kubenswrapper[4826]: I0131 07:54:09.862947 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 31 07:54:09 crc kubenswrapper[4826]: I0131 07:54:09.865331 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-hg7vn" Jan 31 07:54:09 crc kubenswrapper[4826]: I0131 07:54:09.865703 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 31 07:54:09 crc kubenswrapper[4826]: I0131 07:54:09.865740 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 31 07:54:09 crc kubenswrapper[4826]: I0131 07:54:09.866942 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 31 07:54:09 crc kubenswrapper[4826]: I0131 07:54:09.877433 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-78574fb98-ztjmr"] Jan 31 07:54:10 crc kubenswrapper[4826]: I0131 07:54:10.024139 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/526125ca-e810-4cbb-9b5d-5631848e89e3-logs\") pod \"placement-78574fb98-ztjmr\" (UID: \"526125ca-e810-4cbb-9b5d-5631848e89e3\") " pod="openstack/placement-78574fb98-ztjmr" Jan 31 07:54:10 crc kubenswrapper[4826]: I0131 07:54:10.024477 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/526125ca-e810-4cbb-9b5d-5631848e89e3-public-tls-certs\") pod \"placement-78574fb98-ztjmr\" (UID: \"526125ca-e810-4cbb-9b5d-5631848e89e3\") " pod="openstack/placement-78574fb98-ztjmr" Jan 31 07:54:10 crc kubenswrapper[4826]: I0131 07:54:10.024627 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/526125ca-e810-4cbb-9b5d-5631848e89e3-internal-tls-certs\") pod \"placement-78574fb98-ztjmr\" (UID: \"526125ca-e810-4cbb-9b5d-5631848e89e3\") " pod="openstack/placement-78574fb98-ztjmr" Jan 31 07:54:10 crc kubenswrapper[4826]: I0131 07:54:10.024719 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/526125ca-e810-4cbb-9b5d-5631848e89e3-scripts\") pod \"placement-78574fb98-ztjmr\" (UID: \"526125ca-e810-4cbb-9b5d-5631848e89e3\") " pod="openstack/placement-78574fb98-ztjmr" Jan 31 07:54:10 crc kubenswrapper[4826]: I0131 07:54:10.024830 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/526125ca-e810-4cbb-9b5d-5631848e89e3-config-data\") pod \"placement-78574fb98-ztjmr\" (UID: \"526125ca-e810-4cbb-9b5d-5631848e89e3\") " pod="openstack/placement-78574fb98-ztjmr" Jan 31 07:54:10 crc kubenswrapper[4826]: I0131 07:54:10.025159 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84rhw\" (UniqueName: \"kubernetes.io/projected/526125ca-e810-4cbb-9b5d-5631848e89e3-kube-api-access-84rhw\") pod \"placement-78574fb98-ztjmr\" (UID: \"526125ca-e810-4cbb-9b5d-5631848e89e3\") " pod="openstack/placement-78574fb98-ztjmr" Jan 31 07:54:10 crc kubenswrapper[4826]: I0131 07:54:10.025306 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/526125ca-e810-4cbb-9b5d-5631848e89e3-combined-ca-bundle\") pod \"placement-78574fb98-ztjmr\" (UID: \"526125ca-e810-4cbb-9b5d-5631848e89e3\") " pod="openstack/placement-78574fb98-ztjmr" Jan 31 07:54:10 crc kubenswrapper[4826]: I0131 07:54:10.060217 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-65776456b6-g6gkf" podUID="e4282d7e-76a0-493b-b2c6-0d954d4bed7a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.142:8443: connect: connection refused" Jan 31 07:54:10 crc kubenswrapper[4826]: I0131 07:54:10.127261 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84rhw\" (UniqueName: \"kubernetes.io/projected/526125ca-e810-4cbb-9b5d-5631848e89e3-kube-api-access-84rhw\") pod \"placement-78574fb98-ztjmr\" (UID: \"526125ca-e810-4cbb-9b5d-5631848e89e3\") " pod="openstack/placement-78574fb98-ztjmr" Jan 31 07:54:10 crc kubenswrapper[4826]: I0131 07:54:10.127321 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/526125ca-e810-4cbb-9b5d-5631848e89e3-combined-ca-bundle\") pod \"placement-78574fb98-ztjmr\" (UID: \"526125ca-e810-4cbb-9b5d-5631848e89e3\") " pod="openstack/placement-78574fb98-ztjmr" Jan 31 07:54:10 crc kubenswrapper[4826]: I0131 07:54:10.127371 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/526125ca-e810-4cbb-9b5d-5631848e89e3-logs\") pod \"placement-78574fb98-ztjmr\" (UID: \"526125ca-e810-4cbb-9b5d-5631848e89e3\") " pod="openstack/placement-78574fb98-ztjmr" Jan 31 07:54:10 crc kubenswrapper[4826]: I0131 07:54:10.127410 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/526125ca-e810-4cbb-9b5d-5631848e89e3-public-tls-certs\") pod \"placement-78574fb98-ztjmr\" (UID: \"526125ca-e810-4cbb-9b5d-5631848e89e3\") " pod="openstack/placement-78574fb98-ztjmr" Jan 31 07:54:10 crc kubenswrapper[4826]: I0131 07:54:10.127440 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/526125ca-e810-4cbb-9b5d-5631848e89e3-internal-tls-certs\") pod \"placement-78574fb98-ztjmr\" (UID: \"526125ca-e810-4cbb-9b5d-5631848e89e3\") " pod="openstack/placement-78574fb98-ztjmr" Jan 31 07:54:10 crc kubenswrapper[4826]: I0131 07:54:10.127470 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/526125ca-e810-4cbb-9b5d-5631848e89e3-scripts\") pod \"placement-78574fb98-ztjmr\" (UID: \"526125ca-e810-4cbb-9b5d-5631848e89e3\") " pod="openstack/placement-78574fb98-ztjmr" Jan 31 07:54:10 crc kubenswrapper[4826]: I0131 07:54:10.127485 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/526125ca-e810-4cbb-9b5d-5631848e89e3-config-data\") pod \"placement-78574fb98-ztjmr\" (UID: \"526125ca-e810-4cbb-9b5d-5631848e89e3\") " pod="openstack/placement-78574fb98-ztjmr" Jan 31 07:54:10 crc kubenswrapper[4826]: I0131 07:54:10.128203 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/526125ca-e810-4cbb-9b5d-5631848e89e3-logs\") pod \"placement-78574fb98-ztjmr\" (UID: \"526125ca-e810-4cbb-9b5d-5631848e89e3\") " pod="openstack/placement-78574fb98-ztjmr" Jan 31 07:54:10 crc kubenswrapper[4826]: I0131 07:54:10.133129 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/526125ca-e810-4cbb-9b5d-5631848e89e3-public-tls-certs\") pod \"placement-78574fb98-ztjmr\" (UID: \"526125ca-e810-4cbb-9b5d-5631848e89e3\") " pod="openstack/placement-78574fb98-ztjmr" Jan 31 07:54:10 crc kubenswrapper[4826]: I0131 07:54:10.133661 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/526125ca-e810-4cbb-9b5d-5631848e89e3-combined-ca-bundle\") pod \"placement-78574fb98-ztjmr\" (UID: \"526125ca-e810-4cbb-9b5d-5631848e89e3\") " pod="openstack/placement-78574fb98-ztjmr" Jan 31 07:54:10 crc kubenswrapper[4826]: I0131 07:54:10.134163 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/526125ca-e810-4cbb-9b5d-5631848e89e3-scripts\") pod \"placement-78574fb98-ztjmr\" (UID: \"526125ca-e810-4cbb-9b5d-5631848e89e3\") " pod="openstack/placement-78574fb98-ztjmr" Jan 31 07:54:10 crc kubenswrapper[4826]: I0131 07:54:10.134575 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/526125ca-e810-4cbb-9b5d-5631848e89e3-internal-tls-certs\") pod \"placement-78574fb98-ztjmr\" (UID: \"526125ca-e810-4cbb-9b5d-5631848e89e3\") " pod="openstack/placement-78574fb98-ztjmr" Jan 31 07:54:10 crc kubenswrapper[4826]: I0131 07:54:10.136871 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/526125ca-e810-4cbb-9b5d-5631848e89e3-config-data\") pod \"placement-78574fb98-ztjmr\" (UID: \"526125ca-e810-4cbb-9b5d-5631848e89e3\") " pod="openstack/placement-78574fb98-ztjmr" Jan 
31 07:54:10 crc kubenswrapper[4826]: I0131 07:54:10.155090 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84rhw\" (UniqueName: \"kubernetes.io/projected/526125ca-e810-4cbb-9b5d-5631848e89e3-kube-api-access-84rhw\") pod \"placement-78574fb98-ztjmr\" (UID: \"526125ca-e810-4cbb-9b5d-5631848e89e3\") " pod="openstack/placement-78574fb98-ztjmr" Jan 31 07:54:10 crc kubenswrapper[4826]: I0131 07:54:10.246879 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-78574fb98-ztjmr" Jan 31 07:54:10 crc kubenswrapper[4826]: I0131 07:54:10.533468 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7676c745b9-d7652" Jan 31 07:54:10 crc kubenswrapper[4826]: I0131 07:54:10.727142 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-68cf7546dd-6tbvn_313879a0-5213-4c68-aea1-ba01d960b862/neutron-httpd/1.log" Jan 31 07:54:10 crc kubenswrapper[4826]: I0131 07:54:10.728638 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68cf7546dd-6tbvn" event={"ID":"313879a0-5213-4c68-aea1-ba01d960b862","Type":"ContainerStarted","Data":"4db933e4a0a303016c167a965f877ab0c90541e77b1feabac0bb9cab3f66b977"} Jan 31 07:54:10 crc kubenswrapper[4826]: I0131 07:54:10.729556 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-68cf7546dd-6tbvn" Jan 31 07:54:10 crc kubenswrapper[4826]: W0131 07:54:10.790857 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod526125ca_e810_4cbb_9b5d_5631848e89e3.slice/crio-9d9b8d0c0c7aa2719204b9f24605341d52d446461d6168514a344135ef880bd8 WatchSource:0}: Error finding container 9d9b8d0c0c7aa2719204b9f24605341d52d446461d6168514a344135ef880bd8: Status 404 returned error can't find the container with id 9d9b8d0c0c7aa2719204b9f24605341d52d446461d6168514a344135ef880bd8 Jan 31 07:54:10 crc kubenswrapper[4826]: I0131 07:54:10.791923 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-78574fb98-ztjmr"] Jan 31 07:54:11 crc kubenswrapper[4826]: I0131 07:54:11.739853 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-78574fb98-ztjmr" event={"ID":"526125ca-e810-4cbb-9b5d-5631848e89e3","Type":"ContainerStarted","Data":"216921c1cc4c8ea9b6ea0990f37119a1f6764e442413a6a9f48631d8f99328d4"} Jan 31 07:54:11 crc kubenswrapper[4826]: I0131 07:54:11.740126 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-78574fb98-ztjmr" event={"ID":"526125ca-e810-4cbb-9b5d-5631848e89e3","Type":"ContainerStarted","Data":"9d9b8d0c0c7aa2719204b9f24605341d52d446461d6168514a344135ef880bd8"} Jan 31 07:54:11 crc kubenswrapper[4826]: I0131 07:54:11.741664 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-68cf7546dd-6tbvn_313879a0-5213-4c68-aea1-ba01d960b862/neutron-httpd/2.log" Jan 31 07:54:11 crc kubenswrapper[4826]: I0131 07:54:11.742232 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-68cf7546dd-6tbvn_313879a0-5213-4c68-aea1-ba01d960b862/neutron-httpd/1.log" Jan 31 07:54:11 crc kubenswrapper[4826]: I0131 07:54:11.742679 4826 generic.go:334] "Generic (PLEG): container finished" podID="313879a0-5213-4c68-aea1-ba01d960b862" containerID="4db933e4a0a303016c167a965f877ab0c90541e77b1feabac0bb9cab3f66b977" exitCode=1 Jan 31 07:54:11 crc kubenswrapper[4826]: I0131 07:54:11.742712 4826 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68cf7546dd-6tbvn" event={"ID":"313879a0-5213-4c68-aea1-ba01d960b862","Type":"ContainerDied","Data":"4db933e4a0a303016c167a965f877ab0c90541e77b1feabac0bb9cab3f66b977"} Jan 31 07:54:11 crc kubenswrapper[4826]: I0131 07:54:11.742743 4826 scope.go:117] "RemoveContainer" containerID="c01120e09b8903b374262a0150835f4359857d64714f5cbc119dae4f0735dd62" Jan 31 07:54:11 crc kubenswrapper[4826]: I0131 07:54:11.743378 4826 scope.go:117] "RemoveContainer" containerID="4db933e4a0a303016c167a965f877ab0c90541e77b1feabac0bb9cab3f66b977" Jan 31 07:54:11 crc kubenswrapper[4826]: E0131 07:54:11.743583 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"neutron-httpd\" with CrashLoopBackOff: \"back-off 20s restarting failed container=neutron-httpd pod=neutron-68cf7546dd-6tbvn_openstack(313879a0-5213-4c68-aea1-ba01d960b862)\"" pod="openstack/neutron-68cf7546dd-6tbvn" podUID="313879a0-5213-4c68-aea1-ba01d960b862" Jan 31 07:54:12 crc kubenswrapper[4826]: I0131 07:54:12.753321 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-78574fb98-ztjmr" event={"ID":"526125ca-e810-4cbb-9b5d-5631848e89e3","Type":"ContainerStarted","Data":"151198ce81c6f97b22837f6332694b6deb56f7acc06ae71867dc0852e2524def"} Jan 31 07:54:12 crc kubenswrapper[4826]: I0131 07:54:12.753783 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-78574fb98-ztjmr" Jan 31 07:54:12 crc kubenswrapper[4826]: I0131 07:54:12.753813 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-78574fb98-ztjmr" Jan 31 07:54:12 crc kubenswrapper[4826]: I0131 07:54:12.755364 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-68cf7546dd-6tbvn_313879a0-5213-4c68-aea1-ba01d960b862/neutron-httpd/2.log" Jan 31 07:54:12 crc kubenswrapper[4826]: I0131 07:54:12.756400 4826 scope.go:117] "RemoveContainer" containerID="4db933e4a0a303016c167a965f877ab0c90541e77b1feabac0bb9cab3f66b977" Jan 31 07:54:12 crc kubenswrapper[4826]: E0131 07:54:12.756679 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"neutron-httpd\" with CrashLoopBackOff: \"back-off 20s restarting failed container=neutron-httpd pod=neutron-68cf7546dd-6tbvn_openstack(313879a0-5213-4c68-aea1-ba01d960b862)\"" pod="openstack/neutron-68cf7546dd-6tbvn" podUID="313879a0-5213-4c68-aea1-ba01d960b862" Jan 31 07:54:12 crc kubenswrapper[4826]: I0131 07:54:12.785356 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-78574fb98-ztjmr" podStartSLOduration=3.785328718 podStartE2EDuration="3.785328718s" podCreationTimestamp="2026-01-31 07:54:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:54:12.782699954 +0000 UTC m=+1084.636586333" watchObservedRunningTime="2026-01-31 07:54:12.785328718 +0000 UTC m=+1084.639215087" Jan 31 07:54:15 crc kubenswrapper[4826]: I0131 07:54:15.057003 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 31 07:54:15 crc kubenswrapper[4826]: I0131 07:54:15.058123 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 31 07:54:15 crc kubenswrapper[4826]: I0131 07:54:15.063358 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 31 07:54:15 crc kubenswrapper[4826]: I0131 07:54:15.067109 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 31 07:54:15 crc kubenswrapper[4826]: I0131 07:54:15.067184 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-ztsks" Jan 31 07:54:15 crc kubenswrapper[4826]: I0131 07:54:15.068639 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 31 07:54:15 crc kubenswrapper[4826]: I0131 07:54:15.118840 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b637c7b-111d-4820-acc3-9cd5bff101e7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4b637c7b-111d-4820-acc3-9cd5bff101e7\") " pod="openstack/openstackclient" Jan 31 07:54:15 crc kubenswrapper[4826]: I0131 07:54:15.118913 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4b637c7b-111d-4820-acc3-9cd5bff101e7-openstack-config\") pod \"openstackclient\" (UID: \"4b637c7b-111d-4820-acc3-9cd5bff101e7\") " pod="openstack/openstackclient" Jan 31 07:54:15 crc kubenswrapper[4826]: I0131 07:54:15.118956 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hppq5\" (UniqueName: \"kubernetes.io/projected/4b637c7b-111d-4820-acc3-9cd5bff101e7-kube-api-access-hppq5\") pod \"openstackclient\" (UID: \"4b637c7b-111d-4820-acc3-9cd5bff101e7\") " pod="openstack/openstackclient" Jan 31 07:54:15 crc kubenswrapper[4826]: I0131 07:54:15.118999 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4b637c7b-111d-4820-acc3-9cd5bff101e7-openstack-config-secret\") pod \"openstackclient\" (UID: \"4b637c7b-111d-4820-acc3-9cd5bff101e7\") " pod="openstack/openstackclient" Jan 31 07:54:15 crc kubenswrapper[4826]: I0131 07:54:15.220009 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b637c7b-111d-4820-acc3-9cd5bff101e7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4b637c7b-111d-4820-acc3-9cd5bff101e7\") " pod="openstack/openstackclient" Jan 31 07:54:15 crc kubenswrapper[4826]: I0131 07:54:15.220071 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4b637c7b-111d-4820-acc3-9cd5bff101e7-openstack-config\") pod \"openstackclient\" (UID: \"4b637c7b-111d-4820-acc3-9cd5bff101e7\") " pod="openstack/openstackclient" Jan 31 07:54:15 crc kubenswrapper[4826]: I0131 07:54:15.220098 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hppq5\" (UniqueName: \"kubernetes.io/projected/4b637c7b-111d-4820-acc3-9cd5bff101e7-kube-api-access-hppq5\") pod \"openstackclient\" (UID: \"4b637c7b-111d-4820-acc3-9cd5bff101e7\") " pod="openstack/openstackclient" Jan 31 07:54:15 crc kubenswrapper[4826]: I0131 07:54:15.220121 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4b637c7b-111d-4820-acc3-9cd5bff101e7-openstack-config-secret\") pod \"openstackclient\" (UID: \"4b637c7b-111d-4820-acc3-9cd5bff101e7\") " pod="openstack/openstackclient" Jan 31 07:54:15 crc kubenswrapper[4826]: I0131 07:54:15.221342 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4b637c7b-111d-4820-acc3-9cd5bff101e7-openstack-config\") pod \"openstackclient\" (UID: \"4b637c7b-111d-4820-acc3-9cd5bff101e7\") " pod="openstack/openstackclient" Jan 31 07:54:15 crc kubenswrapper[4826]: I0131 07:54:15.225548 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4b637c7b-111d-4820-acc3-9cd5bff101e7-openstack-config-secret\") pod \"openstackclient\" (UID: \"4b637c7b-111d-4820-acc3-9cd5bff101e7\") " pod="openstack/openstackclient" Jan 31 07:54:15 crc kubenswrapper[4826]: I0131 07:54:15.238639 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b637c7b-111d-4820-acc3-9cd5bff101e7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4b637c7b-111d-4820-acc3-9cd5bff101e7\") " pod="openstack/openstackclient" Jan 31 07:54:15 crc kubenswrapper[4826]: I0131 07:54:15.242459 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hppq5\" (UniqueName: \"kubernetes.io/projected/4b637c7b-111d-4820-acc3-9cd5bff101e7-kube-api-access-hppq5\") pod \"openstackclient\" (UID: \"4b637c7b-111d-4820-acc3-9cd5bff101e7\") " pod="openstack/openstackclient" Jan 31 07:54:15 crc kubenswrapper[4826]: I0131 07:54:15.376470 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 31 07:54:15 crc kubenswrapper[4826]: I0131 07:54:15.851233 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 31 07:54:16 crc kubenswrapper[4826]: I0131 07:54:16.789034 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"4b637c7b-111d-4820-acc3-9cd5bff101e7","Type":"ContainerStarted","Data":"a9a26938d25253bcaf7fe8757ec9cf33e385c31ab5f3b8648f1096a73e4e2388"} Jan 31 07:54:20 crc kubenswrapper[4826]: I0131 07:54:20.059479 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-65776456b6-g6gkf" podUID="e4282d7e-76a0-493b-b2c6-0d954d4bed7a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.142:8443: connect: connection refused" Jan 31 07:54:21 crc kubenswrapper[4826]: I0131 07:54:21.232105 4826 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/neutron-68cf7546dd-6tbvn" Jan 31 07:54:21 crc kubenswrapper[4826]: I0131 07:54:21.233265 4826 scope.go:117] "RemoveContainer" containerID="4db933e4a0a303016c167a965f877ab0c90541e77b1feabac0bb9cab3f66b977" Jan 31 07:54:21 crc kubenswrapper[4826]: E0131 07:54:21.233485 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"neutron-httpd\" with CrashLoopBackOff: \"back-off 20s restarting failed container=neutron-httpd pod=neutron-68cf7546dd-6tbvn_openstack(313879a0-5213-4c68-aea1-ba01d960b862)\"" pod="openstack/neutron-68cf7546dd-6tbvn" podUID="313879a0-5213-4c68-aea1-ba01d960b862" Jan 31 07:54:21 crc kubenswrapper[4826]: I0131 07:54:21.235277 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-68cf7546dd-6tbvn" podUID="313879a0-5213-4c68-aea1-ba01d960b862" containerName="neutron-api" probeResult="failure" output="Get \"http://10.217.0.152:9696/\": dial tcp 10.217.0.152:9696: connect: connection refused" Jan 31 07:54:23 crc kubenswrapper[4826]: I0131 07:54:23.168655 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-647f745999-xttjx" Jan 31 07:54:23 crc kubenswrapper[4826]: I0131 07:54:23.238198 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-68cf7546dd-6tbvn"] Jan 31 07:54:23 crc kubenswrapper[4826]: I0131 07:54:23.238441 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-68cf7546dd-6tbvn" podUID="313879a0-5213-4c68-aea1-ba01d960b862" containerName="neutron-api" containerID="cri-o://9bd9e8f153ab6c9e67db29717d848102a56a73a6967314d527fb428a5d54c076" gracePeriod=30 Jan 31 07:54:25 crc kubenswrapper[4826]: I0131 07:54:25.183089 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:54:25 crc kubenswrapper[4826]: I0131 07:54:25.183833 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f2d75005-df86-4e3e-aeb4-d8d98976f35a" containerName="ceilometer-central-agent" containerID="cri-o://1c2032cd17b33f5e9e1acff36aaab4ffbcd111e9d8500c3e0a24c2838b2350c6" gracePeriod=30 Jan 31 07:54:25 crc kubenswrapper[4826]: I0131 07:54:25.185037 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f2d75005-df86-4e3e-aeb4-d8d98976f35a" containerName="proxy-httpd" 
containerID="cri-o://a37f21593adee6b6495319bf6750fe35cb833d6e8f580ddc1cbfe1a14ea0be50" gracePeriod=30 Jan 31 07:54:25 crc kubenswrapper[4826]: I0131 07:54:25.185073 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f2d75005-df86-4e3e-aeb4-d8d98976f35a" containerName="ceilometer-notification-agent" containerID="cri-o://1e6c96aa5ee46f7ad34c73b40bfe5e94ffdae3d77a2bfe92142ebccbf9c0e2d5" gracePeriod=30 Jan 31 07:54:25 crc kubenswrapper[4826]: I0131 07:54:25.185051 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f2d75005-df86-4e3e-aeb4-d8d98976f35a" containerName="sg-core" containerID="cri-o://73d71736ed29b84242eede086b224cdfd09bd58f6626b4f9c43354e61314f8ac" gracePeriod=30 Jan 31 07:54:25 crc kubenswrapper[4826]: I0131 07:54:25.199803 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="f2d75005-df86-4e3e-aeb4-d8d98976f35a" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.154:3000/\": EOF" Jan 31 07:54:25 crc kubenswrapper[4826]: E0131 07:54:25.227391 4826 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2d75005_df86_4e3e_aeb4_d8d98976f35a.slice/crio-73d71736ed29b84242eede086b224cdfd09bd58f6626b4f9c43354e61314f8ac.scope\": RecentStats: unable to find data in memory cache]" Jan 31 07:54:25 crc kubenswrapper[4826]: I0131 07:54:25.899736 4826 generic.go:334] "Generic (PLEG): container finished" podID="5f870e24-0e35-4ee6-805b-f81617554dc2" containerID="6430c1a2d7552a478c284e52145067d9f399ba42e28b9104d38881f5089df21f" exitCode=0 Jan 31 07:54:25 crc kubenswrapper[4826]: I0131 07:54:25.899800 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-9tmn2" event={"ID":"5f870e24-0e35-4ee6-805b-f81617554dc2","Type":"ContainerDied","Data":"6430c1a2d7552a478c284e52145067d9f399ba42e28b9104d38881f5089df21f"} Jan 31 07:54:25 crc kubenswrapper[4826]: I0131 07:54:25.914723 4826 generic.go:334] "Generic (PLEG): container finished" podID="f2d75005-df86-4e3e-aeb4-d8d98976f35a" containerID="a37f21593adee6b6495319bf6750fe35cb833d6e8f580ddc1cbfe1a14ea0be50" exitCode=0 Jan 31 07:54:25 crc kubenswrapper[4826]: I0131 07:54:25.914771 4826 generic.go:334] "Generic (PLEG): container finished" podID="f2d75005-df86-4e3e-aeb4-d8d98976f35a" containerID="73d71736ed29b84242eede086b224cdfd09bd58f6626b4f9c43354e61314f8ac" exitCode=2 Jan 31 07:54:25 crc kubenswrapper[4826]: I0131 07:54:25.914784 4826 generic.go:334] "Generic (PLEG): container finished" podID="f2d75005-df86-4e3e-aeb4-d8d98976f35a" containerID="1c2032cd17b33f5e9e1acff36aaab4ffbcd111e9d8500c3e0a24c2838b2350c6" exitCode=0 Jan 31 07:54:25 crc kubenswrapper[4826]: I0131 07:54:25.914811 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2d75005-df86-4e3e-aeb4-d8d98976f35a","Type":"ContainerDied","Data":"a37f21593adee6b6495319bf6750fe35cb833d6e8f580ddc1cbfe1a14ea0be50"} Jan 31 07:54:25 crc kubenswrapper[4826]: I0131 07:54:25.914842 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2d75005-df86-4e3e-aeb4-d8d98976f35a","Type":"ContainerDied","Data":"73d71736ed29b84242eede086b224cdfd09bd58f6626b4f9c43354e61314f8ac"} Jan 31 07:54:25 crc kubenswrapper[4826]: I0131 07:54:25.914875 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"f2d75005-df86-4e3e-aeb4-d8d98976f35a","Type":"ContainerDied","Data":"1c2032cd17b33f5e9e1acff36aaab4ffbcd111e9d8500c3e0a24c2838b2350c6"} Jan 31 07:54:26 crc kubenswrapper[4826]: I0131 07:54:26.349279 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 07:54:26 crc kubenswrapper[4826]: I0131 07:54:26.452244 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2d75005-df86-4e3e-aeb4-d8d98976f35a-sg-core-conf-yaml\") pod \"f2d75005-df86-4e3e-aeb4-d8d98976f35a\" (UID: \"f2d75005-df86-4e3e-aeb4-d8d98976f35a\") " Jan 31 07:54:26 crc kubenswrapper[4826]: I0131 07:54:26.452395 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltlrw\" (UniqueName: \"kubernetes.io/projected/f2d75005-df86-4e3e-aeb4-d8d98976f35a-kube-api-access-ltlrw\") pod \"f2d75005-df86-4e3e-aeb4-d8d98976f35a\" (UID: \"f2d75005-df86-4e3e-aeb4-d8d98976f35a\") " Jan 31 07:54:26 crc kubenswrapper[4826]: I0131 07:54:26.452455 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2d75005-df86-4e3e-aeb4-d8d98976f35a-run-httpd\") pod \"f2d75005-df86-4e3e-aeb4-d8d98976f35a\" (UID: \"f2d75005-df86-4e3e-aeb4-d8d98976f35a\") " Jan 31 07:54:26 crc kubenswrapper[4826]: I0131 07:54:26.452491 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2d75005-df86-4e3e-aeb4-d8d98976f35a-log-httpd\") pod \"f2d75005-df86-4e3e-aeb4-d8d98976f35a\" (UID: \"f2d75005-df86-4e3e-aeb4-d8d98976f35a\") " Jan 31 07:54:26 crc kubenswrapper[4826]: I0131 07:54:26.452518 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2d75005-df86-4e3e-aeb4-d8d98976f35a-config-data\") pod \"f2d75005-df86-4e3e-aeb4-d8d98976f35a\" (UID: \"f2d75005-df86-4e3e-aeb4-d8d98976f35a\") " Jan 31 07:54:26 crc kubenswrapper[4826]: I0131 07:54:26.452535 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2d75005-df86-4e3e-aeb4-d8d98976f35a-scripts\") pod \"f2d75005-df86-4e3e-aeb4-d8d98976f35a\" (UID: \"f2d75005-df86-4e3e-aeb4-d8d98976f35a\") " Jan 31 07:54:26 crc kubenswrapper[4826]: I0131 07:54:26.452666 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2d75005-df86-4e3e-aeb4-d8d98976f35a-combined-ca-bundle\") pod \"f2d75005-df86-4e3e-aeb4-d8d98976f35a\" (UID: \"f2d75005-df86-4e3e-aeb4-d8d98976f35a\") " Jan 31 07:54:26 crc kubenswrapper[4826]: I0131 07:54:26.453079 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2d75005-df86-4e3e-aeb4-d8d98976f35a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f2d75005-df86-4e3e-aeb4-d8d98976f35a" (UID: "f2d75005-df86-4e3e-aeb4-d8d98976f35a"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:54:26 crc kubenswrapper[4826]: I0131 07:54:26.453137 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2d75005-df86-4e3e-aeb4-d8d98976f35a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f2d75005-df86-4e3e-aeb4-d8d98976f35a" (UID: "f2d75005-df86-4e3e-aeb4-d8d98976f35a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:54:26 crc kubenswrapper[4826]: I0131 07:54:26.464875 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2d75005-df86-4e3e-aeb4-d8d98976f35a-scripts" (OuterVolumeSpecName: "scripts") pod "f2d75005-df86-4e3e-aeb4-d8d98976f35a" (UID: "f2d75005-df86-4e3e-aeb4-d8d98976f35a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:54:26 crc kubenswrapper[4826]: I0131 07:54:26.464912 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2d75005-df86-4e3e-aeb4-d8d98976f35a-kube-api-access-ltlrw" (OuterVolumeSpecName: "kube-api-access-ltlrw") pod "f2d75005-df86-4e3e-aeb4-d8d98976f35a" (UID: "f2d75005-df86-4e3e-aeb4-d8d98976f35a"). InnerVolumeSpecName "kube-api-access-ltlrw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:54:26 crc kubenswrapper[4826]: I0131 07:54:26.479129 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2d75005-df86-4e3e-aeb4-d8d98976f35a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f2d75005-df86-4e3e-aeb4-d8d98976f35a" (UID: "f2d75005-df86-4e3e-aeb4-d8d98976f35a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:54:26 crc kubenswrapper[4826]: I0131 07:54:26.554866 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltlrw\" (UniqueName: \"kubernetes.io/projected/f2d75005-df86-4e3e-aeb4-d8d98976f35a-kube-api-access-ltlrw\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:26 crc kubenswrapper[4826]: I0131 07:54:26.554899 4826 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2d75005-df86-4e3e-aeb4-d8d98976f35a-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:26 crc kubenswrapper[4826]: I0131 07:54:26.554908 4826 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2d75005-df86-4e3e-aeb4-d8d98976f35a-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:26 crc kubenswrapper[4826]: I0131 07:54:26.554916 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2d75005-df86-4e3e-aeb4-d8d98976f35a-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:26 crc kubenswrapper[4826]: I0131 07:54:26.554925 4826 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2d75005-df86-4e3e-aeb4-d8d98976f35a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:26 crc kubenswrapper[4826]: I0131 07:54:26.556216 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2d75005-df86-4e3e-aeb4-d8d98976f35a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2d75005-df86-4e3e-aeb4-d8d98976f35a" (UID: "f2d75005-df86-4e3e-aeb4-d8d98976f35a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:54:26 crc kubenswrapper[4826]: I0131 07:54:26.579488 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2d75005-df86-4e3e-aeb4-d8d98976f35a-config-data" (OuterVolumeSpecName: "config-data") pod "f2d75005-df86-4e3e-aeb4-d8d98976f35a" (UID: "f2d75005-df86-4e3e-aeb4-d8d98976f35a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:54:26 crc kubenswrapper[4826]: I0131 07:54:26.632386 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-68cf7546dd-6tbvn_313879a0-5213-4c68-aea1-ba01d960b862/neutron-httpd/2.log" Jan 31 07:54:26 crc kubenswrapper[4826]: I0131 07:54:26.632794 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-68cf7546dd-6tbvn" Jan 31 07:54:26 crc kubenswrapper[4826]: I0131 07:54:26.656273 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2d75005-df86-4e3e-aeb4-d8d98976f35a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:26 crc kubenswrapper[4826]: I0131 07:54:26.656306 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2d75005-df86-4e3e-aeb4-d8d98976f35a-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:26 crc kubenswrapper[4826]: I0131 07:54:26.757211 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/313879a0-5213-4c68-aea1-ba01d960b862-ovndb-tls-certs\") pod \"313879a0-5213-4c68-aea1-ba01d960b862\" (UID: \"313879a0-5213-4c68-aea1-ba01d960b862\") " Jan 31 07:54:26 crc kubenswrapper[4826]: I0131 07:54:26.757345 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pspxs\" (UniqueName: \"kubernetes.io/projected/313879a0-5213-4c68-aea1-ba01d960b862-kube-api-access-pspxs\") pod \"313879a0-5213-4c68-aea1-ba01d960b862\" (UID: \"313879a0-5213-4c68-aea1-ba01d960b862\") " Jan 31 07:54:26 crc kubenswrapper[4826]: I0131 07:54:26.757430 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/313879a0-5213-4c68-aea1-ba01d960b862-httpd-config\") pod \"313879a0-5213-4c68-aea1-ba01d960b862\" (UID: \"313879a0-5213-4c68-aea1-ba01d960b862\") " Jan 31 07:54:26 crc kubenswrapper[4826]: I0131 07:54:26.757453 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/313879a0-5213-4c68-aea1-ba01d960b862-config\") pod \"313879a0-5213-4c68-aea1-ba01d960b862\" (UID: \"313879a0-5213-4c68-aea1-ba01d960b862\") " Jan 31 07:54:26 crc kubenswrapper[4826]: I0131 07:54:26.757493 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/313879a0-5213-4c68-aea1-ba01d960b862-combined-ca-bundle\") pod \"313879a0-5213-4c68-aea1-ba01d960b862\" (UID: \"313879a0-5213-4c68-aea1-ba01d960b862\") " Jan 31 07:54:26 crc kubenswrapper[4826]: I0131 07:54:26.760100 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/313879a0-5213-4c68-aea1-ba01d960b862-kube-api-access-pspxs" (OuterVolumeSpecName: "kube-api-access-pspxs") pod "313879a0-5213-4c68-aea1-ba01d960b862" (UID: "313879a0-5213-4c68-aea1-ba01d960b862"). 
InnerVolumeSpecName "kube-api-access-pspxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:54:26 crc kubenswrapper[4826]: I0131 07:54:26.768130 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/313879a0-5213-4c68-aea1-ba01d960b862-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "313879a0-5213-4c68-aea1-ba01d960b862" (UID: "313879a0-5213-4c68-aea1-ba01d960b862"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:54:26 crc kubenswrapper[4826]: I0131 07:54:26.808131 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/313879a0-5213-4c68-aea1-ba01d960b862-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "313879a0-5213-4c68-aea1-ba01d960b862" (UID: "313879a0-5213-4c68-aea1-ba01d960b862"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:54:26 crc kubenswrapper[4826]: I0131 07:54:26.816378 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/313879a0-5213-4c68-aea1-ba01d960b862-config" (OuterVolumeSpecName: "config") pod "313879a0-5213-4c68-aea1-ba01d960b862" (UID: "313879a0-5213-4c68-aea1-ba01d960b862"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:54:26 crc kubenswrapper[4826]: I0131 07:54:26.820091 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/313879a0-5213-4c68-aea1-ba01d960b862-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "313879a0-5213-4c68-aea1-ba01d960b862" (UID: "313879a0-5213-4c68-aea1-ba01d960b862"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:54:26 crc kubenswrapper[4826]: I0131 07:54:26.859634 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/313879a0-5213-4c68-aea1-ba01d960b862-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:26 crc kubenswrapper[4826]: I0131 07:54:26.859677 4826 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/313879a0-5213-4c68-aea1-ba01d960b862-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:26 crc kubenswrapper[4826]: I0131 07:54:26.859690 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pspxs\" (UniqueName: \"kubernetes.io/projected/313879a0-5213-4c68-aea1-ba01d960b862-kube-api-access-pspxs\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:26 crc kubenswrapper[4826]: I0131 07:54:26.859700 4826 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/313879a0-5213-4c68-aea1-ba01d960b862-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:26 crc kubenswrapper[4826]: I0131 07:54:26.859709 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/313879a0-5213-4c68-aea1-ba01d960b862-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:26 crc kubenswrapper[4826]: I0131 07:54:26.924420 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-68cf7546dd-6tbvn_313879a0-5213-4c68-aea1-ba01d960b862/neutron-httpd/2.log" Jan 31 07:54:26 crc kubenswrapper[4826]: I0131 07:54:26.925242 4826 generic.go:334] "Generic (PLEG): container finished" podID="313879a0-5213-4c68-aea1-ba01d960b862" 
containerID="9bd9e8f153ab6c9e67db29717d848102a56a73a6967314d527fb428a5d54c076" exitCode=0 Jan 31 07:54:26 crc kubenswrapper[4826]: I0131 07:54:26.925284 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-68cf7546dd-6tbvn" Jan 31 07:54:26 crc kubenswrapper[4826]: I0131 07:54:26.925301 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68cf7546dd-6tbvn" event={"ID":"313879a0-5213-4c68-aea1-ba01d960b862","Type":"ContainerDied","Data":"9bd9e8f153ab6c9e67db29717d848102a56a73a6967314d527fb428a5d54c076"} Jan 31 07:54:26 crc kubenswrapper[4826]: I0131 07:54:26.925370 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68cf7546dd-6tbvn" event={"ID":"313879a0-5213-4c68-aea1-ba01d960b862","Type":"ContainerDied","Data":"5be7d321f06e02fcdeb4a44f02f4a91c2af986792f910a68c979f796ba053ff0"} Jan 31 07:54:26 crc kubenswrapper[4826]: I0131 07:54:26.925392 4826 scope.go:117] "RemoveContainer" containerID="4db933e4a0a303016c167a965f877ab0c90541e77b1feabac0bb9cab3f66b977" Jan 31 07:54:26 crc kubenswrapper[4826]: I0131 07:54:26.927003 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"4b637c7b-111d-4820-acc3-9cd5bff101e7","Type":"ContainerStarted","Data":"7e0613d9e0022dbeef1f55d685510463613210aca810d339fa97aed1c608c302"} Jan 31 07:54:26 crc kubenswrapper[4826]: I0131 07:54:26.937077 4826 generic.go:334] "Generic (PLEG): container finished" podID="f2d75005-df86-4e3e-aeb4-d8d98976f35a" containerID="1e6c96aa5ee46f7ad34c73b40bfe5e94ffdae3d77a2bfe92142ebccbf9c0e2d5" exitCode=0 Jan 31 07:54:26 crc kubenswrapper[4826]: I0131 07:54:26.937308 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 07:54:26 crc kubenswrapper[4826]: I0131 07:54:26.939272 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2d75005-df86-4e3e-aeb4-d8d98976f35a","Type":"ContainerDied","Data":"1e6c96aa5ee46f7ad34c73b40bfe5e94ffdae3d77a2bfe92142ebccbf9c0e2d5"} Jan 31 07:54:26 crc kubenswrapper[4826]: I0131 07:54:26.939316 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2d75005-df86-4e3e-aeb4-d8d98976f35a","Type":"ContainerDied","Data":"174e92eeacf7db5aa575c0234b9097a30f89eb14334f2a45643d8c852ec0b7d1"} Jan 31 07:54:26 crc kubenswrapper[4826]: I0131 07:54:26.957842 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.821528513 podStartE2EDuration="11.957825356s" podCreationTimestamp="2026-01-31 07:54:15 +0000 UTC" firstStartedPulling="2026-01-31 07:54:15.844805436 +0000 UTC m=+1087.698691785" lastFinishedPulling="2026-01-31 07:54:25.981102269 +0000 UTC m=+1097.834988628" observedRunningTime="2026-01-31 07:54:26.948922245 +0000 UTC m=+1098.802808604" watchObservedRunningTime="2026-01-31 07:54:26.957825356 +0000 UTC m=+1098.811711715" Jan 31 07:54:26 crc kubenswrapper[4826]: I0131 07:54:26.979066 4826 scope.go:117] "RemoveContainer" containerID="9bd9e8f153ab6c9e67db29717d848102a56a73a6967314d527fb428a5d54c076" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.014115 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.020836 4826 scope.go:117] "RemoveContainer" containerID="4db933e4a0a303016c167a965f877ab0c90541e77b1feabac0bb9cab3f66b977" Jan 31 07:54:27 crc 
kubenswrapper[4826]: E0131 07:54:27.021315 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4db933e4a0a303016c167a965f877ab0c90541e77b1feabac0bb9cab3f66b977\": container with ID starting with 4db933e4a0a303016c167a965f877ab0c90541e77b1feabac0bb9cab3f66b977 not found: ID does not exist" containerID="4db933e4a0a303016c167a965f877ab0c90541e77b1feabac0bb9cab3f66b977" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.021356 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4db933e4a0a303016c167a965f877ab0c90541e77b1feabac0bb9cab3f66b977"} err="failed to get container status \"4db933e4a0a303016c167a965f877ab0c90541e77b1feabac0bb9cab3f66b977\": rpc error: code = NotFound desc = could not find container \"4db933e4a0a303016c167a965f877ab0c90541e77b1feabac0bb9cab3f66b977\": container with ID starting with 4db933e4a0a303016c167a965f877ab0c90541e77b1feabac0bb9cab3f66b977 not found: ID does not exist" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.021384 4826 scope.go:117] "RemoveContainer" containerID="9bd9e8f153ab6c9e67db29717d848102a56a73a6967314d527fb428a5d54c076" Jan 31 07:54:27 crc kubenswrapper[4826]: E0131 07:54:27.021710 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bd9e8f153ab6c9e67db29717d848102a56a73a6967314d527fb428a5d54c076\": container with ID starting with 9bd9e8f153ab6c9e67db29717d848102a56a73a6967314d527fb428a5d54c076 not found: ID does not exist" containerID="9bd9e8f153ab6c9e67db29717d848102a56a73a6967314d527fb428a5d54c076" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.021732 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bd9e8f153ab6c9e67db29717d848102a56a73a6967314d527fb428a5d54c076"} err="failed to get container status \"9bd9e8f153ab6c9e67db29717d848102a56a73a6967314d527fb428a5d54c076\": rpc error: code = NotFound desc = could not find container \"9bd9e8f153ab6c9e67db29717d848102a56a73a6967314d527fb428a5d54c076\": container with ID starting with 9bd9e8f153ab6c9e67db29717d848102a56a73a6967314d527fb428a5d54c076 not found: ID does not exist" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.021745 4826 scope.go:117] "RemoveContainer" containerID="a37f21593adee6b6495319bf6750fe35cb833d6e8f580ddc1cbfe1a14ea0be50" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.033859 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.042047 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-68cf7546dd-6tbvn"] Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.044138 4826 scope.go:117] "RemoveContainer" containerID="73d71736ed29b84242eede086b224cdfd09bd58f6626b4f9c43354e61314f8ac" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.057087 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-68cf7546dd-6tbvn"] Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.062085 4826 scope.go:117] "RemoveContainer" containerID="1e6c96aa5ee46f7ad34c73b40bfe5e94ffdae3d77a2bfe92142ebccbf9c0e2d5" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.065447 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:54:27 crc kubenswrapper[4826]: E0131 07:54:27.065750 4826 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="313879a0-5213-4c68-aea1-ba01d960b862" containerName="neutron-httpd" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.065768 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="313879a0-5213-4c68-aea1-ba01d960b862" containerName="neutron-httpd" Jan 31 07:54:27 crc kubenswrapper[4826]: E0131 07:54:27.065782 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="313879a0-5213-4c68-aea1-ba01d960b862" containerName="neutron-httpd" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.065789 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="313879a0-5213-4c68-aea1-ba01d960b862" containerName="neutron-httpd" Jan 31 07:54:27 crc kubenswrapper[4826]: E0131 07:54:27.065804 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2d75005-df86-4e3e-aeb4-d8d98976f35a" containerName="sg-core" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.065810 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2d75005-df86-4e3e-aeb4-d8d98976f35a" containerName="sg-core" Jan 31 07:54:27 crc kubenswrapper[4826]: E0131 07:54:27.065822 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2d75005-df86-4e3e-aeb4-d8d98976f35a" containerName="ceilometer-central-agent" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.065828 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2d75005-df86-4e3e-aeb4-d8d98976f35a" containerName="ceilometer-central-agent" Jan 31 07:54:27 crc kubenswrapper[4826]: E0131 07:54:27.065836 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2d75005-df86-4e3e-aeb4-d8d98976f35a" containerName="ceilometer-notification-agent" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.065842 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2d75005-df86-4e3e-aeb4-d8d98976f35a" containerName="ceilometer-notification-agent" Jan 31 07:54:27 crc kubenswrapper[4826]: E0131 07:54:27.065853 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="313879a0-5213-4c68-aea1-ba01d960b862" containerName="neutron-httpd" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.065859 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="313879a0-5213-4c68-aea1-ba01d960b862" containerName="neutron-httpd" Jan 31 07:54:27 crc kubenswrapper[4826]: E0131 07:54:27.065873 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="313879a0-5213-4c68-aea1-ba01d960b862" containerName="neutron-api" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.065881 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="313879a0-5213-4c68-aea1-ba01d960b862" containerName="neutron-api" Jan 31 07:54:27 crc kubenswrapper[4826]: E0131 07:54:27.065897 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2d75005-df86-4e3e-aeb4-d8d98976f35a" containerName="proxy-httpd" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.065904 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2d75005-df86-4e3e-aeb4-d8d98976f35a" containerName="proxy-httpd" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.066086 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2d75005-df86-4e3e-aeb4-d8d98976f35a" containerName="proxy-httpd" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.066101 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2d75005-df86-4e3e-aeb4-d8d98976f35a" containerName="ceilometer-central-agent" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.066114 4826 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="313879a0-5213-4c68-aea1-ba01d960b862" containerName="neutron-httpd" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.066126 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="313879a0-5213-4c68-aea1-ba01d960b862" containerName="neutron-api" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.066140 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2d75005-df86-4e3e-aeb4-d8d98976f35a" containerName="sg-core" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.066147 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2d75005-df86-4e3e-aeb4-d8d98976f35a" containerName="ceilometer-notification-agent" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.066161 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="313879a0-5213-4c68-aea1-ba01d960b862" containerName="neutron-httpd" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.066493 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="313879a0-5213-4c68-aea1-ba01d960b862" containerName="neutron-httpd" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.067681 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.070315 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.072395 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.077087 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.108602 4826 scope.go:117] "RemoveContainer" containerID="1c2032cd17b33f5e9e1acff36aaab4ffbcd111e9d8500c3e0a24c2838b2350c6" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.133187 4826 scope.go:117] "RemoveContainer" containerID="a37f21593adee6b6495319bf6750fe35cb833d6e8f580ddc1cbfe1a14ea0be50" Jan 31 07:54:27 crc kubenswrapper[4826]: E0131 07:54:27.133677 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a37f21593adee6b6495319bf6750fe35cb833d6e8f580ddc1cbfe1a14ea0be50\": container with ID starting with a37f21593adee6b6495319bf6750fe35cb833d6e8f580ddc1cbfe1a14ea0be50 not found: ID does not exist" containerID="a37f21593adee6b6495319bf6750fe35cb833d6e8f580ddc1cbfe1a14ea0be50" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.133723 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a37f21593adee6b6495319bf6750fe35cb833d6e8f580ddc1cbfe1a14ea0be50"} err="failed to get container status \"a37f21593adee6b6495319bf6750fe35cb833d6e8f580ddc1cbfe1a14ea0be50\": rpc error: code = NotFound desc = could not find container \"a37f21593adee6b6495319bf6750fe35cb833d6e8f580ddc1cbfe1a14ea0be50\": container with ID starting with a37f21593adee6b6495319bf6750fe35cb833d6e8f580ddc1cbfe1a14ea0be50 not found: ID does not exist" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.133753 4826 scope.go:117] "RemoveContainer" containerID="73d71736ed29b84242eede086b224cdfd09bd58f6626b4f9c43354e61314f8ac" Jan 31 07:54:27 crc kubenswrapper[4826]: E0131 07:54:27.134239 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"73d71736ed29b84242eede086b224cdfd09bd58f6626b4f9c43354e61314f8ac\": container with ID starting with 73d71736ed29b84242eede086b224cdfd09bd58f6626b4f9c43354e61314f8ac not found: ID does not exist" containerID="73d71736ed29b84242eede086b224cdfd09bd58f6626b4f9c43354e61314f8ac" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.134312 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73d71736ed29b84242eede086b224cdfd09bd58f6626b4f9c43354e61314f8ac"} err="failed to get container status \"73d71736ed29b84242eede086b224cdfd09bd58f6626b4f9c43354e61314f8ac\": rpc error: code = NotFound desc = could not find container \"73d71736ed29b84242eede086b224cdfd09bd58f6626b4f9c43354e61314f8ac\": container with ID starting with 73d71736ed29b84242eede086b224cdfd09bd58f6626b4f9c43354e61314f8ac not found: ID does not exist" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.134342 4826 scope.go:117] "RemoveContainer" containerID="1e6c96aa5ee46f7ad34c73b40bfe5e94ffdae3d77a2bfe92142ebccbf9c0e2d5" Jan 31 07:54:27 crc kubenswrapper[4826]: E0131 07:54:27.134571 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e6c96aa5ee46f7ad34c73b40bfe5e94ffdae3d77a2bfe92142ebccbf9c0e2d5\": container with ID starting with 1e6c96aa5ee46f7ad34c73b40bfe5e94ffdae3d77a2bfe92142ebccbf9c0e2d5 not found: ID does not exist" containerID="1e6c96aa5ee46f7ad34c73b40bfe5e94ffdae3d77a2bfe92142ebccbf9c0e2d5" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.134627 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e6c96aa5ee46f7ad34c73b40bfe5e94ffdae3d77a2bfe92142ebccbf9c0e2d5"} err="failed to get container status \"1e6c96aa5ee46f7ad34c73b40bfe5e94ffdae3d77a2bfe92142ebccbf9c0e2d5\": rpc error: code = NotFound desc = could not find container \"1e6c96aa5ee46f7ad34c73b40bfe5e94ffdae3d77a2bfe92142ebccbf9c0e2d5\": container with ID starting with 1e6c96aa5ee46f7ad34c73b40bfe5e94ffdae3d77a2bfe92142ebccbf9c0e2d5 not found: ID does not exist" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.134645 4826 scope.go:117] "RemoveContainer" containerID="1c2032cd17b33f5e9e1acff36aaab4ffbcd111e9d8500c3e0a24c2838b2350c6" Jan 31 07:54:27 crc kubenswrapper[4826]: E0131 07:54:27.134921 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c2032cd17b33f5e9e1acff36aaab4ffbcd111e9d8500c3e0a24c2838b2350c6\": container with ID starting with 1c2032cd17b33f5e9e1acff36aaab4ffbcd111e9d8500c3e0a24c2838b2350c6 not found: ID does not exist" containerID="1c2032cd17b33f5e9e1acff36aaab4ffbcd111e9d8500c3e0a24c2838b2350c6" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.134998 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c2032cd17b33f5e9e1acff36aaab4ffbcd111e9d8500c3e0a24c2838b2350c6"} err="failed to get container status \"1c2032cd17b33f5e9e1acff36aaab4ffbcd111e9d8500c3e0a24c2838b2350c6\": rpc error: code = NotFound desc = could not find container \"1c2032cd17b33f5e9e1acff36aaab4ffbcd111e9d8500c3e0a24c2838b2350c6\": container with ID starting with 1c2032cd17b33f5e9e1acff36aaab4ffbcd111e9d8500c3e0a24c2838b2350c6 not found: ID does not exist" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.166018 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/0591b228-aa43-46bc-ba04-0b5b6ddd4bbc-run-httpd\") pod \"ceilometer-0\" (UID: \"0591b228-aa43-46bc-ba04-0b5b6ddd4bbc\") " pod="openstack/ceilometer-0" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.166105 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0591b228-aa43-46bc-ba04-0b5b6ddd4bbc-config-data\") pod \"ceilometer-0\" (UID: \"0591b228-aa43-46bc-ba04-0b5b6ddd4bbc\") " pod="openstack/ceilometer-0" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.166125 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0591b228-aa43-46bc-ba04-0b5b6ddd4bbc-scripts\") pod \"ceilometer-0\" (UID: \"0591b228-aa43-46bc-ba04-0b5b6ddd4bbc\") " pod="openstack/ceilometer-0" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.166170 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0591b228-aa43-46bc-ba04-0b5b6ddd4bbc-log-httpd\") pod \"ceilometer-0\" (UID: \"0591b228-aa43-46bc-ba04-0b5b6ddd4bbc\") " pod="openstack/ceilometer-0" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.166208 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0591b228-aa43-46bc-ba04-0b5b6ddd4bbc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0591b228-aa43-46bc-ba04-0b5b6ddd4bbc\") " pod="openstack/ceilometer-0" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.166266 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0591b228-aa43-46bc-ba04-0b5b6ddd4bbc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0591b228-aa43-46bc-ba04-0b5b6ddd4bbc\") " pod="openstack/ceilometer-0" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.166315 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbz72\" (UniqueName: \"kubernetes.io/projected/0591b228-aa43-46bc-ba04-0b5b6ddd4bbc-kube-api-access-pbz72\") pod \"ceilometer-0\" (UID: \"0591b228-aa43-46bc-ba04-0b5b6ddd4bbc\") " pod="openstack/ceilometer-0" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.267950 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0591b228-aa43-46bc-ba04-0b5b6ddd4bbc-log-httpd\") pod \"ceilometer-0\" (UID: \"0591b228-aa43-46bc-ba04-0b5b6ddd4bbc\") " pod="openstack/ceilometer-0" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.268235 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0591b228-aa43-46bc-ba04-0b5b6ddd4bbc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0591b228-aa43-46bc-ba04-0b5b6ddd4bbc\") " pod="openstack/ceilometer-0" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.268282 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0591b228-aa43-46bc-ba04-0b5b6ddd4bbc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0591b228-aa43-46bc-ba04-0b5b6ddd4bbc\") " pod="openstack/ceilometer-0" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.268317 4826 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbz72\" (UniqueName: \"kubernetes.io/projected/0591b228-aa43-46bc-ba04-0b5b6ddd4bbc-kube-api-access-pbz72\") pod \"ceilometer-0\" (UID: \"0591b228-aa43-46bc-ba04-0b5b6ddd4bbc\") " pod="openstack/ceilometer-0" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.268365 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0591b228-aa43-46bc-ba04-0b5b6ddd4bbc-run-httpd\") pod \"ceilometer-0\" (UID: \"0591b228-aa43-46bc-ba04-0b5b6ddd4bbc\") " pod="openstack/ceilometer-0" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.268406 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0591b228-aa43-46bc-ba04-0b5b6ddd4bbc-config-data\") pod \"ceilometer-0\" (UID: \"0591b228-aa43-46bc-ba04-0b5b6ddd4bbc\") " pod="openstack/ceilometer-0" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.268423 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0591b228-aa43-46bc-ba04-0b5b6ddd4bbc-scripts\") pod \"ceilometer-0\" (UID: \"0591b228-aa43-46bc-ba04-0b5b6ddd4bbc\") " pod="openstack/ceilometer-0" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.269596 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0591b228-aa43-46bc-ba04-0b5b6ddd4bbc-log-httpd\") pod \"ceilometer-0\" (UID: \"0591b228-aa43-46bc-ba04-0b5b6ddd4bbc\") " pod="openstack/ceilometer-0" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.270005 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0591b228-aa43-46bc-ba04-0b5b6ddd4bbc-run-httpd\") pod \"ceilometer-0\" (UID: \"0591b228-aa43-46bc-ba04-0b5b6ddd4bbc\") " pod="openstack/ceilometer-0" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.275293 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0591b228-aa43-46bc-ba04-0b5b6ddd4bbc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0591b228-aa43-46bc-ba04-0b5b6ddd4bbc\") " pod="openstack/ceilometer-0" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.275642 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0591b228-aa43-46bc-ba04-0b5b6ddd4bbc-scripts\") pod \"ceilometer-0\" (UID: \"0591b228-aa43-46bc-ba04-0b5b6ddd4bbc\") " pod="openstack/ceilometer-0" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.277860 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0591b228-aa43-46bc-ba04-0b5b6ddd4bbc-config-data\") pod \"ceilometer-0\" (UID: \"0591b228-aa43-46bc-ba04-0b5b6ddd4bbc\") " pod="openstack/ceilometer-0" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.278148 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0591b228-aa43-46bc-ba04-0b5b6ddd4bbc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0591b228-aa43-46bc-ba04-0b5b6ddd4bbc\") " pod="openstack/ceilometer-0" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.290934 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-pbz72\" (UniqueName: \"kubernetes.io/projected/0591b228-aa43-46bc-ba04-0b5b6ddd4bbc-kube-api-access-pbz72\") pod \"ceilometer-0\" (UID: \"0591b228-aa43-46bc-ba04-0b5b6ddd4bbc\") " pod="openstack/ceilometer-0" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.342797 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-9tmn2" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.386555 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.471101 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f870e24-0e35-4ee6-805b-f81617554dc2-combined-ca-bundle\") pod \"5f870e24-0e35-4ee6-805b-f81617554dc2\" (UID: \"5f870e24-0e35-4ee6-805b-f81617554dc2\") " Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.471211 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5f870e24-0e35-4ee6-805b-f81617554dc2-db-sync-config-data\") pod \"5f870e24-0e35-4ee6-805b-f81617554dc2\" (UID: \"5f870e24-0e35-4ee6-805b-f81617554dc2\") " Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.471274 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5f870e24-0e35-4ee6-805b-f81617554dc2-etc-machine-id\") pod \"5f870e24-0e35-4ee6-805b-f81617554dc2\" (UID: \"5f870e24-0e35-4ee6-805b-f81617554dc2\") " Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.471353 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f870e24-0e35-4ee6-805b-f81617554dc2-config-data\") pod \"5f870e24-0e35-4ee6-805b-f81617554dc2\" (UID: \"5f870e24-0e35-4ee6-805b-f81617554dc2\") " Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.471431 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f870e24-0e35-4ee6-805b-f81617554dc2-scripts\") pod \"5f870e24-0e35-4ee6-805b-f81617554dc2\" (UID: \"5f870e24-0e35-4ee6-805b-f81617554dc2\") " Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.471500 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55cdz\" (UniqueName: \"kubernetes.io/projected/5f870e24-0e35-4ee6-805b-f81617554dc2-kube-api-access-55cdz\") pod \"5f870e24-0e35-4ee6-805b-f81617554dc2\" (UID: \"5f870e24-0e35-4ee6-805b-f81617554dc2\") " Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.472761 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f870e24-0e35-4ee6-805b-f81617554dc2-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "5f870e24-0e35-4ee6-805b-f81617554dc2" (UID: "5f870e24-0e35-4ee6-805b-f81617554dc2"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.512047 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f870e24-0e35-4ee6-805b-f81617554dc2-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "5f870e24-0e35-4ee6-805b-f81617554dc2" (UID: "5f870e24-0e35-4ee6-805b-f81617554dc2"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.512180 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f870e24-0e35-4ee6-805b-f81617554dc2-kube-api-access-55cdz" (OuterVolumeSpecName: "kube-api-access-55cdz") pod "5f870e24-0e35-4ee6-805b-f81617554dc2" (UID: "5f870e24-0e35-4ee6-805b-f81617554dc2"). InnerVolumeSpecName "kube-api-access-55cdz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.515347 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f870e24-0e35-4ee6-805b-f81617554dc2-scripts" (OuterVolumeSpecName: "scripts") pod "5f870e24-0e35-4ee6-805b-f81617554dc2" (UID: "5f870e24-0e35-4ee6-805b-f81617554dc2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.522099 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f870e24-0e35-4ee6-805b-f81617554dc2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5f870e24-0e35-4ee6-805b-f81617554dc2" (UID: "5f870e24-0e35-4ee6-805b-f81617554dc2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.538855 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f870e24-0e35-4ee6-805b-f81617554dc2-config-data" (OuterVolumeSpecName: "config-data") pod "5f870e24-0e35-4ee6-805b-f81617554dc2" (UID: "5f870e24-0e35-4ee6-805b-f81617554dc2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.578445 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f870e24-0e35-4ee6-805b-f81617554dc2-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.578494 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55cdz\" (UniqueName: \"kubernetes.io/projected/5f870e24-0e35-4ee6-805b-f81617554dc2-kube-api-access-55cdz\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.578507 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f870e24-0e35-4ee6-805b-f81617554dc2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.578515 4826 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5f870e24-0e35-4ee6-805b-f81617554dc2-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.578523 4826 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5f870e24-0e35-4ee6-805b-f81617554dc2-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.578556 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f870e24-0e35-4ee6-805b-f81617554dc2-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.855573 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ceilometer-0"] Jan 31 07:54:27 crc kubenswrapper[4826]: W0131 07:54:27.870686 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0591b228_aa43_46bc_ba04_0b5b6ddd4bbc.slice/crio-04af8a3095dffda69e9ba31ef7f4b6da7d1adbc026225ff57faa5732e03d89ef WatchSource:0}: Error finding container 04af8a3095dffda69e9ba31ef7f4b6da7d1adbc026225ff57faa5732e03d89ef: Status 404 returned error can't find the container with id 04af8a3095dffda69e9ba31ef7f4b6da7d1adbc026225ff57faa5732e03d89ef Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.954552 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0591b228-aa43-46bc-ba04-0b5b6ddd4bbc","Type":"ContainerStarted","Data":"04af8a3095dffda69e9ba31ef7f4b6da7d1adbc026225ff57faa5732e03d89ef"} Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.958436 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-9tmn2" event={"ID":"5f870e24-0e35-4ee6-805b-f81617554dc2","Type":"ContainerDied","Data":"79e9122fbb9950ca807b58594ba6afea87e8bed2970ae1d926f339f44793c582"} Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.958472 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79e9122fbb9950ca807b58594ba6afea87e8bed2970ae1d926f339f44793c582" Jan 31 07:54:27 crc kubenswrapper[4826]: I0131 07:54:27.958532 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-9tmn2" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.147507 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 07:54:28 crc kubenswrapper[4826]: E0131 07:54:28.148281 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f870e24-0e35-4ee6-805b-f81617554dc2" containerName="cinder-db-sync" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.148300 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f870e24-0e35-4ee6-805b-f81617554dc2" containerName="cinder-db-sync" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.148470 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f870e24-0e35-4ee6-805b-f81617554dc2" containerName="cinder-db-sync" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.149439 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.154147 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.154223 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.154392 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-cpp27" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.154435 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.179044 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.223423 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-w9dvm"] Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.224735 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-w9dvm" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.232867 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-w9dvm"] Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.302548 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea3eca99-d298-4fa1-9c82-fb58164ff654-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ea3eca99-d298-4fa1-9c82-fb58164ff654\") " pod="openstack/cinder-scheduler-0" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.302604 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea3eca99-d298-4fa1-9c82-fb58164ff654-config-data\") pod \"cinder-scheduler-0\" (UID: \"ea3eca99-d298-4fa1-9c82-fb58164ff654\") " pod="openstack/cinder-scheduler-0" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.302670 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea3eca99-d298-4fa1-9c82-fb58164ff654-scripts\") pod \"cinder-scheduler-0\" (UID: \"ea3eca99-d298-4fa1-9c82-fb58164ff654\") " pod="openstack/cinder-scheduler-0" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.302724 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfx4m\" (UniqueName: \"kubernetes.io/projected/ea3eca99-d298-4fa1-9c82-fb58164ff654-kube-api-access-zfx4m\") pod \"cinder-scheduler-0\" (UID: \"ea3eca99-d298-4fa1-9c82-fb58164ff654\") " pod="openstack/cinder-scheduler-0" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.302758 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ea3eca99-d298-4fa1-9c82-fb58164ff654-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ea3eca99-d298-4fa1-9c82-fb58164ff654\") " pod="openstack/cinder-scheduler-0" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.302840 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/ea3eca99-d298-4fa1-9c82-fb58164ff654-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ea3eca99-d298-4fa1-9c82-fb58164ff654\") " pod="openstack/cinder-scheduler-0" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.404709 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5854fd2e-9542-4dda-80f2-8ebf777c620b-config\") pod \"dnsmasq-dns-58db5546cc-w9dvm\" (UID: \"5854fd2e-9542-4dda-80f2-8ebf777c620b\") " pod="openstack/dnsmasq-dns-58db5546cc-w9dvm" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.404747 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5854fd2e-9542-4dda-80f2-8ebf777c620b-ovsdbserver-nb\") pod \"dnsmasq-dns-58db5546cc-w9dvm\" (UID: \"5854fd2e-9542-4dda-80f2-8ebf777c620b\") " pod="openstack/dnsmasq-dns-58db5546cc-w9dvm" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.404771 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ea3eca99-d298-4fa1-9c82-fb58164ff654-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ea3eca99-d298-4fa1-9c82-fb58164ff654\") " pod="openstack/cinder-scheduler-0" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.404797 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwvtt\" (UniqueName: \"kubernetes.io/projected/5854fd2e-9542-4dda-80f2-8ebf777c620b-kube-api-access-mwvtt\") pod \"dnsmasq-dns-58db5546cc-w9dvm\" (UID: \"5854fd2e-9542-4dda-80f2-8ebf777c620b\") " pod="openstack/dnsmasq-dns-58db5546cc-w9dvm" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.404834 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5854fd2e-9542-4dda-80f2-8ebf777c620b-dns-svc\") pod \"dnsmasq-dns-58db5546cc-w9dvm\" (UID: \"5854fd2e-9542-4dda-80f2-8ebf777c620b\") " pod="openstack/dnsmasq-dns-58db5546cc-w9dvm" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.404865 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea3eca99-d298-4fa1-9c82-fb58164ff654-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ea3eca99-d298-4fa1-9c82-fb58164ff654\") " pod="openstack/cinder-scheduler-0" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.404883 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea3eca99-d298-4fa1-9c82-fb58164ff654-config-data\") pod \"cinder-scheduler-0\" (UID: \"ea3eca99-d298-4fa1-9c82-fb58164ff654\") " pod="openstack/cinder-scheduler-0" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.404919 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea3eca99-d298-4fa1-9c82-fb58164ff654-scripts\") pod \"cinder-scheduler-0\" (UID: \"ea3eca99-d298-4fa1-9c82-fb58164ff654\") " pod="openstack/cinder-scheduler-0" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.404940 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5854fd2e-9542-4dda-80f2-8ebf777c620b-ovsdbserver-sb\") pod 
\"dnsmasq-dns-58db5546cc-w9dvm\" (UID: \"5854fd2e-9542-4dda-80f2-8ebf777c620b\") " pod="openstack/dnsmasq-dns-58db5546cc-w9dvm" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.404978 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfx4m\" (UniqueName: \"kubernetes.io/projected/ea3eca99-d298-4fa1-9c82-fb58164ff654-kube-api-access-zfx4m\") pod \"cinder-scheduler-0\" (UID: \"ea3eca99-d298-4fa1-9c82-fb58164ff654\") " pod="openstack/cinder-scheduler-0" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.405001 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ea3eca99-d298-4fa1-9c82-fb58164ff654-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ea3eca99-d298-4fa1-9c82-fb58164ff654\") " pod="openstack/cinder-scheduler-0" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.405541 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ea3eca99-d298-4fa1-9c82-fb58164ff654-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ea3eca99-d298-4fa1-9c82-fb58164ff654\") " pod="openstack/cinder-scheduler-0" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.410350 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea3eca99-d298-4fa1-9c82-fb58164ff654-scripts\") pod \"cinder-scheduler-0\" (UID: \"ea3eca99-d298-4fa1-9c82-fb58164ff654\") " pod="openstack/cinder-scheduler-0" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.412312 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ea3eca99-d298-4fa1-9c82-fb58164ff654-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ea3eca99-d298-4fa1-9c82-fb58164ff654\") " pod="openstack/cinder-scheduler-0" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.415776 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea3eca99-d298-4fa1-9c82-fb58164ff654-config-data\") pod \"cinder-scheduler-0\" (UID: \"ea3eca99-d298-4fa1-9c82-fb58164ff654\") " pod="openstack/cinder-scheduler-0" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.439750 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea3eca99-d298-4fa1-9c82-fb58164ff654-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ea3eca99-d298-4fa1-9c82-fb58164ff654\") " pod="openstack/cinder-scheduler-0" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.444670 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfx4m\" (UniqueName: \"kubernetes.io/projected/ea3eca99-d298-4fa1-9c82-fb58164ff654-kube-api-access-zfx4m\") pod \"cinder-scheduler-0\" (UID: \"ea3eca99-d298-4fa1-9c82-fb58164ff654\") " pod="openstack/cinder-scheduler-0" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.477791 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.479428 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.480582 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.483556 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.505960 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.506912 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5854fd2e-9542-4dda-80f2-8ebf777c620b-ovsdbserver-sb\") pod \"dnsmasq-dns-58db5546cc-w9dvm\" (UID: \"5854fd2e-9542-4dda-80f2-8ebf777c620b\") " pod="openstack/dnsmasq-dns-58db5546cc-w9dvm" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.508214 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5854fd2e-9542-4dda-80f2-8ebf777c620b-ovsdbserver-sb\") pod \"dnsmasq-dns-58db5546cc-w9dvm\" (UID: \"5854fd2e-9542-4dda-80f2-8ebf777c620b\") " pod="openstack/dnsmasq-dns-58db5546cc-w9dvm" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.507359 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5854fd2e-9542-4dda-80f2-8ebf777c620b-config\") pod \"dnsmasq-dns-58db5546cc-w9dvm\" (UID: \"5854fd2e-9542-4dda-80f2-8ebf777c620b\") " pod="openstack/dnsmasq-dns-58db5546cc-w9dvm" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.508269 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5854fd2e-9542-4dda-80f2-8ebf777c620b-ovsdbserver-nb\") pod \"dnsmasq-dns-58db5546cc-w9dvm\" (UID: \"5854fd2e-9542-4dda-80f2-8ebf777c620b\") " pod="openstack/dnsmasq-dns-58db5546cc-w9dvm" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.508308 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwvtt\" (UniqueName: \"kubernetes.io/projected/5854fd2e-9542-4dda-80f2-8ebf777c620b-kube-api-access-mwvtt\") pod \"dnsmasq-dns-58db5546cc-w9dvm\" (UID: \"5854fd2e-9542-4dda-80f2-8ebf777c620b\") " pod="openstack/dnsmasq-dns-58db5546cc-w9dvm" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.508352 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5854fd2e-9542-4dda-80f2-8ebf777c620b-dns-svc\") pod \"dnsmasq-dns-58db5546cc-w9dvm\" (UID: \"5854fd2e-9542-4dda-80f2-8ebf777c620b\") " pod="openstack/dnsmasq-dns-58db5546cc-w9dvm" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.508901 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5854fd2e-9542-4dda-80f2-8ebf777c620b-dns-svc\") pod \"dnsmasq-dns-58db5546cc-w9dvm\" (UID: \"5854fd2e-9542-4dda-80f2-8ebf777c620b\") " pod="openstack/dnsmasq-dns-58db5546cc-w9dvm" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.509428 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5854fd2e-9542-4dda-80f2-8ebf777c620b-config\") pod \"dnsmasq-dns-58db5546cc-w9dvm\" (UID: \"5854fd2e-9542-4dda-80f2-8ebf777c620b\") " pod="openstack/dnsmasq-dns-58db5546cc-w9dvm" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.509921 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5854fd2e-9542-4dda-80f2-8ebf777c620b-ovsdbserver-nb\") pod \"dnsmasq-dns-58db5546cc-w9dvm\" (UID: \"5854fd2e-9542-4dda-80f2-8ebf777c620b\") " pod="openstack/dnsmasq-dns-58db5546cc-w9dvm" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.528250 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwvtt\" (UniqueName: \"kubernetes.io/projected/5854fd2e-9542-4dda-80f2-8ebf777c620b-kube-api-access-mwvtt\") pod \"dnsmasq-dns-58db5546cc-w9dvm\" (UID: \"5854fd2e-9542-4dda-80f2-8ebf777c620b\") " pod="openstack/dnsmasq-dns-58db5546cc-w9dvm" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.562392 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-w9dvm" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.610000 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxq5d\" (UniqueName: \"kubernetes.io/projected/7235dcfc-73e2-4205-a092-4258b08eabe0-kube-api-access-dxq5d\") pod \"cinder-api-0\" (UID: \"7235dcfc-73e2-4205-a092-4258b08eabe0\") " pod="openstack/cinder-api-0" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.610079 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7235dcfc-73e2-4205-a092-4258b08eabe0-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7235dcfc-73e2-4205-a092-4258b08eabe0\") " pod="openstack/cinder-api-0" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.610189 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7235dcfc-73e2-4205-a092-4258b08eabe0-config-data\") pod \"cinder-api-0\" (UID: \"7235dcfc-73e2-4205-a092-4258b08eabe0\") " pod="openstack/cinder-api-0" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.610237 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7235dcfc-73e2-4205-a092-4258b08eabe0-logs\") pod \"cinder-api-0\" (UID: \"7235dcfc-73e2-4205-a092-4258b08eabe0\") " pod="openstack/cinder-api-0" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.610258 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7235dcfc-73e2-4205-a092-4258b08eabe0-config-data-custom\") pod \"cinder-api-0\" (UID: \"7235dcfc-73e2-4205-a092-4258b08eabe0\") " pod="openstack/cinder-api-0" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.610318 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7235dcfc-73e2-4205-a092-4258b08eabe0-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7235dcfc-73e2-4205-a092-4258b08eabe0\") " pod="openstack/cinder-api-0" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.610454 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7235dcfc-73e2-4205-a092-4258b08eabe0-scripts\") pod \"cinder-api-0\" (UID: \"7235dcfc-73e2-4205-a092-4258b08eabe0\") " pod="openstack/cinder-api-0" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.712340 4826 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7235dcfc-73e2-4205-a092-4258b08eabe0-scripts\") pod \"cinder-api-0\" (UID: \"7235dcfc-73e2-4205-a092-4258b08eabe0\") " pod="openstack/cinder-api-0" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.712412 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxq5d\" (UniqueName: \"kubernetes.io/projected/7235dcfc-73e2-4205-a092-4258b08eabe0-kube-api-access-dxq5d\") pod \"cinder-api-0\" (UID: \"7235dcfc-73e2-4205-a092-4258b08eabe0\") " pod="openstack/cinder-api-0" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.712455 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7235dcfc-73e2-4205-a092-4258b08eabe0-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7235dcfc-73e2-4205-a092-4258b08eabe0\") " pod="openstack/cinder-api-0" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.712512 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7235dcfc-73e2-4205-a092-4258b08eabe0-config-data\") pod \"cinder-api-0\" (UID: \"7235dcfc-73e2-4205-a092-4258b08eabe0\") " pod="openstack/cinder-api-0" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.712537 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7235dcfc-73e2-4205-a092-4258b08eabe0-logs\") pod \"cinder-api-0\" (UID: \"7235dcfc-73e2-4205-a092-4258b08eabe0\") " pod="openstack/cinder-api-0" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.712557 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7235dcfc-73e2-4205-a092-4258b08eabe0-config-data-custom\") pod \"cinder-api-0\" (UID: \"7235dcfc-73e2-4205-a092-4258b08eabe0\") " pod="openstack/cinder-api-0" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.712595 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7235dcfc-73e2-4205-a092-4258b08eabe0-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7235dcfc-73e2-4205-a092-4258b08eabe0\") " pod="openstack/cinder-api-0" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.712744 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7235dcfc-73e2-4205-a092-4258b08eabe0-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7235dcfc-73e2-4205-a092-4258b08eabe0\") " pod="openstack/cinder-api-0" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.713646 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7235dcfc-73e2-4205-a092-4258b08eabe0-logs\") pod \"cinder-api-0\" (UID: \"7235dcfc-73e2-4205-a092-4258b08eabe0\") " pod="openstack/cinder-api-0" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.720228 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7235dcfc-73e2-4205-a092-4258b08eabe0-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7235dcfc-73e2-4205-a092-4258b08eabe0\") " pod="openstack/cinder-api-0" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.721610 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/7235dcfc-73e2-4205-a092-4258b08eabe0-scripts\") pod \"cinder-api-0\" (UID: \"7235dcfc-73e2-4205-a092-4258b08eabe0\") " pod="openstack/cinder-api-0" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.722205 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7235dcfc-73e2-4205-a092-4258b08eabe0-config-data-custom\") pod \"cinder-api-0\" (UID: \"7235dcfc-73e2-4205-a092-4258b08eabe0\") " pod="openstack/cinder-api-0" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.723627 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7235dcfc-73e2-4205-a092-4258b08eabe0-config-data\") pod \"cinder-api-0\" (UID: \"7235dcfc-73e2-4205-a092-4258b08eabe0\") " pod="openstack/cinder-api-0" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.733680 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxq5d\" (UniqueName: \"kubernetes.io/projected/7235dcfc-73e2-4205-a092-4258b08eabe0-kube-api-access-dxq5d\") pod \"cinder-api-0\" (UID: \"7235dcfc-73e2-4205-a092-4258b08eabe0\") " pod="openstack/cinder-api-0" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.827254 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="313879a0-5213-4c68-aea1-ba01d960b862" path="/var/lib/kubelet/pods/313879a0-5213-4c68-aea1-ba01d960b862/volumes" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.828285 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2d75005-df86-4e3e-aeb4-d8d98976f35a" path="/var/lib/kubelet/pods/f2d75005-df86-4e3e-aeb4-d8d98976f35a/volumes" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.873436 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 31 07:54:28 crc kubenswrapper[4826]: I0131 07:54:28.984281 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0591b228-aa43-46bc-ba04-0b5b6ddd4bbc","Type":"ContainerStarted","Data":"b99c0412e30eee04b65ba5f993f1ad996bf373e19499c1e2a3ace1b97544888b"} Jan 31 07:54:29 crc kubenswrapper[4826]: I0131 07:54:29.098534 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 07:54:29 crc kubenswrapper[4826]: I0131 07:54:29.221328 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-w9dvm"] Jan 31 07:54:29 crc kubenswrapper[4826]: W0131 07:54:29.452226 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7235dcfc_73e2_4205_a092_4258b08eabe0.slice/crio-b47a8a0c7bd729190e55741bac08232f53bbd251be8a9d5f078939818bcf411e WatchSource:0}: Error finding container b47a8a0c7bd729190e55741bac08232f53bbd251be8a9d5f078939818bcf411e: Status 404 returned error can't find the container with id b47a8a0c7bd729190e55741bac08232f53bbd251be8a9d5f078939818bcf411e Jan 31 07:54:29 crc kubenswrapper[4826]: I0131 07:54:29.458591 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 31 07:54:29 crc kubenswrapper[4826]: I0131 07:54:29.996604 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ea3eca99-d298-4fa1-9c82-fb58164ff654","Type":"ContainerStarted","Data":"559a21d499bf23b1b15042321dcbd95e26f8846fcaa453c6515a5f1876b3eadd"} Jan 31 07:54:29 crc kubenswrapper[4826]: I0131 07:54:29.997407 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7235dcfc-73e2-4205-a092-4258b08eabe0","Type":"ContainerStarted","Data":"b47a8a0c7bd729190e55741bac08232f53bbd251be8a9d5f078939818bcf411e"} Jan 31 07:54:29 crc kubenswrapper[4826]: I0131 07:54:29.999558 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0591b228-aa43-46bc-ba04-0b5b6ddd4bbc","Type":"ContainerStarted","Data":"7dbba72c9c71413ec934c77101c62cebd531e324c4cfe5d4ff980c78b356abd1"} Jan 31 07:54:30 crc kubenswrapper[4826]: I0131 07:54:30.001570 4826 generic.go:334] "Generic (PLEG): container finished" podID="5854fd2e-9542-4dda-80f2-8ebf777c620b" containerID="16a30fa75ce36595dc984bbb6e27af489f827baff5e6de2d144e31253ddcece0" exitCode=0 Jan 31 07:54:30 crc kubenswrapper[4826]: I0131 07:54:30.001594 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-w9dvm" event={"ID":"5854fd2e-9542-4dda-80f2-8ebf777c620b","Type":"ContainerDied","Data":"16a30fa75ce36595dc984bbb6e27af489f827baff5e6de2d144e31253ddcece0"} Jan 31 07:54:30 crc kubenswrapper[4826]: I0131 07:54:30.001608 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-w9dvm" event={"ID":"5854fd2e-9542-4dda-80f2-8ebf777c620b","Type":"ContainerStarted","Data":"e5e226e62f7fe9a0de42e579d8cb3b81c2b34fa853644f261287d5cd1d286d6b"} Jan 31 07:54:30 crc kubenswrapper[4826]: I0131 07:54:30.058590 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-65776456b6-g6gkf" podUID="e4282d7e-76a0-493b-b2c6-0d954d4bed7a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.142:8443: connect: connection refused" Jan 31 07:54:30 
crc kubenswrapper[4826]: I0131 07:54:30.058737 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-65776456b6-g6gkf" Jan 31 07:54:30 crc kubenswrapper[4826]: I0131 07:54:30.518641 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 31 07:54:31 crc kubenswrapper[4826]: I0131 07:54:31.016242 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7235dcfc-73e2-4205-a092-4258b08eabe0","Type":"ContainerStarted","Data":"bc695c0165a8a09c68d33d1456fbd54f2fd7a7e8642c660113b983beba7067d6"} Jan 31 07:54:31 crc kubenswrapper[4826]: I0131 07:54:31.020917 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0591b228-aa43-46bc-ba04-0b5b6ddd4bbc","Type":"ContainerStarted","Data":"c8894ddb73dfeb65241b46621dba5a125701cfb3ef2fef12b805a25a21ca93ef"} Jan 31 07:54:31 crc kubenswrapper[4826]: I0131 07:54:31.023088 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-w9dvm" event={"ID":"5854fd2e-9542-4dda-80f2-8ebf777c620b","Type":"ContainerStarted","Data":"b6b7b4050974efe7659cee187265944752b597181d9123b56062d203fb77cd58"} Jan 31 07:54:31 crc kubenswrapper[4826]: I0131 07:54:31.024431 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58db5546cc-w9dvm" Jan 31 07:54:31 crc kubenswrapper[4826]: I0131 07:54:31.049759 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58db5546cc-w9dvm" podStartSLOduration=3.049734524 podStartE2EDuration="3.049734524s" podCreationTimestamp="2026-01-31 07:54:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:54:31.03930977 +0000 UTC m=+1102.893196139" watchObservedRunningTime="2026-01-31 07:54:31.049734524 +0000 UTC m=+1102.903620893" Jan 31 07:54:32 crc kubenswrapper[4826]: I0131 07:54:32.032928 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ea3eca99-d298-4fa1-9c82-fb58164ff654","Type":"ContainerStarted","Data":"29fd3ff41e9180ce0500f007b0ad64d6fe444e9ac0f56be68be307d9ec893169"} Jan 31 07:54:32 crc kubenswrapper[4826]: I0131 07:54:32.033322 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ea3eca99-d298-4fa1-9c82-fb58164ff654","Type":"ContainerStarted","Data":"a0915a22c10a1e1b2d448100a224323ec4c7f0fea512d37b0ac7c5c0716ca7d7"} Jan 31 07:54:32 crc kubenswrapper[4826]: I0131 07:54:32.037231 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7235dcfc-73e2-4205-a092-4258b08eabe0","Type":"ContainerStarted","Data":"5dfc2da1209936685399057680588392452813dcc59b7e6ea50cc601d26d45fe"} Jan 31 07:54:32 crc kubenswrapper[4826]: I0131 07:54:32.037282 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="7235dcfc-73e2-4205-a092-4258b08eabe0" containerName="cinder-api-log" containerID="cri-o://bc695c0165a8a09c68d33d1456fbd54f2fd7a7e8642c660113b983beba7067d6" gracePeriod=30 Jan 31 07:54:32 crc kubenswrapper[4826]: I0131 07:54:32.037293 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 31 07:54:32 crc kubenswrapper[4826]: I0131 07:54:32.037374 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" 
podUID="7235dcfc-73e2-4205-a092-4258b08eabe0" containerName="cinder-api" containerID="cri-o://5dfc2da1209936685399057680588392452813dcc59b7e6ea50cc601d26d45fe" gracePeriod=30 Jan 31 07:54:32 crc kubenswrapper[4826]: I0131 07:54:32.055875 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=2.716604157 podStartE2EDuration="4.0558602s" podCreationTimestamp="2026-01-31 07:54:28 +0000 UTC" firstStartedPulling="2026-01-31 07:54:29.10945434 +0000 UTC m=+1100.963340699" lastFinishedPulling="2026-01-31 07:54:30.448710383 +0000 UTC m=+1102.302596742" observedRunningTime="2026-01-31 07:54:32.052372362 +0000 UTC m=+1103.906258721" watchObservedRunningTime="2026-01-31 07:54:32.0558602 +0000 UTC m=+1103.909746559" Jan 31 07:54:32 crc kubenswrapper[4826]: I0131 07:54:32.098348 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.098322308 podStartE2EDuration="4.098322308s" podCreationTimestamp="2026-01-31 07:54:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:54:32.088737927 +0000 UTC m=+1103.942624306" watchObservedRunningTime="2026-01-31 07:54:32.098322308 +0000 UTC m=+1103.952208697" Jan 31 07:54:32 crc kubenswrapper[4826]: I0131 07:54:32.788505 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 31 07:54:32 crc kubenswrapper[4826]: I0131 07:54:32.919600 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxq5d\" (UniqueName: \"kubernetes.io/projected/7235dcfc-73e2-4205-a092-4258b08eabe0-kube-api-access-dxq5d\") pod \"7235dcfc-73e2-4205-a092-4258b08eabe0\" (UID: \"7235dcfc-73e2-4205-a092-4258b08eabe0\") " Jan 31 07:54:32 crc kubenswrapper[4826]: I0131 07:54:32.919712 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7235dcfc-73e2-4205-a092-4258b08eabe0-etc-machine-id\") pod \"7235dcfc-73e2-4205-a092-4258b08eabe0\" (UID: \"7235dcfc-73e2-4205-a092-4258b08eabe0\") " Jan 31 07:54:32 crc kubenswrapper[4826]: I0131 07:54:32.919766 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7235dcfc-73e2-4205-a092-4258b08eabe0-scripts\") pod \"7235dcfc-73e2-4205-a092-4258b08eabe0\" (UID: \"7235dcfc-73e2-4205-a092-4258b08eabe0\") " Jan 31 07:54:32 crc kubenswrapper[4826]: I0131 07:54:32.919817 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7235dcfc-73e2-4205-a092-4258b08eabe0-logs\") pod \"7235dcfc-73e2-4205-a092-4258b08eabe0\" (UID: \"7235dcfc-73e2-4205-a092-4258b08eabe0\") " Jan 31 07:54:32 crc kubenswrapper[4826]: I0131 07:54:32.919834 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7235dcfc-73e2-4205-a092-4258b08eabe0-combined-ca-bundle\") pod \"7235dcfc-73e2-4205-a092-4258b08eabe0\" (UID: \"7235dcfc-73e2-4205-a092-4258b08eabe0\") " Jan 31 07:54:32 crc kubenswrapper[4826]: I0131 07:54:32.919909 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7235dcfc-73e2-4205-a092-4258b08eabe0-config-data-custom\") pod 
\"7235dcfc-73e2-4205-a092-4258b08eabe0\" (UID: \"7235dcfc-73e2-4205-a092-4258b08eabe0\") " Jan 31 07:54:32 crc kubenswrapper[4826]: I0131 07:54:32.919931 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7235dcfc-73e2-4205-a092-4258b08eabe0-config-data\") pod \"7235dcfc-73e2-4205-a092-4258b08eabe0\" (UID: \"7235dcfc-73e2-4205-a092-4258b08eabe0\") " Jan 31 07:54:32 crc kubenswrapper[4826]: I0131 07:54:32.921778 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7235dcfc-73e2-4205-a092-4258b08eabe0-logs" (OuterVolumeSpecName: "logs") pod "7235dcfc-73e2-4205-a092-4258b08eabe0" (UID: "7235dcfc-73e2-4205-a092-4258b08eabe0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:54:32 crc kubenswrapper[4826]: I0131 07:54:32.921805 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7235dcfc-73e2-4205-a092-4258b08eabe0-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "7235dcfc-73e2-4205-a092-4258b08eabe0" (UID: "7235dcfc-73e2-4205-a092-4258b08eabe0"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:54:32 crc kubenswrapper[4826]: I0131 07:54:32.926392 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7235dcfc-73e2-4205-a092-4258b08eabe0-scripts" (OuterVolumeSpecName: "scripts") pod "7235dcfc-73e2-4205-a092-4258b08eabe0" (UID: "7235dcfc-73e2-4205-a092-4258b08eabe0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:54:32 crc kubenswrapper[4826]: I0131 07:54:32.926724 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7235dcfc-73e2-4205-a092-4258b08eabe0-kube-api-access-dxq5d" (OuterVolumeSpecName: "kube-api-access-dxq5d") pod "7235dcfc-73e2-4205-a092-4258b08eabe0" (UID: "7235dcfc-73e2-4205-a092-4258b08eabe0"). InnerVolumeSpecName "kube-api-access-dxq5d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:54:32 crc kubenswrapper[4826]: I0131 07:54:32.928386 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7235dcfc-73e2-4205-a092-4258b08eabe0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7235dcfc-73e2-4205-a092-4258b08eabe0" (UID: "7235dcfc-73e2-4205-a092-4258b08eabe0"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:54:32 crc kubenswrapper[4826]: I0131 07:54:32.969314 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7235dcfc-73e2-4205-a092-4258b08eabe0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7235dcfc-73e2-4205-a092-4258b08eabe0" (UID: "7235dcfc-73e2-4205-a092-4258b08eabe0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:54:32 crc kubenswrapper[4826]: I0131 07:54:32.986260 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7235dcfc-73e2-4205-a092-4258b08eabe0-config-data" (OuterVolumeSpecName: "config-data") pod "7235dcfc-73e2-4205-a092-4258b08eabe0" (UID: "7235dcfc-73e2-4205-a092-4258b08eabe0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.021325 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxq5d\" (UniqueName: \"kubernetes.io/projected/7235dcfc-73e2-4205-a092-4258b08eabe0-kube-api-access-dxq5d\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.021546 4826 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7235dcfc-73e2-4205-a092-4258b08eabe0-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.021606 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7235dcfc-73e2-4205-a092-4258b08eabe0-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.021662 4826 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7235dcfc-73e2-4205-a092-4258b08eabe0-logs\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.021712 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7235dcfc-73e2-4205-a092-4258b08eabe0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.021767 4826 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7235dcfc-73e2-4205-a092-4258b08eabe0-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.021827 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7235dcfc-73e2-4205-a092-4258b08eabe0-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.047461 4826 generic.go:334] "Generic (PLEG): container finished" podID="7235dcfc-73e2-4205-a092-4258b08eabe0" containerID="5dfc2da1209936685399057680588392452813dcc59b7e6ea50cc601d26d45fe" exitCode=0 Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.048667 4826 generic.go:334] "Generic (PLEG): container finished" podID="7235dcfc-73e2-4205-a092-4258b08eabe0" containerID="bc695c0165a8a09c68d33d1456fbd54f2fd7a7e8642c660113b983beba7067d6" exitCode=143 Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.048564 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7235dcfc-73e2-4205-a092-4258b08eabe0","Type":"ContainerDied","Data":"5dfc2da1209936685399057680588392452813dcc59b7e6ea50cc601d26d45fe"} Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.048891 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7235dcfc-73e2-4205-a092-4258b08eabe0","Type":"ContainerDied","Data":"bc695c0165a8a09c68d33d1456fbd54f2fd7a7e8642c660113b983beba7067d6"} Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.048981 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7235dcfc-73e2-4205-a092-4258b08eabe0","Type":"ContainerDied","Data":"b47a8a0c7bd729190e55741bac08232f53bbd251be8a9d5f078939818bcf411e"} Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.049075 4826 scope.go:117] "RemoveContainer" containerID="5dfc2da1209936685399057680588392452813dcc59b7e6ea50cc601d26d45fe" Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.048636 
4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.053846 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0591b228-aa43-46bc-ba04-0b5b6ddd4bbc","Type":"ContainerStarted","Data":"755773ee0355f116f23699260225128b744e904378af4b303b43c53a82a47192"} Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.080763 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.411333927 podStartE2EDuration="6.080744185s" podCreationTimestamp="2026-01-31 07:54:27 +0000 UTC" firstStartedPulling="2026-01-31 07:54:27.875413546 +0000 UTC m=+1099.729299895" lastFinishedPulling="2026-01-31 07:54:32.544823794 +0000 UTC m=+1104.398710153" observedRunningTime="2026-01-31 07:54:33.07777444 +0000 UTC m=+1104.931660819" watchObservedRunningTime="2026-01-31 07:54:33.080744185 +0000 UTC m=+1104.934630544" Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.095805 4826 scope.go:117] "RemoveContainer" containerID="bc695c0165a8a09c68d33d1456fbd54f2fd7a7e8642c660113b983beba7067d6" Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.100033 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.106652 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.132265 4826 scope.go:117] "RemoveContainer" containerID="5dfc2da1209936685399057680588392452813dcc59b7e6ea50cc601d26d45fe" Jan 31 07:54:33 crc kubenswrapper[4826]: E0131 07:54:33.133897 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5dfc2da1209936685399057680588392452813dcc59b7e6ea50cc601d26d45fe\": container with ID starting with 5dfc2da1209936685399057680588392452813dcc59b7e6ea50cc601d26d45fe not found: ID does not exist" containerID="5dfc2da1209936685399057680588392452813dcc59b7e6ea50cc601d26d45fe" Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.133937 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dfc2da1209936685399057680588392452813dcc59b7e6ea50cc601d26d45fe"} err="failed to get container status \"5dfc2da1209936685399057680588392452813dcc59b7e6ea50cc601d26d45fe\": rpc error: code = NotFound desc = could not find container \"5dfc2da1209936685399057680588392452813dcc59b7e6ea50cc601d26d45fe\": container with ID starting with 5dfc2da1209936685399057680588392452813dcc59b7e6ea50cc601d26d45fe not found: ID does not exist" Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.133965 4826 scope.go:117] "RemoveContainer" containerID="bc695c0165a8a09c68d33d1456fbd54f2fd7a7e8642c660113b983beba7067d6" Jan 31 07:54:33 crc kubenswrapper[4826]: E0131 07:54:33.138224 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc695c0165a8a09c68d33d1456fbd54f2fd7a7e8642c660113b983beba7067d6\": container with ID starting with bc695c0165a8a09c68d33d1456fbd54f2fd7a7e8642c660113b983beba7067d6 not found: ID does not exist" containerID="bc695c0165a8a09c68d33d1456fbd54f2fd7a7e8642c660113b983beba7067d6" Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.138257 4826 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"bc695c0165a8a09c68d33d1456fbd54f2fd7a7e8642c660113b983beba7067d6"} err="failed to get container status \"bc695c0165a8a09c68d33d1456fbd54f2fd7a7e8642c660113b983beba7067d6\": rpc error: code = NotFound desc = could not find container \"bc695c0165a8a09c68d33d1456fbd54f2fd7a7e8642c660113b983beba7067d6\": container with ID starting with bc695c0165a8a09c68d33d1456fbd54f2fd7a7e8642c660113b983beba7067d6 not found: ID does not exist" Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.138294 4826 scope.go:117] "RemoveContainer" containerID="5dfc2da1209936685399057680588392452813dcc59b7e6ea50cc601d26d45fe" Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.140923 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dfc2da1209936685399057680588392452813dcc59b7e6ea50cc601d26d45fe"} err="failed to get container status \"5dfc2da1209936685399057680588392452813dcc59b7e6ea50cc601d26d45fe\": rpc error: code = NotFound desc = could not find container \"5dfc2da1209936685399057680588392452813dcc59b7e6ea50cc601d26d45fe\": container with ID starting with 5dfc2da1209936685399057680588392452813dcc59b7e6ea50cc601d26d45fe not found: ID does not exist" Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.140961 4826 scope.go:117] "RemoveContainer" containerID="bc695c0165a8a09c68d33d1456fbd54f2fd7a7e8642c660113b983beba7067d6" Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.141060 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 31 07:54:33 crc kubenswrapper[4826]: E0131 07:54:33.141527 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7235dcfc-73e2-4205-a092-4258b08eabe0" containerName="cinder-api-log" Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.141543 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="7235dcfc-73e2-4205-a092-4258b08eabe0" containerName="cinder-api-log" Jan 31 07:54:33 crc kubenswrapper[4826]: E0131 07:54:33.141561 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7235dcfc-73e2-4205-a092-4258b08eabe0" containerName="cinder-api" Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.141569 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="7235dcfc-73e2-4205-a092-4258b08eabe0" containerName="cinder-api" Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.141774 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="7235dcfc-73e2-4205-a092-4258b08eabe0" containerName="cinder-api-log" Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.141791 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="7235dcfc-73e2-4205-a092-4258b08eabe0" containerName="cinder-api" Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.143465 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.149243 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc695c0165a8a09c68d33d1456fbd54f2fd7a7e8642c660113b983beba7067d6"} err="failed to get container status \"bc695c0165a8a09c68d33d1456fbd54f2fd7a7e8642c660113b983beba7067d6\": rpc error: code = NotFound desc = could not find container \"bc695c0165a8a09c68d33d1456fbd54f2fd7a7e8642c660113b983beba7067d6\": container with ID starting with bc695c0165a8a09c68d33d1456fbd54f2fd7a7e8642c660113b983beba7067d6 not found: ID does not exist" Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.152319 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.152463 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.155107 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.155371 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.224031 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2e16f67-d80b-4d2f-9bf0-0ce081212368-logs\") pod \"cinder-api-0\" (UID: \"b2e16f67-d80b-4d2f-9bf0-0ce081212368\") " pod="openstack/cinder-api-0" Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.224081 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2e16f67-d80b-4d2f-9bf0-0ce081212368-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b2e16f67-d80b-4d2f-9bf0-0ce081212368\") " pod="openstack/cinder-api-0" Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.224107 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2e16f67-d80b-4d2f-9bf0-0ce081212368-config-data\") pod \"cinder-api-0\" (UID: \"b2e16f67-d80b-4d2f-9bf0-0ce081212368\") " pod="openstack/cinder-api-0" Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.224176 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b2e16f67-d80b-4d2f-9bf0-0ce081212368-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b2e16f67-d80b-4d2f-9bf0-0ce081212368\") " pod="openstack/cinder-api-0" Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.224245 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2e16f67-d80b-4d2f-9bf0-0ce081212368-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b2e16f67-d80b-4d2f-9bf0-0ce081212368\") " pod="openstack/cinder-api-0" Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.224311 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2e16f67-d80b-4d2f-9bf0-0ce081212368-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b2e16f67-d80b-4d2f-9bf0-0ce081212368\") " pod="openstack/cinder-api-0" Jan 31 07:54:33 
crc kubenswrapper[4826]: I0131 07:54:33.224389 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ts6q\" (UniqueName: \"kubernetes.io/projected/b2e16f67-d80b-4d2f-9bf0-0ce081212368-kube-api-access-4ts6q\") pod \"cinder-api-0\" (UID: \"b2e16f67-d80b-4d2f-9bf0-0ce081212368\") " pod="openstack/cinder-api-0" Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.224419 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b2e16f67-d80b-4d2f-9bf0-0ce081212368-config-data-custom\") pod \"cinder-api-0\" (UID: \"b2e16f67-d80b-4d2f-9bf0-0ce081212368\") " pod="openstack/cinder-api-0" Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.224463 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2e16f67-d80b-4d2f-9bf0-0ce081212368-scripts\") pod \"cinder-api-0\" (UID: \"b2e16f67-d80b-4d2f-9bf0-0ce081212368\") " pod="openstack/cinder-api-0" Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.326717 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2e16f67-d80b-4d2f-9bf0-0ce081212368-logs\") pod \"cinder-api-0\" (UID: \"b2e16f67-d80b-4d2f-9bf0-0ce081212368\") " pod="openstack/cinder-api-0" Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.326771 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2e16f67-d80b-4d2f-9bf0-0ce081212368-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b2e16f67-d80b-4d2f-9bf0-0ce081212368\") " pod="openstack/cinder-api-0" Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.326801 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2e16f67-d80b-4d2f-9bf0-0ce081212368-config-data\") pod \"cinder-api-0\" (UID: \"b2e16f67-d80b-4d2f-9bf0-0ce081212368\") " pod="openstack/cinder-api-0" Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.326848 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b2e16f67-d80b-4d2f-9bf0-0ce081212368-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b2e16f67-d80b-4d2f-9bf0-0ce081212368\") " pod="openstack/cinder-api-0" Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.326895 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2e16f67-d80b-4d2f-9bf0-0ce081212368-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b2e16f67-d80b-4d2f-9bf0-0ce081212368\") " pod="openstack/cinder-api-0" Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.326932 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2e16f67-d80b-4d2f-9bf0-0ce081212368-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b2e16f67-d80b-4d2f-9bf0-0ce081212368\") " pod="openstack/cinder-api-0" Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.326989 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ts6q\" (UniqueName: \"kubernetes.io/projected/b2e16f67-d80b-4d2f-9bf0-0ce081212368-kube-api-access-4ts6q\") pod \"cinder-api-0\" (UID: 
\"b2e16f67-d80b-4d2f-9bf0-0ce081212368\") " pod="openstack/cinder-api-0" Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.327020 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b2e16f67-d80b-4d2f-9bf0-0ce081212368-config-data-custom\") pod \"cinder-api-0\" (UID: \"b2e16f67-d80b-4d2f-9bf0-0ce081212368\") " pod="openstack/cinder-api-0" Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.327050 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2e16f67-d80b-4d2f-9bf0-0ce081212368-scripts\") pod \"cinder-api-0\" (UID: \"b2e16f67-d80b-4d2f-9bf0-0ce081212368\") " pod="openstack/cinder-api-0" Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.327950 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b2e16f67-d80b-4d2f-9bf0-0ce081212368-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b2e16f67-d80b-4d2f-9bf0-0ce081212368\") " pod="openstack/cinder-api-0" Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.328324 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2e16f67-d80b-4d2f-9bf0-0ce081212368-logs\") pod \"cinder-api-0\" (UID: \"b2e16f67-d80b-4d2f-9bf0-0ce081212368\") " pod="openstack/cinder-api-0" Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.331521 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2e16f67-d80b-4d2f-9bf0-0ce081212368-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b2e16f67-d80b-4d2f-9bf0-0ce081212368\") " pod="openstack/cinder-api-0" Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.332351 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2e16f67-d80b-4d2f-9bf0-0ce081212368-scripts\") pod \"cinder-api-0\" (UID: \"b2e16f67-d80b-4d2f-9bf0-0ce081212368\") " pod="openstack/cinder-api-0" Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.333622 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2e16f67-d80b-4d2f-9bf0-0ce081212368-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b2e16f67-d80b-4d2f-9bf0-0ce081212368\") " pod="openstack/cinder-api-0" Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.337058 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2e16f67-d80b-4d2f-9bf0-0ce081212368-config-data\") pod \"cinder-api-0\" (UID: \"b2e16f67-d80b-4d2f-9bf0-0ce081212368\") " pod="openstack/cinder-api-0" Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.338326 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2e16f67-d80b-4d2f-9bf0-0ce081212368-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b2e16f67-d80b-4d2f-9bf0-0ce081212368\") " pod="openstack/cinder-api-0" Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.352833 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b2e16f67-d80b-4d2f-9bf0-0ce081212368-config-data-custom\") pod \"cinder-api-0\" (UID: \"b2e16f67-d80b-4d2f-9bf0-0ce081212368\") " pod="openstack/cinder-api-0" Jan 31 07:54:33 
crc kubenswrapper[4826]: I0131 07:54:33.355676 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ts6q\" (UniqueName: \"kubernetes.io/projected/b2e16f67-d80b-4d2f-9bf0-0ce081212368-kube-api-access-4ts6q\") pod \"cinder-api-0\" (UID: \"b2e16f67-d80b-4d2f-9bf0-0ce081212368\") " pod="openstack/cinder-api-0" Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.482083 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 31 07:54:33 crc kubenswrapper[4826]: I0131 07:54:33.542387 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 31 07:54:34 crc kubenswrapper[4826]: I0131 07:54:34.020170 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 31 07:54:34 crc kubenswrapper[4826]: W0131 07:54:34.024000 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2e16f67_d80b_4d2f_9bf0_0ce081212368.slice/crio-b2cb8021d79e444e81cf3cbea6e0f22beeb3097d5d6c18220e4ed39ca730c1a5 WatchSource:0}: Error finding container b2cb8021d79e444e81cf3cbea6e0f22beeb3097d5d6c18220e4ed39ca730c1a5: Status 404 returned error can't find the container with id b2cb8021d79e444e81cf3cbea6e0f22beeb3097d5d6c18220e4ed39ca730c1a5 Jan 31 07:54:34 crc kubenswrapper[4826]: I0131 07:54:34.075588 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b2e16f67-d80b-4d2f-9bf0-0ce081212368","Type":"ContainerStarted","Data":"b2cb8021d79e444e81cf3cbea6e0f22beeb3097d5d6c18220e4ed39ca730c1a5"} Jan 31 07:54:34 crc kubenswrapper[4826]: I0131 07:54:34.076719 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 07:54:34 crc kubenswrapper[4826]: I0131 07:54:34.647883 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-k75dt"] Jan 31 07:54:34 crc kubenswrapper[4826]: I0131 07:54:34.649179 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-k75dt" Jan 31 07:54:34 crc kubenswrapper[4826]: I0131 07:54:34.675782 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-k75dt"] Jan 31 07:54:34 crc kubenswrapper[4826]: I0131 07:54:34.756676 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3626f21a-c324-4d3f-9aad-3248a07896da-operator-scripts\") pod \"nova-api-db-create-k75dt\" (UID: \"3626f21a-c324-4d3f-9aad-3248a07896da\") " pod="openstack/nova-api-db-create-k75dt" Jan 31 07:54:34 crc kubenswrapper[4826]: I0131 07:54:34.756809 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s4s8\" (UniqueName: \"kubernetes.io/projected/3626f21a-c324-4d3f-9aad-3248a07896da-kube-api-access-6s4s8\") pod \"nova-api-db-create-k75dt\" (UID: \"3626f21a-c324-4d3f-9aad-3248a07896da\") " pod="openstack/nova-api-db-create-k75dt" Jan 31 07:54:34 crc kubenswrapper[4826]: I0131 07:54:34.773012 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-dl84z"] Jan 31 07:54:34 crc kubenswrapper[4826]: I0131 07:54:34.774421 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-dl84z" Jan 31 07:54:34 crc kubenswrapper[4826]: I0131 07:54:34.862287 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzr7h\" (UniqueName: \"kubernetes.io/projected/f81eab18-a2c9-4435-aaad-98f5c7666fb2-kube-api-access-jzr7h\") pod \"nova-cell0-db-create-dl84z\" (UID: \"f81eab18-a2c9-4435-aaad-98f5c7666fb2\") " pod="openstack/nova-cell0-db-create-dl84z" Jan 31 07:54:34 crc kubenswrapper[4826]: I0131 07:54:34.862363 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f81eab18-a2c9-4435-aaad-98f5c7666fb2-operator-scripts\") pod \"nova-cell0-db-create-dl84z\" (UID: \"f81eab18-a2c9-4435-aaad-98f5c7666fb2\") " pod="openstack/nova-cell0-db-create-dl84z" Jan 31 07:54:34 crc kubenswrapper[4826]: I0131 07:54:34.862432 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3626f21a-c324-4d3f-9aad-3248a07896da-operator-scripts\") pod \"nova-api-db-create-k75dt\" (UID: \"3626f21a-c324-4d3f-9aad-3248a07896da\") " pod="openstack/nova-api-db-create-k75dt" Jan 31 07:54:34 crc kubenswrapper[4826]: I0131 07:54:34.862538 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6s4s8\" (UniqueName: \"kubernetes.io/projected/3626f21a-c324-4d3f-9aad-3248a07896da-kube-api-access-6s4s8\") pod \"nova-api-db-create-k75dt\" (UID: \"3626f21a-c324-4d3f-9aad-3248a07896da\") " pod="openstack/nova-api-db-create-k75dt" Jan 31 07:54:34 crc kubenswrapper[4826]: I0131 07:54:34.863764 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3626f21a-c324-4d3f-9aad-3248a07896da-operator-scripts\") pod \"nova-api-db-create-k75dt\" (UID: \"3626f21a-c324-4d3f-9aad-3248a07896da\") " pod="openstack/nova-api-db-create-k75dt" Jan 31 07:54:34 crc kubenswrapper[4826]: I0131 07:54:34.904281 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6s4s8\" (UniqueName: \"kubernetes.io/projected/3626f21a-c324-4d3f-9aad-3248a07896da-kube-api-access-6s4s8\") pod \"nova-api-db-create-k75dt\" (UID: \"3626f21a-c324-4d3f-9aad-3248a07896da\") " pod="openstack/nova-api-db-create-k75dt" Jan 31 07:54:34 crc kubenswrapper[4826]: I0131 07:54:34.905518 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7235dcfc-73e2-4205-a092-4258b08eabe0" path="/var/lib/kubelet/pods/7235dcfc-73e2-4205-a092-4258b08eabe0/volumes" Jan 31 07:54:34 crc kubenswrapper[4826]: I0131 07:54:34.916747 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-dl84z"] Jan 31 07:54:34 crc kubenswrapper[4826]: I0131 07:54:34.972759 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzr7h\" (UniqueName: \"kubernetes.io/projected/f81eab18-a2c9-4435-aaad-98f5c7666fb2-kube-api-access-jzr7h\") pod \"nova-cell0-db-create-dl84z\" (UID: \"f81eab18-a2c9-4435-aaad-98f5c7666fb2\") " pod="openstack/nova-cell0-db-create-dl84z" Jan 31 07:54:34 crc kubenswrapper[4826]: I0131 07:54:34.976190 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f81eab18-a2c9-4435-aaad-98f5c7666fb2-operator-scripts\") pod \"nova-cell0-db-create-dl84z\" (UID: 
\"f81eab18-a2c9-4435-aaad-98f5c7666fb2\") " pod="openstack/nova-cell0-db-create-dl84z" Jan 31 07:54:34 crc kubenswrapper[4826]: I0131 07:54:34.983470 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f81eab18-a2c9-4435-aaad-98f5c7666fb2-operator-scripts\") pod \"nova-cell0-db-create-dl84z\" (UID: \"f81eab18-a2c9-4435-aaad-98f5c7666fb2\") " pod="openstack/nova-cell0-db-create-dl84z" Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.022204 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-42b8-account-create-update-tm77r"] Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.027667 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-42b8-account-create-update-tm77r" Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.031092 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.039291 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-42b8-account-create-update-tm77r"] Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.039909 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzr7h\" (UniqueName: \"kubernetes.io/projected/f81eab18-a2c9-4435-aaad-98f5c7666fb2-kube-api-access-jzr7h\") pod \"nova-cell0-db-create-dl84z\" (UID: \"f81eab18-a2c9-4435-aaad-98f5c7666fb2\") " pod="openstack/nova-cell0-db-create-dl84z" Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.062241 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-k75dt" Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.078322 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv49z\" (UniqueName: \"kubernetes.io/projected/24c647ed-ef58-46b5-a994-0cff67c161cc-kube-api-access-xv49z\") pod \"nova-api-42b8-account-create-update-tm77r\" (UID: \"24c647ed-ef58-46b5-a994-0cff67c161cc\") " pod="openstack/nova-api-42b8-account-create-update-tm77r" Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.078407 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24c647ed-ef58-46b5-a994-0cff67c161cc-operator-scripts\") pod \"nova-api-42b8-account-create-update-tm77r\" (UID: \"24c647ed-ef58-46b5-a994-0cff67c161cc\") " pod="openstack/nova-api-42b8-account-create-update-tm77r" Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.081005 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-bcjr8"] Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.082263 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-bcjr8" Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.145693 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b2e16f67-d80b-4d2f-9bf0-0ce081212368","Type":"ContainerStarted","Data":"6316c1350728127ce2432484ba34712b9976678e20d84fd8870978b699c4c149"} Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.148689 4826 generic.go:334] "Generic (PLEG): container finished" podID="e4282d7e-76a0-493b-b2c6-0d954d4bed7a" containerID="9464619301db5a0a09586180fbf9f2778935545892902aad640d046b21fb8b23" exitCode=137 Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.151250 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-65776456b6-g6gkf" event={"ID":"e4282d7e-76a0-493b-b2c6-0d954d4bed7a","Type":"ContainerDied","Data":"9464619301db5a0a09586180fbf9f2778935545892902aad640d046b21fb8b23"} Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.161417 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-bcjr8"] Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.170285 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-5029-account-create-update-h2wlb"] Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.174111 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5029-account-create-update-h2wlb" Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.179750 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65b55564-5b4c-48e4-958c-33a815964af3-operator-scripts\") pod \"nova-cell1-db-create-bcjr8\" (UID: \"65b55564-5b4c-48e4-958c-33a815964af3\") " pod="openstack/nova-cell1-db-create-bcjr8" Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.179817 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv49z\" (UniqueName: \"kubernetes.io/projected/24c647ed-ef58-46b5-a994-0cff67c161cc-kube-api-access-xv49z\") pod \"nova-api-42b8-account-create-update-tm77r\" (UID: \"24c647ed-ef58-46b5-a994-0cff67c161cc\") " pod="openstack/nova-api-42b8-account-create-update-tm77r" Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.179866 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24c647ed-ef58-46b5-a994-0cff67c161cc-operator-scripts\") pod \"nova-api-42b8-account-create-update-tm77r\" (UID: \"24c647ed-ef58-46b5-a994-0cff67c161cc\") " pod="openstack/nova-api-42b8-account-create-update-tm77r" Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.179892 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4smq\" (UniqueName: \"kubernetes.io/projected/65b55564-5b4c-48e4-958c-33a815964af3-kube-api-access-c4smq\") pod \"nova-cell1-db-create-bcjr8\" (UID: \"65b55564-5b4c-48e4-958c-33a815964af3\") " pod="openstack/nova-cell1-db-create-bcjr8" Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.181349 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24c647ed-ef58-46b5-a994-0cff67c161cc-operator-scripts\") pod \"nova-api-42b8-account-create-update-tm77r\" (UID: \"24c647ed-ef58-46b5-a994-0cff67c161cc\") " pod="openstack/nova-api-42b8-account-create-update-tm77r" 
Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.187511 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.206884 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5029-account-create-update-h2wlb"] Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.216117 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv49z\" (UniqueName: \"kubernetes.io/projected/24c647ed-ef58-46b5-a994-0cff67c161cc-kube-api-access-xv49z\") pod \"nova-api-42b8-account-create-update-tm77r\" (UID: \"24c647ed-ef58-46b5-a994-0cff67c161cc\") " pod="openstack/nova-api-42b8-account-create-update-tm77r" Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.249061 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-c85b-account-create-update-7kdb6"] Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.250261 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-c85b-account-create-update-7kdb6" Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.253006 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.264282 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-c85b-account-create-update-7kdb6"] Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.284745 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2deb97a0-5661-4b3f-b0b1-84162e452fa2-operator-scripts\") pod \"nova-cell0-5029-account-create-update-h2wlb\" (UID: \"2deb97a0-5661-4b3f-b0b1-84162e452fa2\") " pod="openstack/nova-cell0-5029-account-create-update-h2wlb" Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.284799 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtjff\" (UniqueName: \"kubernetes.io/projected/2deb97a0-5661-4b3f-b0b1-84162e452fa2-kube-api-access-rtjff\") pod \"nova-cell0-5029-account-create-update-h2wlb\" (UID: \"2deb97a0-5661-4b3f-b0b1-84162e452fa2\") " pod="openstack/nova-cell0-5029-account-create-update-h2wlb" Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.284837 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz746\" (UniqueName: \"kubernetes.io/projected/cd771d42-12ea-494a-805a-90133b43e0c3-kube-api-access-mz746\") pod \"nova-cell1-c85b-account-create-update-7kdb6\" (UID: \"cd771d42-12ea-494a-805a-90133b43e0c3\") " pod="openstack/nova-cell1-c85b-account-create-update-7kdb6" Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.284884 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd771d42-12ea-494a-805a-90133b43e0c3-operator-scripts\") pod \"nova-cell1-c85b-account-create-update-7kdb6\" (UID: \"cd771d42-12ea-494a-805a-90133b43e0c3\") " pod="openstack/nova-cell1-c85b-account-create-update-7kdb6" Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.284916 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65b55564-5b4c-48e4-958c-33a815964af3-operator-scripts\") pod 
\"nova-cell1-db-create-bcjr8\" (UID: \"65b55564-5b4c-48e4-958c-33a815964af3\") " pod="openstack/nova-cell1-db-create-bcjr8" Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.285006 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4smq\" (UniqueName: \"kubernetes.io/projected/65b55564-5b4c-48e4-958c-33a815964af3-kube-api-access-c4smq\") pod \"nova-cell1-db-create-bcjr8\" (UID: \"65b55564-5b4c-48e4-958c-33a815964af3\") " pod="openstack/nova-cell1-db-create-bcjr8" Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.286924 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65b55564-5b4c-48e4-958c-33a815964af3-operator-scripts\") pod \"nova-cell1-db-create-bcjr8\" (UID: \"65b55564-5b4c-48e4-958c-33a815964af3\") " pod="openstack/nova-cell1-db-create-bcjr8" Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.287099 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-dl84z" Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.308388 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4smq\" (UniqueName: \"kubernetes.io/projected/65b55564-5b4c-48e4-958c-33a815964af3-kube-api-access-c4smq\") pod \"nova-cell1-db-create-bcjr8\" (UID: \"65b55564-5b4c-48e4-958c-33a815964af3\") " pod="openstack/nova-cell1-db-create-bcjr8" Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.386472 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-42b8-account-create-update-tm77r" Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.386470 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd771d42-12ea-494a-805a-90133b43e0c3-operator-scripts\") pod \"nova-cell1-c85b-account-create-update-7kdb6\" (UID: \"cd771d42-12ea-494a-805a-90133b43e0c3\") " pod="openstack/nova-cell1-c85b-account-create-update-7kdb6" Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.386754 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2deb97a0-5661-4b3f-b0b1-84162e452fa2-operator-scripts\") pod \"nova-cell0-5029-account-create-update-h2wlb\" (UID: \"2deb97a0-5661-4b3f-b0b1-84162e452fa2\") " pod="openstack/nova-cell0-5029-account-create-update-h2wlb" Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.386795 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtjff\" (UniqueName: \"kubernetes.io/projected/2deb97a0-5661-4b3f-b0b1-84162e452fa2-kube-api-access-rtjff\") pod \"nova-cell0-5029-account-create-update-h2wlb\" (UID: \"2deb97a0-5661-4b3f-b0b1-84162e452fa2\") " pod="openstack/nova-cell0-5029-account-create-update-h2wlb" Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.386850 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mz746\" (UniqueName: \"kubernetes.io/projected/cd771d42-12ea-494a-805a-90133b43e0c3-kube-api-access-mz746\") pod \"nova-cell1-c85b-account-create-update-7kdb6\" (UID: \"cd771d42-12ea-494a-805a-90133b43e0c3\") " pod="openstack/nova-cell1-c85b-account-create-update-7kdb6" Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.388261 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/cd771d42-12ea-494a-805a-90133b43e0c3-operator-scripts\") pod \"nova-cell1-c85b-account-create-update-7kdb6\" (UID: \"cd771d42-12ea-494a-805a-90133b43e0c3\") " pod="openstack/nova-cell1-c85b-account-create-update-7kdb6" Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.394917 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2deb97a0-5661-4b3f-b0b1-84162e452fa2-operator-scripts\") pod \"nova-cell0-5029-account-create-update-h2wlb\" (UID: \"2deb97a0-5661-4b3f-b0b1-84162e452fa2\") " pod="openstack/nova-cell0-5029-account-create-update-h2wlb" Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.412567 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-bcjr8" Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.420252 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mz746\" (UniqueName: \"kubernetes.io/projected/cd771d42-12ea-494a-805a-90133b43e0c3-kube-api-access-mz746\") pod \"nova-cell1-c85b-account-create-update-7kdb6\" (UID: \"cd771d42-12ea-494a-805a-90133b43e0c3\") " pod="openstack/nova-cell1-c85b-account-create-update-7kdb6" Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.480957 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-65776456b6-g6gkf" Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.489409 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-c85b-account-create-update-7kdb6" Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.547883 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtjff\" (UniqueName: \"kubernetes.io/projected/2deb97a0-5661-4b3f-b0b1-84162e452fa2-kube-api-access-rtjff\") pod \"nova-cell0-5029-account-create-update-h2wlb\" (UID: \"2deb97a0-5661-4b3f-b0b1-84162e452fa2\") " pod="openstack/nova-cell0-5029-account-create-update-h2wlb" Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.594560 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4282d7e-76a0-493b-b2c6-0d954d4bed7a-logs\") pod \"e4282d7e-76a0-493b-b2c6-0d954d4bed7a\" (UID: \"e4282d7e-76a0-493b-b2c6-0d954d4bed7a\") " Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.594669 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4282d7e-76a0-493b-b2c6-0d954d4bed7a-horizon-tls-certs\") pod \"e4282d7e-76a0-493b-b2c6-0d954d4bed7a\" (UID: \"e4282d7e-76a0-493b-b2c6-0d954d4bed7a\") " Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.594716 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e4282d7e-76a0-493b-b2c6-0d954d4bed7a-horizon-secret-key\") pod \"e4282d7e-76a0-493b-b2c6-0d954d4bed7a\" (UID: \"e4282d7e-76a0-493b-b2c6-0d954d4bed7a\") " Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.594782 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e4282d7e-76a0-493b-b2c6-0d954d4bed7a-scripts\") pod \"e4282d7e-76a0-493b-b2c6-0d954d4bed7a\" (UID: \"e4282d7e-76a0-493b-b2c6-0d954d4bed7a\") " Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.594833 4826 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4282d7e-76a0-493b-b2c6-0d954d4bed7a-combined-ca-bundle\") pod \"e4282d7e-76a0-493b-b2c6-0d954d4bed7a\" (UID: \"e4282d7e-76a0-493b-b2c6-0d954d4bed7a\") " Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.594860 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qwrd\" (UniqueName: \"kubernetes.io/projected/e4282d7e-76a0-493b-b2c6-0d954d4bed7a-kube-api-access-5qwrd\") pod \"e4282d7e-76a0-493b-b2c6-0d954d4bed7a\" (UID: \"e4282d7e-76a0-493b-b2c6-0d954d4bed7a\") " Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.594945 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e4282d7e-76a0-493b-b2c6-0d954d4bed7a-config-data\") pod \"e4282d7e-76a0-493b-b2c6-0d954d4bed7a\" (UID: \"e4282d7e-76a0-493b-b2c6-0d954d4bed7a\") " Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.600538 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4282d7e-76a0-493b-b2c6-0d954d4bed7a-logs" (OuterVolumeSpecName: "logs") pod "e4282d7e-76a0-493b-b2c6-0d954d4bed7a" (UID: "e4282d7e-76a0-493b-b2c6-0d954d4bed7a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.604162 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4282d7e-76a0-493b-b2c6-0d954d4bed7a-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "e4282d7e-76a0-493b-b2c6-0d954d4bed7a" (UID: "e4282d7e-76a0-493b-b2c6-0d954d4bed7a"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.614137 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4282d7e-76a0-493b-b2c6-0d954d4bed7a-kube-api-access-5qwrd" (OuterVolumeSpecName: "kube-api-access-5qwrd") pod "e4282d7e-76a0-493b-b2c6-0d954d4bed7a" (UID: "e4282d7e-76a0-493b-b2c6-0d954d4bed7a"). InnerVolumeSpecName "kube-api-access-5qwrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.676809 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4282d7e-76a0-493b-b2c6-0d954d4bed7a-config-data" (OuterVolumeSpecName: "config-data") pod "e4282d7e-76a0-493b-b2c6-0d954d4bed7a" (UID: "e4282d7e-76a0-493b-b2c6-0d954d4bed7a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.697778 4826 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e4282d7e-76a0-493b-b2c6-0d954d4bed7a-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.697806 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qwrd\" (UniqueName: \"kubernetes.io/projected/e4282d7e-76a0-493b-b2c6-0d954d4bed7a-kube-api-access-5qwrd\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.697815 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e4282d7e-76a0-493b-b2c6-0d954d4bed7a-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.697823 4826 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4282d7e-76a0-493b-b2c6-0d954d4bed7a-logs\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.707206 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4282d7e-76a0-493b-b2c6-0d954d4bed7a-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "e4282d7e-76a0-493b-b2c6-0d954d4bed7a" (UID: "e4282d7e-76a0-493b-b2c6-0d954d4bed7a"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.726727 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4282d7e-76a0-493b-b2c6-0d954d4bed7a-scripts" (OuterVolumeSpecName: "scripts") pod "e4282d7e-76a0-493b-b2c6-0d954d4bed7a" (UID: "e4282d7e-76a0-493b-b2c6-0d954d4bed7a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.742697 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4282d7e-76a0-493b-b2c6-0d954d4bed7a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e4282d7e-76a0-493b-b2c6-0d954d4bed7a" (UID: "e4282d7e-76a0-493b-b2c6-0d954d4bed7a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.768452 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-5029-account-create-update-h2wlb" Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.799135 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e4282d7e-76a0-493b-b2c6-0d954d4bed7a-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.799163 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4282d7e-76a0-493b-b2c6-0d954d4bed7a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.799175 4826 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4282d7e-76a0-493b-b2c6-0d954d4bed7a-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:35 crc kubenswrapper[4826]: W0131 07:54:35.851030 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3626f21a_c324_4d3f_9aad_3248a07896da.slice/crio-a4e0527e616b2e59d031eb93ef1de72993befb89e4bf14aebab764c5c9d6785d WatchSource:0}: Error finding container a4e0527e616b2e59d031eb93ef1de72993befb89e4bf14aebab764c5c9d6785d: Status 404 returned error can't find the container with id a4e0527e616b2e59d031eb93ef1de72993befb89e4bf14aebab764c5c9d6785d Jan 31 07:54:35 crc kubenswrapper[4826]: I0131 07:54:35.853145 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-k75dt"] Jan 31 07:54:36 crc kubenswrapper[4826]: I0131 07:54:36.119500 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-dl84z"] Jan 31 07:54:36 crc kubenswrapper[4826]: W0131 07:54:36.140342 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf81eab18_a2c9_4435_aaad_98f5c7666fb2.slice/crio-e41774d6f95b51689600cf85d4478921b27591f109a015cb850642d26660bf23 WatchSource:0}: Error finding container e41774d6f95b51689600cf85d4478921b27591f109a015cb850642d26660bf23: Status 404 returned error can't find the container with id e41774d6f95b51689600cf85d4478921b27591f109a015cb850642d26660bf23 Jan 31 07:54:36 crc kubenswrapper[4826]: I0131 07:54:36.194331 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-k75dt" event={"ID":"3626f21a-c324-4d3f-9aad-3248a07896da","Type":"ContainerStarted","Data":"bbc42394c4dd588b491417a8215cd6b56107c6f164776f0828370c82e7beaea1"} Jan 31 07:54:36 crc kubenswrapper[4826]: I0131 07:54:36.194382 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-k75dt" event={"ID":"3626f21a-c324-4d3f-9aad-3248a07896da","Type":"ContainerStarted","Data":"a4e0527e616b2e59d031eb93ef1de72993befb89e4bf14aebab764c5c9d6785d"} Jan 31 07:54:36 crc kubenswrapper[4826]: I0131 07:54:36.207083 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b2e16f67-d80b-4d2f-9bf0-0ce081212368","Type":"ContainerStarted","Data":"c1dc15047edf8bf6eb05ef7594de551e5eda01dc674929f4b9c76e15100376b6"} Jan 31 07:54:36 crc kubenswrapper[4826]: I0131 07:54:36.207211 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 31 07:54:36 crc kubenswrapper[4826]: I0131 07:54:36.227216 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-dl84z" 
event={"ID":"f81eab18-a2c9-4435-aaad-98f5c7666fb2","Type":"ContainerStarted","Data":"e41774d6f95b51689600cf85d4478921b27591f109a015cb850642d26660bf23"} Jan 31 07:54:36 crc kubenswrapper[4826]: I0131 07:54:36.244164 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-k75dt" podStartSLOduration=2.2441454419999998 podStartE2EDuration="2.244145442s" podCreationTimestamp="2026-01-31 07:54:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:54:36.22539113 +0000 UTC m=+1108.079277489" watchObservedRunningTime="2026-01-31 07:54:36.244145442 +0000 UTC m=+1108.098031801" Jan 31 07:54:36 crc kubenswrapper[4826]: I0131 07:54:36.244911 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-65776456b6-g6gkf" event={"ID":"e4282d7e-76a0-493b-b2c6-0d954d4bed7a","Type":"ContainerDied","Data":"f2158499c51fafb01f7ae88f36a47962060ab8785eed124c0b4a9a9cc6254f3c"} Jan 31 07:54:36 crc kubenswrapper[4826]: I0131 07:54:36.248186 4826 scope.go:117] "RemoveContainer" containerID="dddfa3da3163de69516aa5a255f94e47556a43a50db12dac45b8f563367d6e90" Jan 31 07:54:36 crc kubenswrapper[4826]: I0131 07:54:36.245007 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-65776456b6-g6gkf" Jan 31 07:54:36 crc kubenswrapper[4826]: I0131 07:54:36.286830 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.28680627 podStartE2EDuration="3.28680627s" podCreationTimestamp="2026-01-31 07:54:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:54:36.248071323 +0000 UTC m=+1108.101957682" watchObservedRunningTime="2026-01-31 07:54:36.28680627 +0000 UTC m=+1108.140692629" Jan 31 07:54:36 crc kubenswrapper[4826]: I0131 07:54:36.331274 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-bcjr8"] Jan 31 07:54:36 crc kubenswrapper[4826]: I0131 07:54:36.348631 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-c85b-account-create-update-7kdb6"] Jan 31 07:54:36 crc kubenswrapper[4826]: I0131 07:54:36.385234 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-42b8-account-create-update-tm77r"] Jan 31 07:54:36 crc kubenswrapper[4826]: I0131 07:54:36.417851 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-65776456b6-g6gkf"] Jan 31 07:54:36 crc kubenswrapper[4826]: I0131 07:54:36.429670 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-65776456b6-g6gkf"] Jan 31 07:54:36 crc kubenswrapper[4826]: I0131 07:54:36.531559 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5029-account-create-update-h2wlb"] Jan 31 07:54:36 crc kubenswrapper[4826]: I0131 07:54:36.561097 4826 scope.go:117] "RemoveContainer" containerID="9464619301db5a0a09586180fbf9f2778935545892902aad640d046b21fb8b23" Jan 31 07:54:36 crc kubenswrapper[4826]: I0131 07:54:36.823464 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4282d7e-76a0-493b-b2c6-0d954d4bed7a" path="/var/lib/kubelet/pods/e4282d7e-76a0-493b-b2c6-0d954d4bed7a/volumes" Jan 31 07:54:37 crc kubenswrapper[4826]: I0131 07:54:37.291245 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-c85b-account-create-update-7kdb6" event={"ID":"cd771d42-12ea-494a-805a-90133b43e0c3","Type":"ContainerStarted","Data":"ff4d1fbd5b1b5701a4d74c3fb7fe2d270fa59e6988caf318f4b6fcd175da6b0d"} Jan 31 07:54:37 crc kubenswrapper[4826]: I0131 07:54:37.291608 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c85b-account-create-update-7kdb6" event={"ID":"cd771d42-12ea-494a-805a-90133b43e0c3","Type":"ContainerStarted","Data":"528f6980ccba4091e56386cb2f2ab3ea8239097ec0b42093787f9a5dd29c44c5"} Jan 31 07:54:37 crc kubenswrapper[4826]: I0131 07:54:37.313339 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-bcjr8" event={"ID":"65b55564-5b4c-48e4-958c-33a815964af3","Type":"ContainerStarted","Data":"d39e722ecb21836ee11f3d21c178c1e97d5c361449c5703da2fc3cce46eab4c2"} Jan 31 07:54:37 crc kubenswrapper[4826]: I0131 07:54:37.317444 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-42b8-account-create-update-tm77r" event={"ID":"24c647ed-ef58-46b5-a994-0cff67c161cc","Type":"ContainerStarted","Data":"18f0b800ce8addb3708e02e5fe72bfa09726ffc4b63bb36cdb5098bd2d0e3390"} Jan 31 07:54:37 crc kubenswrapper[4826]: I0131 07:54:37.317495 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-42b8-account-create-update-tm77r" event={"ID":"24c647ed-ef58-46b5-a994-0cff67c161cc","Type":"ContainerStarted","Data":"a16c26900308324fe77b68f60dcb3b7e9fae9bb9f3441e2a9a86e9bd7b29f53d"} Jan 31 07:54:37 crc kubenswrapper[4826]: I0131 07:54:37.326653 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5029-account-create-update-h2wlb" event={"ID":"2deb97a0-5661-4b3f-b0b1-84162e452fa2","Type":"ContainerStarted","Data":"e375b0197ffa51a3a7a69f241c55205daa768004278c6e241098c50922fa0923"} Jan 31 07:54:37 crc kubenswrapper[4826]: I0131 07:54:37.326719 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5029-account-create-update-h2wlb" event={"ID":"2deb97a0-5661-4b3f-b0b1-84162e452fa2","Type":"ContainerStarted","Data":"9f8b2e0ef2692ebc24a33d274f051c5886162776863e9ed95f1a4a3bcb2337bf"} Jan 31 07:54:37 crc kubenswrapper[4826]: I0131 07:54:37.329995 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-dl84z" event={"ID":"f81eab18-a2c9-4435-aaad-98f5c7666fb2","Type":"ContainerStarted","Data":"1b281edfb48cfd659fc2422d95860172600ff946e274e48b862976d33755c91e"} Jan 31 07:54:37 crc kubenswrapper[4826]: I0131 07:54:37.355340 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-dl84z" podStartSLOduration=3.355317607 podStartE2EDuration="3.355317607s" podCreationTimestamp="2026-01-31 07:54:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:54:37.344544842 +0000 UTC m=+1109.198431201" watchObservedRunningTime="2026-01-31 07:54:37.355317607 +0000 UTC m=+1109.209203966" Jan 31 07:54:38 crc kubenswrapper[4826]: I0131 07:54:38.338760 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-bcjr8" event={"ID":"65b55564-5b4c-48e4-958c-33a815964af3","Type":"ContainerStarted","Data":"b65790feb94c49780bfc5da71cf55a52ddc65d6e856154463be13e8c65640638"} Jan 31 07:54:38 crc kubenswrapper[4826]: I0131 07:54:38.349671 4826 generic.go:334] "Generic (PLEG): container finished" podID="3626f21a-c324-4d3f-9aad-3248a07896da" 
containerID="bbc42394c4dd588b491417a8215cd6b56107c6f164776f0828370c82e7beaea1" exitCode=0 Jan 31 07:54:38 crc kubenswrapper[4826]: I0131 07:54:38.350547 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-k75dt" event={"ID":"3626f21a-c324-4d3f-9aad-3248a07896da","Type":"ContainerDied","Data":"bbc42394c4dd588b491417a8215cd6b56107c6f164776f0828370c82e7beaea1"} Jan 31 07:54:38 crc kubenswrapper[4826]: I0131 07:54:38.363247 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-bcjr8" podStartSLOduration=4.363225928 podStartE2EDuration="4.363225928s" podCreationTimestamp="2026-01-31 07:54:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:54:38.360318455 +0000 UTC m=+1110.214204814" watchObservedRunningTime="2026-01-31 07:54:38.363225928 +0000 UTC m=+1110.217112277" Jan 31 07:54:38 crc kubenswrapper[4826]: I0131 07:54:38.389441 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-42b8-account-create-update-tm77r" podStartSLOduration=4.38942284 podStartE2EDuration="4.38942284s" podCreationTimestamp="2026-01-31 07:54:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:54:38.37811562 +0000 UTC m=+1110.232001979" watchObservedRunningTime="2026-01-31 07:54:38.38942284 +0000 UTC m=+1110.243309199" Jan 31 07:54:38 crc kubenswrapper[4826]: I0131 07:54:38.405237 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-5029-account-create-update-h2wlb" podStartSLOduration=3.405221107 podStartE2EDuration="3.405221107s" podCreationTimestamp="2026-01-31 07:54:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:54:38.401540633 +0000 UTC m=+1110.255426992" watchObservedRunningTime="2026-01-31 07:54:38.405221107 +0000 UTC m=+1110.259107466" Jan 31 07:54:38 crc kubenswrapper[4826]: I0131 07:54:38.433893 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-c85b-account-create-update-7kdb6" podStartSLOduration=3.433868649 podStartE2EDuration="3.433868649s" podCreationTimestamp="2026-01-31 07:54:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:54:38.428021173 +0000 UTC m=+1110.281907532" watchObservedRunningTime="2026-01-31 07:54:38.433868649 +0000 UTC m=+1110.287755008" Jan 31 07:54:38 crc kubenswrapper[4826]: I0131 07:54:38.565098 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58db5546cc-w9dvm" Jan 31 07:54:38 crc kubenswrapper[4826]: I0131 07:54:38.627367 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-c5gj8"] Jan 31 07:54:38 crc kubenswrapper[4826]: I0131 07:54:38.627660 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-869f779d85-c5gj8" podUID="e3f57b07-aade-4cd7-9ebf-b374396665b7" containerName="dnsmasq-dns" containerID="cri-o://94b18e8e8d4be8b3db56ba50737115d0728a606a17639b3d9e51a46cd703c362" gracePeriod=10 Jan 31 07:54:38 crc kubenswrapper[4826]: I0131 07:54:38.725110 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openstack/cinder-scheduler-0" Jan 31 07:54:38 crc kubenswrapper[4826]: I0131 07:54:38.797541 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 07:54:39 crc kubenswrapper[4826]: I0131 07:54:39.362377 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-869f779d85-c5gj8" Jan 31 07:54:39 crc kubenswrapper[4826]: I0131 07:54:39.363384 4826 generic.go:334] "Generic (PLEG): container finished" podID="f81eab18-a2c9-4435-aaad-98f5c7666fb2" containerID="1b281edfb48cfd659fc2422d95860172600ff946e274e48b862976d33755c91e" exitCode=0 Jan 31 07:54:39 crc kubenswrapper[4826]: I0131 07:54:39.363434 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-dl84z" event={"ID":"f81eab18-a2c9-4435-aaad-98f5c7666fb2","Type":"ContainerDied","Data":"1b281edfb48cfd659fc2422d95860172600ff946e274e48b862976d33755c91e"} Jan 31 07:54:39 crc kubenswrapper[4826]: I0131 07:54:39.366367 4826 generic.go:334] "Generic (PLEG): container finished" podID="cd771d42-12ea-494a-805a-90133b43e0c3" containerID="ff4d1fbd5b1b5701a4d74c3fb7fe2d270fa59e6988caf318f4b6fcd175da6b0d" exitCode=0 Jan 31 07:54:39 crc kubenswrapper[4826]: I0131 07:54:39.366499 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c85b-account-create-update-7kdb6" event={"ID":"cd771d42-12ea-494a-805a-90133b43e0c3","Type":"ContainerDied","Data":"ff4d1fbd5b1b5701a4d74c3fb7fe2d270fa59e6988caf318f4b6fcd175da6b0d"} Jan 31 07:54:39 crc kubenswrapper[4826]: I0131 07:54:39.368293 4826 generic.go:334] "Generic (PLEG): container finished" podID="65b55564-5b4c-48e4-958c-33a815964af3" containerID="b65790feb94c49780bfc5da71cf55a52ddc65d6e856154463be13e8c65640638" exitCode=0 Jan 31 07:54:39 crc kubenswrapper[4826]: I0131 07:54:39.368361 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-bcjr8" event={"ID":"65b55564-5b4c-48e4-958c-33a815964af3","Type":"ContainerDied","Data":"b65790feb94c49780bfc5da71cf55a52ddc65d6e856154463be13e8c65640638"} Jan 31 07:54:39 crc kubenswrapper[4826]: I0131 07:54:39.371510 4826 generic.go:334] "Generic (PLEG): container finished" podID="24c647ed-ef58-46b5-a994-0cff67c161cc" containerID="18f0b800ce8addb3708e02e5fe72bfa09726ffc4b63bb36cdb5098bd2d0e3390" exitCode=0 Jan 31 07:54:39 crc kubenswrapper[4826]: I0131 07:54:39.371558 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-42b8-account-create-update-tm77r" event={"ID":"24c647ed-ef58-46b5-a994-0cff67c161cc","Type":"ContainerDied","Data":"18f0b800ce8addb3708e02e5fe72bfa09726ffc4b63bb36cdb5098bd2d0e3390"} Jan 31 07:54:39 crc kubenswrapper[4826]: I0131 07:54:39.373939 4826 generic.go:334] "Generic (PLEG): container finished" podID="e3f57b07-aade-4cd7-9ebf-b374396665b7" containerID="94b18e8e8d4be8b3db56ba50737115d0728a606a17639b3d9e51a46cd703c362" exitCode=0 Jan 31 07:54:39 crc kubenswrapper[4826]: I0131 07:54:39.374075 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-c5gj8" event={"ID":"e3f57b07-aade-4cd7-9ebf-b374396665b7","Type":"ContainerDied","Data":"94b18e8e8d4be8b3db56ba50737115d0728a606a17639b3d9e51a46cd703c362"} Jan 31 07:54:39 crc kubenswrapper[4826]: I0131 07:54:39.374097 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-c5gj8" 
event={"ID":"e3f57b07-aade-4cd7-9ebf-b374396665b7","Type":"ContainerDied","Data":"3f95802fa9c2d6351b7c9562ae46e2f18990ac72a387331f70638c6b70692257"} Jan 31 07:54:39 crc kubenswrapper[4826]: I0131 07:54:39.374117 4826 scope.go:117] "RemoveContainer" containerID="94b18e8e8d4be8b3db56ba50737115d0728a606a17639b3d9e51a46cd703c362" Jan 31 07:54:39 crc kubenswrapper[4826]: I0131 07:54:39.374215 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-869f779d85-c5gj8" Jan 31 07:54:39 crc kubenswrapper[4826]: I0131 07:54:39.381580 4826 generic.go:334] "Generic (PLEG): container finished" podID="2deb97a0-5661-4b3f-b0b1-84162e452fa2" containerID="e375b0197ffa51a3a7a69f241c55205daa768004278c6e241098c50922fa0923" exitCode=0 Jan 31 07:54:39 crc kubenswrapper[4826]: I0131 07:54:39.381781 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ea3eca99-d298-4fa1-9c82-fb58164ff654" containerName="cinder-scheduler" containerID="cri-o://a0915a22c10a1e1b2d448100a224323ec4c7f0fea512d37b0ac7c5c0716ca7d7" gracePeriod=30 Jan 31 07:54:39 crc kubenswrapper[4826]: I0131 07:54:39.382152 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5029-account-create-update-h2wlb" event={"ID":"2deb97a0-5661-4b3f-b0b1-84162e452fa2","Type":"ContainerDied","Data":"e375b0197ffa51a3a7a69f241c55205daa768004278c6e241098c50922fa0923"} Jan 31 07:54:39 crc kubenswrapper[4826]: I0131 07:54:39.382373 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ea3eca99-d298-4fa1-9c82-fb58164ff654" containerName="probe" containerID="cri-o://29fd3ff41e9180ce0500f007b0ad64d6fe444e9ac0f56be68be307d9ec893169" gracePeriod=30 Jan 31 07:54:39 crc kubenswrapper[4826]: I0131 07:54:39.457440 4826 scope.go:117] "RemoveContainer" containerID="841a501e10ace4bfde1565ac372b39983c2a36ac6ecaff89fed50ed556b0bc46" Jan 31 07:54:39 crc kubenswrapper[4826]: I0131 07:54:39.471533 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e3f57b07-aade-4cd7-9ebf-b374396665b7-ovsdbserver-sb\") pod \"e3f57b07-aade-4cd7-9ebf-b374396665b7\" (UID: \"e3f57b07-aade-4cd7-9ebf-b374396665b7\") " Jan 31 07:54:39 crc kubenswrapper[4826]: I0131 07:54:39.471603 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrrmd\" (UniqueName: \"kubernetes.io/projected/e3f57b07-aade-4cd7-9ebf-b374396665b7-kube-api-access-jrrmd\") pod \"e3f57b07-aade-4cd7-9ebf-b374396665b7\" (UID: \"e3f57b07-aade-4cd7-9ebf-b374396665b7\") " Jan 31 07:54:39 crc kubenswrapper[4826]: I0131 07:54:39.471679 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e3f57b07-aade-4cd7-9ebf-b374396665b7-ovsdbserver-nb\") pod \"e3f57b07-aade-4cd7-9ebf-b374396665b7\" (UID: \"e3f57b07-aade-4cd7-9ebf-b374396665b7\") " Jan 31 07:54:39 crc kubenswrapper[4826]: I0131 07:54:39.471749 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3f57b07-aade-4cd7-9ebf-b374396665b7-config\") pod \"e3f57b07-aade-4cd7-9ebf-b374396665b7\" (UID: \"e3f57b07-aade-4cd7-9ebf-b374396665b7\") " Jan 31 07:54:39 crc kubenswrapper[4826]: I0131 07:54:39.471772 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/e3f57b07-aade-4cd7-9ebf-b374396665b7-dns-svc\") pod \"e3f57b07-aade-4cd7-9ebf-b374396665b7\" (UID: \"e3f57b07-aade-4cd7-9ebf-b374396665b7\") " Jan 31 07:54:39 crc kubenswrapper[4826]: I0131 07:54:39.485796 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3f57b07-aade-4cd7-9ebf-b374396665b7-kube-api-access-jrrmd" (OuterVolumeSpecName: "kube-api-access-jrrmd") pod "e3f57b07-aade-4cd7-9ebf-b374396665b7" (UID: "e3f57b07-aade-4cd7-9ebf-b374396665b7"). InnerVolumeSpecName "kube-api-access-jrrmd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:54:39 crc kubenswrapper[4826]: I0131 07:54:39.521347 4826 scope.go:117] "RemoveContainer" containerID="94b18e8e8d4be8b3db56ba50737115d0728a606a17639b3d9e51a46cd703c362" Jan 31 07:54:39 crc kubenswrapper[4826]: E0131 07:54:39.522321 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94b18e8e8d4be8b3db56ba50737115d0728a606a17639b3d9e51a46cd703c362\": container with ID starting with 94b18e8e8d4be8b3db56ba50737115d0728a606a17639b3d9e51a46cd703c362 not found: ID does not exist" containerID="94b18e8e8d4be8b3db56ba50737115d0728a606a17639b3d9e51a46cd703c362" Jan 31 07:54:39 crc kubenswrapper[4826]: I0131 07:54:39.522354 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94b18e8e8d4be8b3db56ba50737115d0728a606a17639b3d9e51a46cd703c362"} err="failed to get container status \"94b18e8e8d4be8b3db56ba50737115d0728a606a17639b3d9e51a46cd703c362\": rpc error: code = NotFound desc = could not find container \"94b18e8e8d4be8b3db56ba50737115d0728a606a17639b3d9e51a46cd703c362\": container with ID starting with 94b18e8e8d4be8b3db56ba50737115d0728a606a17639b3d9e51a46cd703c362 not found: ID does not exist" Jan 31 07:54:39 crc kubenswrapper[4826]: I0131 07:54:39.522379 4826 scope.go:117] "RemoveContainer" containerID="841a501e10ace4bfde1565ac372b39983c2a36ac6ecaff89fed50ed556b0bc46" Jan 31 07:54:39 crc kubenswrapper[4826]: E0131 07:54:39.522690 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"841a501e10ace4bfde1565ac372b39983c2a36ac6ecaff89fed50ed556b0bc46\": container with ID starting with 841a501e10ace4bfde1565ac372b39983c2a36ac6ecaff89fed50ed556b0bc46 not found: ID does not exist" containerID="841a501e10ace4bfde1565ac372b39983c2a36ac6ecaff89fed50ed556b0bc46" Jan 31 07:54:39 crc kubenswrapper[4826]: I0131 07:54:39.522710 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"841a501e10ace4bfde1565ac372b39983c2a36ac6ecaff89fed50ed556b0bc46"} err="failed to get container status \"841a501e10ace4bfde1565ac372b39983c2a36ac6ecaff89fed50ed556b0bc46\": rpc error: code = NotFound desc = could not find container \"841a501e10ace4bfde1565ac372b39983c2a36ac6ecaff89fed50ed556b0bc46\": container with ID starting with 841a501e10ace4bfde1565ac372b39983c2a36ac6ecaff89fed50ed556b0bc46 not found: ID does not exist" Jan 31 07:54:39 crc kubenswrapper[4826]: I0131 07:54:39.534632 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3f57b07-aade-4cd7-9ebf-b374396665b7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e3f57b07-aade-4cd7-9ebf-b374396665b7" (UID: "e3f57b07-aade-4cd7-9ebf-b374396665b7"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:54:39 crc kubenswrapper[4826]: I0131 07:54:39.544980 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3f57b07-aade-4cd7-9ebf-b374396665b7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e3f57b07-aade-4cd7-9ebf-b374396665b7" (UID: "e3f57b07-aade-4cd7-9ebf-b374396665b7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:54:39 crc kubenswrapper[4826]: I0131 07:54:39.579307 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e3f57b07-aade-4cd7-9ebf-b374396665b7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:39 crc kubenswrapper[4826]: I0131 07:54:39.579342 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrrmd\" (UniqueName: \"kubernetes.io/projected/e3f57b07-aade-4cd7-9ebf-b374396665b7-kube-api-access-jrrmd\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:39 crc kubenswrapper[4826]: I0131 07:54:39.579355 4826 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e3f57b07-aade-4cd7-9ebf-b374396665b7-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:39 crc kubenswrapper[4826]: I0131 07:54:39.580558 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3f57b07-aade-4cd7-9ebf-b374396665b7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e3f57b07-aade-4cd7-9ebf-b374396665b7" (UID: "e3f57b07-aade-4cd7-9ebf-b374396665b7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:54:39 crc kubenswrapper[4826]: I0131 07:54:39.609375 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3f57b07-aade-4cd7-9ebf-b374396665b7-config" (OuterVolumeSpecName: "config") pod "e3f57b07-aade-4cd7-9ebf-b374396665b7" (UID: "e3f57b07-aade-4cd7-9ebf-b374396665b7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:54:39 crc kubenswrapper[4826]: I0131 07:54:39.665046 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:54:39 crc kubenswrapper[4826]: I0131 07:54:39.665450 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0591b228-aa43-46bc-ba04-0b5b6ddd4bbc" containerName="ceilometer-central-agent" containerID="cri-o://b99c0412e30eee04b65ba5f993f1ad996bf373e19499c1e2a3ace1b97544888b" gracePeriod=30 Jan 31 07:54:39 crc kubenswrapper[4826]: I0131 07:54:39.665633 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0591b228-aa43-46bc-ba04-0b5b6ddd4bbc" containerName="proxy-httpd" containerID="cri-o://755773ee0355f116f23699260225128b744e904378af4b303b43c53a82a47192" gracePeriod=30 Jan 31 07:54:39 crc kubenswrapper[4826]: I0131 07:54:39.666039 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0591b228-aa43-46bc-ba04-0b5b6ddd4bbc" containerName="sg-core" containerID="cri-o://c8894ddb73dfeb65241b46621dba5a125701cfb3ef2fef12b805a25a21ca93ef" gracePeriod=30 Jan 31 07:54:39 crc kubenswrapper[4826]: I0131 07:54:39.666119 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0591b228-aa43-46bc-ba04-0b5b6ddd4bbc" containerName="ceilometer-notification-agent" containerID="cri-o://7dbba72c9c71413ec934c77101c62cebd531e324c4cfe5d4ff980c78b356abd1" gracePeriod=30 Jan 31 07:54:39 crc kubenswrapper[4826]: I0131 07:54:39.683309 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e3f57b07-aade-4cd7-9ebf-b374396665b7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:39 crc kubenswrapper[4826]: I0131 07:54:39.683573 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3f57b07-aade-4cd7-9ebf-b374396665b7-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:39 crc kubenswrapper[4826]: I0131 07:54:39.717101 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-c5gj8"] Jan 31 07:54:39 crc kubenswrapper[4826]: I0131 07:54:39.728082 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-c5gj8"] Jan 31 07:54:39 crc kubenswrapper[4826]: I0131 07:54:39.736895 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-k75dt" Jan 31 07:54:39 crc kubenswrapper[4826]: I0131 07:54:39.784500 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3626f21a-c324-4d3f-9aad-3248a07896da-operator-scripts\") pod \"3626f21a-c324-4d3f-9aad-3248a07896da\" (UID: \"3626f21a-c324-4d3f-9aad-3248a07896da\") " Jan 31 07:54:39 crc kubenswrapper[4826]: I0131 07:54:39.784618 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6s4s8\" (UniqueName: \"kubernetes.io/projected/3626f21a-c324-4d3f-9aad-3248a07896da-kube-api-access-6s4s8\") pod \"3626f21a-c324-4d3f-9aad-3248a07896da\" (UID: \"3626f21a-c324-4d3f-9aad-3248a07896da\") " Jan 31 07:54:39 crc kubenswrapper[4826]: I0131 07:54:39.786452 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3626f21a-c324-4d3f-9aad-3248a07896da-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3626f21a-c324-4d3f-9aad-3248a07896da" (UID: "3626f21a-c324-4d3f-9aad-3248a07896da"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:54:39 crc kubenswrapper[4826]: I0131 07:54:39.790880 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3626f21a-c324-4d3f-9aad-3248a07896da-kube-api-access-6s4s8" (OuterVolumeSpecName: "kube-api-access-6s4s8") pod "3626f21a-c324-4d3f-9aad-3248a07896da" (UID: "3626f21a-c324-4d3f-9aad-3248a07896da"). InnerVolumeSpecName "kube-api-access-6s4s8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:54:39 crc kubenswrapper[4826]: I0131 07:54:39.886233 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6s4s8\" (UniqueName: \"kubernetes.io/projected/3626f21a-c324-4d3f-9aad-3248a07896da-kube-api-access-6s4s8\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:39 crc kubenswrapper[4826]: I0131 07:54:39.886262 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3626f21a-c324-4d3f-9aad-3248a07896da-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:40 crc kubenswrapper[4826]: I0131 07:54:40.389746 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-k75dt" event={"ID":"3626f21a-c324-4d3f-9aad-3248a07896da","Type":"ContainerDied","Data":"a4e0527e616b2e59d031eb93ef1de72993befb89e4bf14aebab764c5c9d6785d"} Jan 31 07:54:40 crc kubenswrapper[4826]: I0131 07:54:40.389783 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4e0527e616b2e59d031eb93ef1de72993befb89e4bf14aebab764c5c9d6785d" Jan 31 07:54:40 crc kubenswrapper[4826]: I0131 07:54:40.389854 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-k75dt" Jan 31 07:54:40 crc kubenswrapper[4826]: I0131 07:54:40.405187 4826 generic.go:334] "Generic (PLEG): container finished" podID="0591b228-aa43-46bc-ba04-0b5b6ddd4bbc" containerID="755773ee0355f116f23699260225128b744e904378af4b303b43c53a82a47192" exitCode=0 Jan 31 07:54:40 crc kubenswrapper[4826]: I0131 07:54:40.405214 4826 generic.go:334] "Generic (PLEG): container finished" podID="0591b228-aa43-46bc-ba04-0b5b6ddd4bbc" containerID="c8894ddb73dfeb65241b46621dba5a125701cfb3ef2fef12b805a25a21ca93ef" exitCode=2 Jan 31 07:54:40 crc kubenswrapper[4826]: I0131 07:54:40.405222 4826 generic.go:334] "Generic (PLEG): container finished" podID="0591b228-aa43-46bc-ba04-0b5b6ddd4bbc" containerID="b99c0412e30eee04b65ba5f993f1ad996bf373e19499c1e2a3ace1b97544888b" exitCode=0 Jan 31 07:54:40 crc kubenswrapper[4826]: I0131 07:54:40.405273 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0591b228-aa43-46bc-ba04-0b5b6ddd4bbc","Type":"ContainerDied","Data":"755773ee0355f116f23699260225128b744e904378af4b303b43c53a82a47192"} Jan 31 07:54:40 crc kubenswrapper[4826]: I0131 07:54:40.405323 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0591b228-aa43-46bc-ba04-0b5b6ddd4bbc","Type":"ContainerDied","Data":"c8894ddb73dfeb65241b46621dba5a125701cfb3ef2fef12b805a25a21ca93ef"} Jan 31 07:54:40 crc kubenswrapper[4826]: I0131 07:54:40.405339 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0591b228-aa43-46bc-ba04-0b5b6ddd4bbc","Type":"ContainerDied","Data":"b99c0412e30eee04b65ba5f993f1ad996bf373e19499c1e2a3ace1b97544888b"} Jan 31 07:54:40 crc kubenswrapper[4826]: I0131 07:54:40.408019 4826 generic.go:334] "Generic (PLEG): container finished" podID="ea3eca99-d298-4fa1-9c82-fb58164ff654" containerID="29fd3ff41e9180ce0500f007b0ad64d6fe444e9ac0f56be68be307d9ec893169" exitCode=0 Jan 31 07:54:40 crc kubenswrapper[4826]: I0131 07:54:40.408175 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ea3eca99-d298-4fa1-9c82-fb58164ff654","Type":"ContainerDied","Data":"29fd3ff41e9180ce0500f007b0ad64d6fe444e9ac0f56be68be307d9ec893169"} Jan 31 07:54:40 crc kubenswrapper[4826]: I0131 07:54:40.840054 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3f57b07-aade-4cd7-9ebf-b374396665b7" path="/var/lib/kubelet/pods/e3f57b07-aade-4cd7-9ebf-b374396665b7/volumes" Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.141610 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-bcjr8" Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.155801 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-c85b-account-create-update-7kdb6" Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.204709 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-dl84z" Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.258778 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-42b8-account-create-update-tm77r" Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.269222 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-5029-account-create-update-h2wlb" Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.336431 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzr7h\" (UniqueName: \"kubernetes.io/projected/f81eab18-a2c9-4435-aaad-98f5c7666fb2-kube-api-access-jzr7h\") pod \"f81eab18-a2c9-4435-aaad-98f5c7666fb2\" (UID: \"f81eab18-a2c9-4435-aaad-98f5c7666fb2\") " Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.336498 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4smq\" (UniqueName: \"kubernetes.io/projected/65b55564-5b4c-48e4-958c-33a815964af3-kube-api-access-c4smq\") pod \"65b55564-5b4c-48e4-958c-33a815964af3\" (UID: \"65b55564-5b4c-48e4-958c-33a815964af3\") " Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.336567 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mz746\" (UniqueName: \"kubernetes.io/projected/cd771d42-12ea-494a-805a-90133b43e0c3-kube-api-access-mz746\") pod \"cd771d42-12ea-494a-805a-90133b43e0c3\" (UID: \"cd771d42-12ea-494a-805a-90133b43e0c3\") " Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.336635 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f81eab18-a2c9-4435-aaad-98f5c7666fb2-operator-scripts\") pod \"f81eab18-a2c9-4435-aaad-98f5c7666fb2\" (UID: \"f81eab18-a2c9-4435-aaad-98f5c7666fb2\") " Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.336671 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd771d42-12ea-494a-805a-90133b43e0c3-operator-scripts\") pod \"cd771d42-12ea-494a-805a-90133b43e0c3\" (UID: \"cd771d42-12ea-494a-805a-90133b43e0c3\") " Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.336717 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65b55564-5b4c-48e4-958c-33a815964af3-operator-scripts\") pod \"65b55564-5b4c-48e4-958c-33a815964af3\" (UID: \"65b55564-5b4c-48e4-958c-33a815964af3\") " Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.337513 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd771d42-12ea-494a-805a-90133b43e0c3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cd771d42-12ea-494a-805a-90133b43e0c3" (UID: "cd771d42-12ea-494a-805a-90133b43e0c3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.337593 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f81eab18-a2c9-4435-aaad-98f5c7666fb2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f81eab18-a2c9-4435-aaad-98f5c7666fb2" (UID: "f81eab18-a2c9-4435-aaad-98f5c7666fb2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.337703 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65b55564-5b4c-48e4-958c-33a815964af3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "65b55564-5b4c-48e4-958c-33a815964af3" (UID: "65b55564-5b4c-48e4-958c-33a815964af3"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.345955 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65b55564-5b4c-48e4-958c-33a815964af3-kube-api-access-c4smq" (OuterVolumeSpecName: "kube-api-access-c4smq") pod "65b55564-5b4c-48e4-958c-33a815964af3" (UID: "65b55564-5b4c-48e4-958c-33a815964af3"). InnerVolumeSpecName "kube-api-access-c4smq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.346673 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd771d42-12ea-494a-805a-90133b43e0c3-kube-api-access-mz746" (OuterVolumeSpecName: "kube-api-access-mz746") pod "cd771d42-12ea-494a-805a-90133b43e0c3" (UID: "cd771d42-12ea-494a-805a-90133b43e0c3"). InnerVolumeSpecName "kube-api-access-mz746". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.346906 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f81eab18-a2c9-4435-aaad-98f5c7666fb2-kube-api-access-jzr7h" (OuterVolumeSpecName: "kube-api-access-jzr7h") pod "f81eab18-a2c9-4435-aaad-98f5c7666fb2" (UID: "f81eab18-a2c9-4435-aaad-98f5c7666fb2"). InnerVolumeSpecName "kube-api-access-jzr7h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.437032 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-42b8-account-create-update-tm77r" event={"ID":"24c647ed-ef58-46b5-a994-0cff67c161cc","Type":"ContainerDied","Data":"a16c26900308324fe77b68f60dcb3b7e9fae9bb9f3441e2a9a86e9bd7b29f53d"} Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.437080 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a16c26900308324fe77b68f60dcb3b7e9fae9bb9f3441e2a9a86e9bd7b29f53d" Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.437167 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-42b8-account-create-update-tm77r" Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.437899 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24c647ed-ef58-46b5-a994-0cff67c161cc-operator-scripts\") pod \"24c647ed-ef58-46b5-a994-0cff67c161cc\" (UID: \"24c647ed-ef58-46b5-a994-0cff67c161cc\") " Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.437931 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtjff\" (UniqueName: \"kubernetes.io/projected/2deb97a0-5661-4b3f-b0b1-84162e452fa2-kube-api-access-rtjff\") pod \"2deb97a0-5661-4b3f-b0b1-84162e452fa2\" (UID: \"2deb97a0-5661-4b3f-b0b1-84162e452fa2\") " Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.441441 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24c647ed-ef58-46b5-a994-0cff67c161cc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "24c647ed-ef58-46b5-a994-0cff67c161cc" (UID: "24c647ed-ef58-46b5-a994-0cff67c161cc"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.444785 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2deb97a0-5661-4b3f-b0b1-84162e452fa2-operator-scripts\") pod \"2deb97a0-5661-4b3f-b0b1-84162e452fa2\" (UID: \"2deb97a0-5661-4b3f-b0b1-84162e452fa2\") " Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.444933 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xv49z\" (UniqueName: \"kubernetes.io/projected/24c647ed-ef58-46b5-a994-0cff67c161cc-kube-api-access-xv49z\") pod \"24c647ed-ef58-46b5-a994-0cff67c161cc\" (UID: \"24c647ed-ef58-46b5-a994-0cff67c161cc\") " Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.445838 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2deb97a0-5661-4b3f-b0b1-84162e452fa2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2deb97a0-5661-4b3f-b0b1-84162e452fa2" (UID: "2deb97a0-5661-4b3f-b0b1-84162e452fa2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.445917 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzr7h\" (UniqueName: \"kubernetes.io/projected/f81eab18-a2c9-4435-aaad-98f5c7666fb2-kube-api-access-jzr7h\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.451834 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4smq\" (UniqueName: \"kubernetes.io/projected/65b55564-5b4c-48e4-958c-33a815964af3-kube-api-access-c4smq\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.451851 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mz746\" (UniqueName: \"kubernetes.io/projected/cd771d42-12ea-494a-805a-90133b43e0c3-kube-api-access-mz746\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.451863 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24c647ed-ef58-46b5-a994-0cff67c161cc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.451872 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f81eab18-a2c9-4435-aaad-98f5c7666fb2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.451882 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd771d42-12ea-494a-805a-90133b43e0c3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.451891 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65b55564-5b4c-48e4-958c-33a815964af3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.450389 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-5029-account-create-update-h2wlb" Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.447316 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2deb97a0-5661-4b3f-b0b1-84162e452fa2-kube-api-access-rtjff" (OuterVolumeSpecName: "kube-api-access-rtjff") pod "2deb97a0-5661-4b3f-b0b1-84162e452fa2" (UID: "2deb97a0-5661-4b3f-b0b1-84162e452fa2"). InnerVolumeSpecName "kube-api-access-rtjff". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.450324 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5029-account-create-update-h2wlb" event={"ID":"2deb97a0-5661-4b3f-b0b1-84162e452fa2","Type":"ContainerDied","Data":"9f8b2e0ef2692ebc24a33d274f051c5886162776863e9ed95f1a4a3bcb2337bf"} Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.452484 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f8b2e0ef2692ebc24a33d274f051c5886162776863e9ed95f1a4a3bcb2337bf" Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.466309 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24c647ed-ef58-46b5-a994-0cff67c161cc-kube-api-access-xv49z" (OuterVolumeSpecName: "kube-api-access-xv49z") pod "24c647ed-ef58-46b5-a994-0cff67c161cc" (UID: "24c647ed-ef58-46b5-a994-0cff67c161cc"). InnerVolumeSpecName "kube-api-access-xv49z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.480654 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-dl84z" Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.481072 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-dl84z" event={"ID":"f81eab18-a2c9-4435-aaad-98f5c7666fb2","Type":"ContainerDied","Data":"e41774d6f95b51689600cf85d4478921b27591f109a015cb850642d26660bf23"} Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.481140 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e41774d6f95b51689600cf85d4478921b27591f109a015cb850642d26660bf23" Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.483441 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c85b-account-create-update-7kdb6" event={"ID":"cd771d42-12ea-494a-805a-90133b43e0c3","Type":"ContainerDied","Data":"528f6980ccba4091e56386cb2f2ab3ea8239097ec0b42093787f9a5dd29c44c5"} Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.483480 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="528f6980ccba4091e56386cb2f2ab3ea8239097ec0b42093787f9a5dd29c44c5" Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.483544 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-c85b-account-create-update-7kdb6" Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.496604 4826 generic.go:334] "Generic (PLEG): container finished" podID="0591b228-aa43-46bc-ba04-0b5b6ddd4bbc" containerID="7dbba72c9c71413ec934c77101c62cebd531e324c4cfe5d4ff980c78b356abd1" exitCode=0 Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.496675 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0591b228-aa43-46bc-ba04-0b5b6ddd4bbc","Type":"ContainerDied","Data":"7dbba72c9c71413ec934c77101c62cebd531e324c4cfe5d4ff980c78b356abd1"} Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.497932 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-bcjr8" event={"ID":"65b55564-5b4c-48e4-958c-33a815964af3","Type":"ContainerDied","Data":"d39e722ecb21836ee11f3d21c178c1e97d5c361449c5703da2fc3cce46eab4c2"} Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.497953 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d39e722ecb21836ee11f3d21c178c1e97d5c361449c5703da2fc3cce46eab4c2" Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.499086 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-bcjr8" Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.553196 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtjff\" (UniqueName: \"kubernetes.io/projected/2deb97a0-5661-4b3f-b0b1-84162e452fa2-kube-api-access-rtjff\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.553229 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2deb97a0-5661-4b3f-b0b1-84162e452fa2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.553242 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xv49z\" (UniqueName: \"kubernetes.io/projected/24c647ed-ef58-46b5-a994-0cff67c161cc-kube-api-access-xv49z\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.562130 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.654320 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0591b228-aa43-46bc-ba04-0b5b6ddd4bbc-combined-ca-bundle\") pod \"0591b228-aa43-46bc-ba04-0b5b6ddd4bbc\" (UID: \"0591b228-aa43-46bc-ba04-0b5b6ddd4bbc\") " Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.654488 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0591b228-aa43-46bc-ba04-0b5b6ddd4bbc-run-httpd\") pod \"0591b228-aa43-46bc-ba04-0b5b6ddd4bbc\" (UID: \"0591b228-aa43-46bc-ba04-0b5b6ddd4bbc\") " Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.654565 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0591b228-aa43-46bc-ba04-0b5b6ddd4bbc-sg-core-conf-yaml\") pod \"0591b228-aa43-46bc-ba04-0b5b6ddd4bbc\" (UID: \"0591b228-aa43-46bc-ba04-0b5b6ddd4bbc\") " Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.654837 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0591b228-aa43-46bc-ba04-0b5b6ddd4bbc-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0591b228-aa43-46bc-ba04-0b5b6ddd4bbc" (UID: "0591b228-aa43-46bc-ba04-0b5b6ddd4bbc"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.654999 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0591b228-aa43-46bc-ba04-0b5b6ddd4bbc-log-httpd\") pod \"0591b228-aa43-46bc-ba04-0b5b6ddd4bbc\" (UID: \"0591b228-aa43-46bc-ba04-0b5b6ddd4bbc\") " Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.655052 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbz72\" (UniqueName: \"kubernetes.io/projected/0591b228-aa43-46bc-ba04-0b5b6ddd4bbc-kube-api-access-pbz72\") pod \"0591b228-aa43-46bc-ba04-0b5b6ddd4bbc\" (UID: \"0591b228-aa43-46bc-ba04-0b5b6ddd4bbc\") " Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.655092 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0591b228-aa43-46bc-ba04-0b5b6ddd4bbc-scripts\") pod \"0591b228-aa43-46bc-ba04-0b5b6ddd4bbc\" (UID: \"0591b228-aa43-46bc-ba04-0b5b6ddd4bbc\") " Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.655137 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0591b228-aa43-46bc-ba04-0b5b6ddd4bbc-config-data\") pod \"0591b228-aa43-46bc-ba04-0b5b6ddd4bbc\" (UID: \"0591b228-aa43-46bc-ba04-0b5b6ddd4bbc\") " Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.655465 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0591b228-aa43-46bc-ba04-0b5b6ddd4bbc-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0591b228-aa43-46bc-ba04-0b5b6ddd4bbc" (UID: "0591b228-aa43-46bc-ba04-0b5b6ddd4bbc"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.655918 4826 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0591b228-aa43-46bc-ba04-0b5b6ddd4bbc-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.655940 4826 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0591b228-aa43-46bc-ba04-0b5b6ddd4bbc-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.659365 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0591b228-aa43-46bc-ba04-0b5b6ddd4bbc-kube-api-access-pbz72" (OuterVolumeSpecName: "kube-api-access-pbz72") pod "0591b228-aa43-46bc-ba04-0b5b6ddd4bbc" (UID: "0591b228-aa43-46bc-ba04-0b5b6ddd4bbc"). InnerVolumeSpecName "kube-api-access-pbz72". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.671214 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0591b228-aa43-46bc-ba04-0b5b6ddd4bbc-scripts" (OuterVolumeSpecName: "scripts") pod "0591b228-aa43-46bc-ba04-0b5b6ddd4bbc" (UID: "0591b228-aa43-46bc-ba04-0b5b6ddd4bbc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.692340 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0591b228-aa43-46bc-ba04-0b5b6ddd4bbc-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0591b228-aa43-46bc-ba04-0b5b6ddd4bbc" (UID: "0591b228-aa43-46bc-ba04-0b5b6ddd4bbc"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.731022 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0591b228-aa43-46bc-ba04-0b5b6ddd4bbc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0591b228-aa43-46bc-ba04-0b5b6ddd4bbc" (UID: "0591b228-aa43-46bc-ba04-0b5b6ddd4bbc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.757688 4826 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0591b228-aa43-46bc-ba04-0b5b6ddd4bbc-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.757721 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbz72\" (UniqueName: \"kubernetes.io/projected/0591b228-aa43-46bc-ba04-0b5b6ddd4bbc-kube-api-access-pbz72\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.757735 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0591b228-aa43-46bc-ba04-0b5b6ddd4bbc-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.757747 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0591b228-aa43-46bc-ba04-0b5b6ddd4bbc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.782551 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0591b228-aa43-46bc-ba04-0b5b6ddd4bbc-config-data" (OuterVolumeSpecName: "config-data") pod "0591b228-aa43-46bc-ba04-0b5b6ddd4bbc" (UID: "0591b228-aa43-46bc-ba04-0b5b6ddd4bbc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.858843 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0591b228-aa43-46bc-ba04-0b5b6ddd4bbc-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:41 crc kubenswrapper[4826]: I0131 07:54:41.978401 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-78574fb98-ztjmr" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.009137 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-78574fb98-ztjmr" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.508792 4826 generic.go:334] "Generic (PLEG): container finished" podID="ea3eca99-d298-4fa1-9c82-fb58164ff654" containerID="a0915a22c10a1e1b2d448100a224323ec4c7f0fea512d37b0ac7c5c0716ca7d7" exitCode=0 Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.508881 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ea3eca99-d298-4fa1-9c82-fb58164ff654","Type":"ContainerDied","Data":"a0915a22c10a1e1b2d448100a224323ec4c7f0fea512d37b0ac7c5c0716ca7d7"} Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.512530 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0591b228-aa43-46bc-ba04-0b5b6ddd4bbc","Type":"ContainerDied","Data":"04af8a3095dffda69e9ba31ef7f4b6da7d1adbc026225ff57faa5732e03d89ef"} Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.512588 4826 scope.go:117] "RemoveContainer" containerID="755773ee0355f116f23699260225128b744e904378af4b303b43c53a82a47192" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.512753 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.538786 4826 scope.go:117] "RemoveContainer" containerID="c8894ddb73dfeb65241b46621dba5a125701cfb3ef2fef12b805a25a21ca93ef" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.558042 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.559786 4826 scope.go:117] "RemoveContainer" containerID="7dbba72c9c71413ec934c77101c62cebd531e324c4cfe5d4ff980c78b356abd1" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.572307 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.580165 4826 scope.go:117] "RemoveContainer" containerID="b99c0412e30eee04b65ba5f993f1ad996bf373e19499c1e2a3ace1b97544888b" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.589355 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:54:42 crc kubenswrapper[4826]: E0131 07:54:42.589707 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3f57b07-aade-4cd7-9ebf-b374396665b7" containerName="init" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.589724 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3f57b07-aade-4cd7-9ebf-b374396665b7" containerName="init" Jan 31 07:54:42 crc kubenswrapper[4826]: E0131 07:54:42.589735 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3626f21a-c324-4d3f-9aad-3248a07896da" containerName="mariadb-database-create" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.589741 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="3626f21a-c324-4d3f-9aad-3248a07896da" containerName="mariadb-database-create" Jan 31 07:54:42 crc kubenswrapper[4826]: E0131 07:54:42.589754 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd771d42-12ea-494a-805a-90133b43e0c3" containerName="mariadb-account-create-update" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.589760 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd771d42-12ea-494a-805a-90133b43e0c3" containerName="mariadb-account-create-update" Jan 31 07:54:42 crc kubenswrapper[4826]: E0131 07:54:42.589768 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f81eab18-a2c9-4435-aaad-98f5c7666fb2" containerName="mariadb-database-create" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.589777 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f81eab18-a2c9-4435-aaad-98f5c7666fb2" containerName="mariadb-database-create" Jan 31 07:54:42 crc kubenswrapper[4826]: E0131 07:54:42.589783 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4282d7e-76a0-493b-b2c6-0d954d4bed7a" containerName="horizon" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.589789 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4282d7e-76a0-493b-b2c6-0d954d4bed7a" containerName="horizon" Jan 31 07:54:42 crc kubenswrapper[4826]: E0131 07:54:42.589801 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2deb97a0-5661-4b3f-b0b1-84162e452fa2" containerName="mariadb-account-create-update" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.589807 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="2deb97a0-5661-4b3f-b0b1-84162e452fa2" containerName="mariadb-account-create-update" Jan 31 07:54:42 crc kubenswrapper[4826]: E0131 07:54:42.589816 4826 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="24c647ed-ef58-46b5-a994-0cff67c161cc" containerName="mariadb-account-create-update" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.589821 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="24c647ed-ef58-46b5-a994-0cff67c161cc" containerName="mariadb-account-create-update" Jan 31 07:54:42 crc kubenswrapper[4826]: E0131 07:54:42.589831 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0591b228-aa43-46bc-ba04-0b5b6ddd4bbc" containerName="sg-core" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.589837 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="0591b228-aa43-46bc-ba04-0b5b6ddd4bbc" containerName="sg-core" Jan 31 07:54:42 crc kubenswrapper[4826]: E0131 07:54:42.589851 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65b55564-5b4c-48e4-958c-33a815964af3" containerName="mariadb-database-create" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.589857 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="65b55564-5b4c-48e4-958c-33a815964af3" containerName="mariadb-database-create" Jan 31 07:54:42 crc kubenswrapper[4826]: E0131 07:54:42.589867 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0591b228-aa43-46bc-ba04-0b5b6ddd4bbc" containerName="proxy-httpd" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.589873 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="0591b228-aa43-46bc-ba04-0b5b6ddd4bbc" containerName="proxy-httpd" Jan 31 07:54:42 crc kubenswrapper[4826]: E0131 07:54:42.589881 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0591b228-aa43-46bc-ba04-0b5b6ddd4bbc" containerName="ceilometer-central-agent" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.589887 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="0591b228-aa43-46bc-ba04-0b5b6ddd4bbc" containerName="ceilometer-central-agent" Jan 31 07:54:42 crc kubenswrapper[4826]: E0131 07:54:42.589894 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4282d7e-76a0-493b-b2c6-0d954d4bed7a" containerName="horizon-log" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.589901 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4282d7e-76a0-493b-b2c6-0d954d4bed7a" containerName="horizon-log" Jan 31 07:54:42 crc kubenswrapper[4826]: E0131 07:54:42.589912 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0591b228-aa43-46bc-ba04-0b5b6ddd4bbc" containerName="ceilometer-notification-agent" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.589918 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="0591b228-aa43-46bc-ba04-0b5b6ddd4bbc" containerName="ceilometer-notification-agent" Jan 31 07:54:42 crc kubenswrapper[4826]: E0131 07:54:42.589929 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3f57b07-aade-4cd7-9ebf-b374396665b7" containerName="dnsmasq-dns" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.589934 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3f57b07-aade-4cd7-9ebf-b374396665b7" containerName="dnsmasq-dns" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.590190 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="0591b228-aa43-46bc-ba04-0b5b6ddd4bbc" containerName="sg-core" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.590216 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="24c647ed-ef58-46b5-a994-0cff67c161cc" containerName="mariadb-account-create-update" Jan 
31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.590223 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd771d42-12ea-494a-805a-90133b43e0c3" containerName="mariadb-account-create-update" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.590234 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="0591b228-aa43-46bc-ba04-0b5b6ddd4bbc" containerName="ceilometer-notification-agent" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.590241 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="0591b228-aa43-46bc-ba04-0b5b6ddd4bbc" containerName="ceilometer-central-agent" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.590246 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4282d7e-76a0-493b-b2c6-0d954d4bed7a" containerName="horizon" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.590256 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="f81eab18-a2c9-4435-aaad-98f5c7666fb2" containerName="mariadb-database-create" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.590265 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="2deb97a0-5661-4b3f-b0b1-84162e452fa2" containerName="mariadb-account-create-update" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.590273 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="65b55564-5b4c-48e4-958c-33a815964af3" containerName="mariadb-database-create" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.590281 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="3626f21a-c324-4d3f-9aad-3248a07896da" containerName="mariadb-database-create" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.590288 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4282d7e-76a0-493b-b2c6-0d954d4bed7a" containerName="horizon-log" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.590295 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3f57b07-aade-4cd7-9ebf-b374396665b7" containerName="dnsmasq-dns" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.590303 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="0591b228-aa43-46bc-ba04-0b5b6ddd4bbc" containerName="proxy-httpd" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.591713 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.595164 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.595347 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.606803 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.672187 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40fd80c9-f288-46f2-884a-4719ec88efa7-log-httpd\") pod \"ceilometer-0\" (UID: \"40fd80c9-f288-46f2-884a-4719ec88efa7\") " pod="openstack/ceilometer-0" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.672319 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40fd80c9-f288-46f2-884a-4719ec88efa7-config-data\") pod \"ceilometer-0\" (UID: \"40fd80c9-f288-46f2-884a-4719ec88efa7\") " pod="openstack/ceilometer-0" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.672351 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40fd80c9-f288-46f2-884a-4719ec88efa7-scripts\") pod \"ceilometer-0\" (UID: \"40fd80c9-f288-46f2-884a-4719ec88efa7\") " pod="openstack/ceilometer-0" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.672382 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40fd80c9-f288-46f2-884a-4719ec88efa7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"40fd80c9-f288-46f2-884a-4719ec88efa7\") " pod="openstack/ceilometer-0" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.672428 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/40fd80c9-f288-46f2-884a-4719ec88efa7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"40fd80c9-f288-46f2-884a-4719ec88efa7\") " pod="openstack/ceilometer-0" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.672455 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40fd80c9-f288-46f2-884a-4719ec88efa7-run-httpd\") pod \"ceilometer-0\" (UID: \"40fd80c9-f288-46f2-884a-4719ec88efa7\") " pod="openstack/ceilometer-0" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.672491 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr7g2\" (UniqueName: \"kubernetes.io/projected/40fd80c9-f288-46f2-884a-4719ec88efa7-kube-api-access-mr7g2\") pod \"ceilometer-0\" (UID: \"40fd80c9-f288-46f2-884a-4719ec88efa7\") " pod="openstack/ceilometer-0" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.774044 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40fd80c9-f288-46f2-884a-4719ec88efa7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"40fd80c9-f288-46f2-884a-4719ec88efa7\") " pod="openstack/ceilometer-0" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 
07:54:42.774117 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/40fd80c9-f288-46f2-884a-4719ec88efa7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"40fd80c9-f288-46f2-884a-4719ec88efa7\") " pod="openstack/ceilometer-0" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.774151 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40fd80c9-f288-46f2-884a-4719ec88efa7-run-httpd\") pod \"ceilometer-0\" (UID: \"40fd80c9-f288-46f2-884a-4719ec88efa7\") " pod="openstack/ceilometer-0" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.774182 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr7g2\" (UniqueName: \"kubernetes.io/projected/40fd80c9-f288-46f2-884a-4719ec88efa7-kube-api-access-mr7g2\") pod \"ceilometer-0\" (UID: \"40fd80c9-f288-46f2-884a-4719ec88efa7\") " pod="openstack/ceilometer-0" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.774247 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40fd80c9-f288-46f2-884a-4719ec88efa7-log-httpd\") pod \"ceilometer-0\" (UID: \"40fd80c9-f288-46f2-884a-4719ec88efa7\") " pod="openstack/ceilometer-0" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.774330 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40fd80c9-f288-46f2-884a-4719ec88efa7-config-data\") pod \"ceilometer-0\" (UID: \"40fd80c9-f288-46f2-884a-4719ec88efa7\") " pod="openstack/ceilometer-0" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.774351 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40fd80c9-f288-46f2-884a-4719ec88efa7-scripts\") pod \"ceilometer-0\" (UID: \"40fd80c9-f288-46f2-884a-4719ec88efa7\") " pod="openstack/ceilometer-0" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.774939 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40fd80c9-f288-46f2-884a-4719ec88efa7-log-httpd\") pod \"ceilometer-0\" (UID: \"40fd80c9-f288-46f2-884a-4719ec88efa7\") " pod="openstack/ceilometer-0" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.775212 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40fd80c9-f288-46f2-884a-4719ec88efa7-run-httpd\") pod \"ceilometer-0\" (UID: \"40fd80c9-f288-46f2-884a-4719ec88efa7\") " pod="openstack/ceilometer-0" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.780227 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/40fd80c9-f288-46f2-884a-4719ec88efa7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"40fd80c9-f288-46f2-884a-4719ec88efa7\") " pod="openstack/ceilometer-0" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.780513 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40fd80c9-f288-46f2-884a-4719ec88efa7-config-data\") pod \"ceilometer-0\" (UID: \"40fd80c9-f288-46f2-884a-4719ec88efa7\") " pod="openstack/ceilometer-0" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.781054 4826 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40fd80c9-f288-46f2-884a-4719ec88efa7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"40fd80c9-f288-46f2-884a-4719ec88efa7\") " pod="openstack/ceilometer-0" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.794691 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40fd80c9-f288-46f2-884a-4719ec88efa7-scripts\") pod \"ceilometer-0\" (UID: \"40fd80c9-f288-46f2-884a-4719ec88efa7\") " pod="openstack/ceilometer-0" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.795203 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr7g2\" (UniqueName: \"kubernetes.io/projected/40fd80c9-f288-46f2-884a-4719ec88efa7-kube-api-access-mr7g2\") pod \"ceilometer-0\" (UID: \"40fd80c9-f288-46f2-884a-4719ec88efa7\") " pod="openstack/ceilometer-0" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.826066 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0591b228-aa43-46bc-ba04-0b5b6ddd4bbc" path="/var/lib/kubelet/pods/0591b228-aa43-46bc-ba04-0b5b6ddd4bbc/volumes" Jan 31 07:54:42 crc kubenswrapper[4826]: I0131 07:54:42.963466 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 07:54:43 crc kubenswrapper[4826]: I0131 07:54:43.447705 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:54:43 crc kubenswrapper[4826]: W0131 07:54:43.483742 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40fd80c9_f288_46f2_884a_4719ec88efa7.slice/crio-71fd30c5bf38bfb442a98e612303c965fd763ed26a73ad4fe91005c6b356206b WatchSource:0}: Error finding container 71fd30c5bf38bfb442a98e612303c965fd763ed26a73ad4fe91005c6b356206b: Status 404 returned error can't find the container with id 71fd30c5bf38bfb442a98e612303c965fd763ed26a73ad4fe91005c6b356206b Jan 31 07:54:43 crc kubenswrapper[4826]: I0131 07:54:43.543711 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40fd80c9-f288-46f2-884a-4719ec88efa7","Type":"ContainerStarted","Data":"71fd30c5bf38bfb442a98e612303c965fd763ed26a73ad4fe91005c6b356206b"} Jan 31 07:54:43 crc kubenswrapper[4826]: I0131 07:54:43.559458 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ea3eca99-d298-4fa1-9c82-fb58164ff654","Type":"ContainerDied","Data":"559a21d499bf23b1b15042321dcbd95e26f8846fcaa453c6515a5f1876b3eadd"} Jan 31 07:54:43 crc kubenswrapper[4826]: I0131 07:54:43.559502 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="559a21d499bf23b1b15042321dcbd95e26f8846fcaa453c6515a5f1876b3eadd" Jan 31 07:54:43 crc kubenswrapper[4826]: I0131 07:54:43.593044 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 31 07:54:43 crc kubenswrapper[4826]: I0131 07:54:43.690363 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfx4m\" (UniqueName: \"kubernetes.io/projected/ea3eca99-d298-4fa1-9c82-fb58164ff654-kube-api-access-zfx4m\") pod \"ea3eca99-d298-4fa1-9c82-fb58164ff654\" (UID: \"ea3eca99-d298-4fa1-9c82-fb58164ff654\") " Jan 31 07:54:43 crc kubenswrapper[4826]: I0131 07:54:43.690487 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ea3eca99-d298-4fa1-9c82-fb58164ff654-etc-machine-id\") pod \"ea3eca99-d298-4fa1-9c82-fb58164ff654\" (UID: \"ea3eca99-d298-4fa1-9c82-fb58164ff654\") " Jan 31 07:54:43 crc kubenswrapper[4826]: I0131 07:54:43.690522 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ea3eca99-d298-4fa1-9c82-fb58164ff654-config-data-custom\") pod \"ea3eca99-d298-4fa1-9c82-fb58164ff654\" (UID: \"ea3eca99-d298-4fa1-9c82-fb58164ff654\") " Jan 31 07:54:43 crc kubenswrapper[4826]: I0131 07:54:43.690546 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea3eca99-d298-4fa1-9c82-fb58164ff654-scripts\") pod \"ea3eca99-d298-4fa1-9c82-fb58164ff654\" (UID: \"ea3eca99-d298-4fa1-9c82-fb58164ff654\") " Jan 31 07:54:43 crc kubenswrapper[4826]: I0131 07:54:43.690606 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea3eca99-d298-4fa1-9c82-fb58164ff654-config-data\") pod \"ea3eca99-d298-4fa1-9c82-fb58164ff654\" (UID: \"ea3eca99-d298-4fa1-9c82-fb58164ff654\") " Jan 31 07:54:43 crc kubenswrapper[4826]: I0131 07:54:43.690616 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea3eca99-d298-4fa1-9c82-fb58164ff654-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ea3eca99-d298-4fa1-9c82-fb58164ff654" (UID: "ea3eca99-d298-4fa1-9c82-fb58164ff654"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 07:54:43 crc kubenswrapper[4826]: I0131 07:54:43.690734 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea3eca99-d298-4fa1-9c82-fb58164ff654-combined-ca-bundle\") pod \"ea3eca99-d298-4fa1-9c82-fb58164ff654\" (UID: \"ea3eca99-d298-4fa1-9c82-fb58164ff654\") " Jan 31 07:54:43 crc kubenswrapper[4826]: I0131 07:54:43.691166 4826 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ea3eca99-d298-4fa1-9c82-fb58164ff654-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:43 crc kubenswrapper[4826]: I0131 07:54:43.700201 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea3eca99-d298-4fa1-9c82-fb58164ff654-scripts" (OuterVolumeSpecName: "scripts") pod "ea3eca99-d298-4fa1-9c82-fb58164ff654" (UID: "ea3eca99-d298-4fa1-9c82-fb58164ff654"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:54:43 crc kubenswrapper[4826]: I0131 07:54:43.700472 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea3eca99-d298-4fa1-9c82-fb58164ff654-kube-api-access-zfx4m" (OuterVolumeSpecName: "kube-api-access-zfx4m") pod "ea3eca99-d298-4fa1-9c82-fb58164ff654" (UID: "ea3eca99-d298-4fa1-9c82-fb58164ff654"). InnerVolumeSpecName "kube-api-access-zfx4m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:54:43 crc kubenswrapper[4826]: I0131 07:54:43.700522 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea3eca99-d298-4fa1-9c82-fb58164ff654-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ea3eca99-d298-4fa1-9c82-fb58164ff654" (UID: "ea3eca99-d298-4fa1-9c82-fb58164ff654"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:54:43 crc kubenswrapper[4826]: I0131 07:54:43.778071 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea3eca99-d298-4fa1-9c82-fb58164ff654-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ea3eca99-d298-4fa1-9c82-fb58164ff654" (UID: "ea3eca99-d298-4fa1-9c82-fb58164ff654"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:54:43 crc kubenswrapper[4826]: I0131 07:54:43.794073 4826 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ea3eca99-d298-4fa1-9c82-fb58164ff654-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:43 crc kubenswrapper[4826]: I0131 07:54:43.794313 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea3eca99-d298-4fa1-9c82-fb58164ff654-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:43 crc kubenswrapper[4826]: I0131 07:54:43.794382 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea3eca99-d298-4fa1-9c82-fb58164ff654-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:43 crc kubenswrapper[4826]: I0131 07:54:43.794447 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfx4m\" (UniqueName: \"kubernetes.io/projected/ea3eca99-d298-4fa1-9c82-fb58164ff654-kube-api-access-zfx4m\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:43 crc kubenswrapper[4826]: I0131 07:54:43.843322 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea3eca99-d298-4fa1-9c82-fb58164ff654-config-data" (OuterVolumeSpecName: "config-data") pod "ea3eca99-d298-4fa1-9c82-fb58164ff654" (UID: "ea3eca99-d298-4fa1-9c82-fb58164ff654"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:54:43 crc kubenswrapper[4826]: I0131 07:54:43.895707 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea3eca99-d298-4fa1-9c82-fb58164ff654-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:44 crc kubenswrapper[4826]: I0131 07:54:44.572407 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40fd80c9-f288-46f2-884a-4719ec88efa7","Type":"ContainerStarted","Data":"d8e341c185087c1d2d01d2a2fa22bfbecfac6b308e3c481232ab117a09240a3e"} Jan 31 07:54:44 crc kubenswrapper[4826]: I0131 07:54:44.572440 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 31 07:54:44 crc kubenswrapper[4826]: I0131 07:54:44.609772 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 07:54:44 crc kubenswrapper[4826]: I0131 07:54:44.617497 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 07:54:44 crc kubenswrapper[4826]: I0131 07:54:44.637703 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 07:54:44 crc kubenswrapper[4826]: E0131 07:54:44.638083 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea3eca99-d298-4fa1-9c82-fb58164ff654" containerName="cinder-scheduler" Jan 31 07:54:44 crc kubenswrapper[4826]: I0131 07:54:44.638101 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea3eca99-d298-4fa1-9c82-fb58164ff654" containerName="cinder-scheduler" Jan 31 07:54:44 crc kubenswrapper[4826]: E0131 07:54:44.638116 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea3eca99-d298-4fa1-9c82-fb58164ff654" containerName="probe" Jan 31 07:54:44 crc kubenswrapper[4826]: I0131 07:54:44.638124 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea3eca99-d298-4fa1-9c82-fb58164ff654" containerName="probe" Jan 31 07:54:44 crc kubenswrapper[4826]: I0131 07:54:44.638270 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea3eca99-d298-4fa1-9c82-fb58164ff654" containerName="cinder-scheduler" Jan 31 07:54:44 crc kubenswrapper[4826]: I0131 07:54:44.638292 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea3eca99-d298-4fa1-9c82-fb58164ff654" containerName="probe" Jan 31 07:54:44 crc kubenswrapper[4826]: I0131 07:54:44.639149 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 31 07:54:44 crc kubenswrapper[4826]: I0131 07:54:44.641604 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 31 07:54:44 crc kubenswrapper[4826]: I0131 07:54:44.670161 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 07:54:44 crc kubenswrapper[4826]: I0131 07:54:44.809297 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e740eda2-f125-48ae-8083-8023f7e20b41-scripts\") pod \"cinder-scheduler-0\" (UID: \"e740eda2-f125-48ae-8083-8023f7e20b41\") " pod="openstack/cinder-scheduler-0" Jan 31 07:54:44 crc kubenswrapper[4826]: I0131 07:54:44.809390 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e740eda2-f125-48ae-8083-8023f7e20b41-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e740eda2-f125-48ae-8083-8023f7e20b41\") " pod="openstack/cinder-scheduler-0" Jan 31 07:54:44 crc kubenswrapper[4826]: I0131 07:54:44.809458 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e740eda2-f125-48ae-8083-8023f7e20b41-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e740eda2-f125-48ae-8083-8023f7e20b41\") " pod="openstack/cinder-scheduler-0" Jan 31 07:54:44 crc kubenswrapper[4826]: I0131 07:54:44.809525 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e740eda2-f125-48ae-8083-8023f7e20b41-config-data\") pod \"cinder-scheduler-0\" (UID: \"e740eda2-f125-48ae-8083-8023f7e20b41\") " pod="openstack/cinder-scheduler-0" Jan 31 07:54:44 crc kubenswrapper[4826]: I0131 07:54:44.809565 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r797l\" (UniqueName: \"kubernetes.io/projected/e740eda2-f125-48ae-8083-8023f7e20b41-kube-api-access-r797l\") pod \"cinder-scheduler-0\" (UID: \"e740eda2-f125-48ae-8083-8023f7e20b41\") " pod="openstack/cinder-scheduler-0" Jan 31 07:54:44 crc kubenswrapper[4826]: I0131 07:54:44.809596 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e740eda2-f125-48ae-8083-8023f7e20b41-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e740eda2-f125-48ae-8083-8023f7e20b41\") " pod="openstack/cinder-scheduler-0" Jan 31 07:54:44 crc kubenswrapper[4826]: I0131 07:54:44.829229 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea3eca99-d298-4fa1-9c82-fb58164ff654" path="/var/lib/kubelet/pods/ea3eca99-d298-4fa1-9c82-fb58164ff654/volumes" Jan 31 07:54:44 crc kubenswrapper[4826]: I0131 07:54:44.910989 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e740eda2-f125-48ae-8083-8023f7e20b41-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e740eda2-f125-48ae-8083-8023f7e20b41\") " pod="openstack/cinder-scheduler-0" Jan 31 07:54:44 crc kubenswrapper[4826]: I0131 07:54:44.911122 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/e740eda2-f125-48ae-8083-8023f7e20b41-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e740eda2-f125-48ae-8083-8023f7e20b41\") " pod="openstack/cinder-scheduler-0" Jan 31 07:54:44 crc kubenswrapper[4826]: I0131 07:54:44.911810 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e740eda2-f125-48ae-8083-8023f7e20b41-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e740eda2-f125-48ae-8083-8023f7e20b41\") " pod="openstack/cinder-scheduler-0" Jan 31 07:54:44 crc kubenswrapper[4826]: I0131 07:54:44.911998 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e740eda2-f125-48ae-8083-8023f7e20b41-config-data\") pod \"cinder-scheduler-0\" (UID: \"e740eda2-f125-48ae-8083-8023f7e20b41\") " pod="openstack/cinder-scheduler-0" Jan 31 07:54:44 crc kubenswrapper[4826]: I0131 07:54:44.912132 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r797l\" (UniqueName: \"kubernetes.io/projected/e740eda2-f125-48ae-8083-8023f7e20b41-kube-api-access-r797l\") pod \"cinder-scheduler-0\" (UID: \"e740eda2-f125-48ae-8083-8023f7e20b41\") " pod="openstack/cinder-scheduler-0" Jan 31 07:54:44 crc kubenswrapper[4826]: I0131 07:54:44.912255 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e740eda2-f125-48ae-8083-8023f7e20b41-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e740eda2-f125-48ae-8083-8023f7e20b41\") " pod="openstack/cinder-scheduler-0" Jan 31 07:54:44 crc kubenswrapper[4826]: I0131 07:54:44.912422 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e740eda2-f125-48ae-8083-8023f7e20b41-scripts\") pod \"cinder-scheduler-0\" (UID: \"e740eda2-f125-48ae-8083-8023f7e20b41\") " pod="openstack/cinder-scheduler-0" Jan 31 07:54:44 crc kubenswrapper[4826]: I0131 07:54:44.916244 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e740eda2-f125-48ae-8083-8023f7e20b41-scripts\") pod \"cinder-scheduler-0\" (UID: \"e740eda2-f125-48ae-8083-8023f7e20b41\") " pod="openstack/cinder-scheduler-0" Jan 31 07:54:44 crc kubenswrapper[4826]: I0131 07:54:44.917786 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e740eda2-f125-48ae-8083-8023f7e20b41-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e740eda2-f125-48ae-8083-8023f7e20b41\") " pod="openstack/cinder-scheduler-0" Jan 31 07:54:44 crc kubenswrapper[4826]: I0131 07:54:44.917947 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e740eda2-f125-48ae-8083-8023f7e20b41-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e740eda2-f125-48ae-8083-8023f7e20b41\") " pod="openstack/cinder-scheduler-0" Jan 31 07:54:44 crc kubenswrapper[4826]: I0131 07:54:44.918452 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e740eda2-f125-48ae-8083-8023f7e20b41-config-data\") pod \"cinder-scheduler-0\" (UID: \"e740eda2-f125-48ae-8083-8023f7e20b41\") " pod="openstack/cinder-scheduler-0" Jan 31 07:54:44 crc kubenswrapper[4826]: I0131 07:54:44.932170 4826 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r797l\" (UniqueName: \"kubernetes.io/projected/e740eda2-f125-48ae-8083-8023f7e20b41-kube-api-access-r797l\") pod \"cinder-scheduler-0\" (UID: \"e740eda2-f125-48ae-8083-8023f7e20b41\") " pod="openstack/cinder-scheduler-0" Jan 31 07:54:45 crc kubenswrapper[4826]: I0131 07:54:45.033668 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 31 07:54:45 crc kubenswrapper[4826]: I0131 07:54:45.443275 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-l4j5z"] Jan 31 07:54:45 crc kubenswrapper[4826]: I0131 07:54:45.444627 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-l4j5z" Jan 31 07:54:45 crc kubenswrapper[4826]: I0131 07:54:45.447611 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-lr5gx" Jan 31 07:54:45 crc kubenswrapper[4826]: I0131 07:54:45.447907 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 31 07:54:45 crc kubenswrapper[4826]: I0131 07:54:45.448096 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 31 07:54:45 crc kubenswrapper[4826]: I0131 07:54:45.456423 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-l4j5z"] Jan 31 07:54:45 crc kubenswrapper[4826]: I0131 07:54:45.577204 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 07:54:45 crc kubenswrapper[4826]: W0131 07:54:45.579631 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode740eda2_f125_48ae_8083_8023f7e20b41.slice/crio-47eda2849c283deae2e36c871129396a0438a044b9a3cc7b9c04014a5c32783a WatchSource:0}: Error finding container 47eda2849c283deae2e36c871129396a0438a044b9a3cc7b9c04014a5c32783a: Status 404 returned error can't find the container with id 47eda2849c283deae2e36c871129396a0438a044b9a3cc7b9c04014a5c32783a Jan 31 07:54:45 crc kubenswrapper[4826]: I0131 07:54:45.586058 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40fd80c9-f288-46f2-884a-4719ec88efa7","Type":"ContainerStarted","Data":"81d574875bba54b142761d592aa2974e8bb221725d3a9fdf669bf9c7be8bf459"} Jan 31 07:54:45 crc kubenswrapper[4826]: I0131 07:54:45.586097 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40fd80c9-f288-46f2-884a-4719ec88efa7","Type":"ContainerStarted","Data":"40aea415b84dcf2d253d0dc57f928e3a134c5a6d1b69e1859479bade972d7841"} Jan 31 07:54:45 crc kubenswrapper[4826]: I0131 07:54:45.626737 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8baa3438-0c11-4a8c-b397-85247a6252c1-config-data\") pod \"nova-cell0-conductor-db-sync-l4j5z\" (UID: \"8baa3438-0c11-4a8c-b397-85247a6252c1\") " pod="openstack/nova-cell0-conductor-db-sync-l4j5z" Jan 31 07:54:45 crc kubenswrapper[4826]: I0131 07:54:45.627153 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8baa3438-0c11-4a8c-b397-85247a6252c1-scripts\") pod \"nova-cell0-conductor-db-sync-l4j5z\" (UID: 
\"8baa3438-0c11-4a8c-b397-85247a6252c1\") " pod="openstack/nova-cell0-conductor-db-sync-l4j5z" Jan 31 07:54:45 crc kubenswrapper[4826]: I0131 07:54:45.627282 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss6bp\" (UniqueName: \"kubernetes.io/projected/8baa3438-0c11-4a8c-b397-85247a6252c1-kube-api-access-ss6bp\") pod \"nova-cell0-conductor-db-sync-l4j5z\" (UID: \"8baa3438-0c11-4a8c-b397-85247a6252c1\") " pod="openstack/nova-cell0-conductor-db-sync-l4j5z" Jan 31 07:54:45 crc kubenswrapper[4826]: I0131 07:54:45.627320 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8baa3438-0c11-4a8c-b397-85247a6252c1-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-l4j5z\" (UID: \"8baa3438-0c11-4a8c-b397-85247a6252c1\") " pod="openstack/nova-cell0-conductor-db-sync-l4j5z" Jan 31 07:54:45 crc kubenswrapper[4826]: I0131 07:54:45.728790 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ss6bp\" (UniqueName: \"kubernetes.io/projected/8baa3438-0c11-4a8c-b397-85247a6252c1-kube-api-access-ss6bp\") pod \"nova-cell0-conductor-db-sync-l4j5z\" (UID: \"8baa3438-0c11-4a8c-b397-85247a6252c1\") " pod="openstack/nova-cell0-conductor-db-sync-l4j5z" Jan 31 07:54:45 crc kubenswrapper[4826]: I0131 07:54:45.728832 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8baa3438-0c11-4a8c-b397-85247a6252c1-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-l4j5z\" (UID: \"8baa3438-0c11-4a8c-b397-85247a6252c1\") " pod="openstack/nova-cell0-conductor-db-sync-l4j5z" Jan 31 07:54:45 crc kubenswrapper[4826]: I0131 07:54:45.728885 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8baa3438-0c11-4a8c-b397-85247a6252c1-config-data\") pod \"nova-cell0-conductor-db-sync-l4j5z\" (UID: \"8baa3438-0c11-4a8c-b397-85247a6252c1\") " pod="openstack/nova-cell0-conductor-db-sync-l4j5z" Jan 31 07:54:45 crc kubenswrapper[4826]: I0131 07:54:45.728906 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8baa3438-0c11-4a8c-b397-85247a6252c1-scripts\") pod \"nova-cell0-conductor-db-sync-l4j5z\" (UID: \"8baa3438-0c11-4a8c-b397-85247a6252c1\") " pod="openstack/nova-cell0-conductor-db-sync-l4j5z" Jan 31 07:54:45 crc kubenswrapper[4826]: I0131 07:54:45.734356 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8baa3438-0c11-4a8c-b397-85247a6252c1-scripts\") pod \"nova-cell0-conductor-db-sync-l4j5z\" (UID: \"8baa3438-0c11-4a8c-b397-85247a6252c1\") " pod="openstack/nova-cell0-conductor-db-sync-l4j5z" Jan 31 07:54:45 crc kubenswrapper[4826]: I0131 07:54:45.735541 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8baa3438-0c11-4a8c-b397-85247a6252c1-config-data\") pod \"nova-cell0-conductor-db-sync-l4j5z\" (UID: \"8baa3438-0c11-4a8c-b397-85247a6252c1\") " pod="openstack/nova-cell0-conductor-db-sync-l4j5z" Jan 31 07:54:45 crc kubenswrapper[4826]: I0131 07:54:45.742840 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8baa3438-0c11-4a8c-b397-85247a6252c1-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-l4j5z\" (UID: \"8baa3438-0c11-4a8c-b397-85247a6252c1\") " pod="openstack/nova-cell0-conductor-db-sync-l4j5z" Jan 31 07:54:45 crc kubenswrapper[4826]: I0131 07:54:45.752111 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ss6bp\" (UniqueName: \"kubernetes.io/projected/8baa3438-0c11-4a8c-b397-85247a6252c1-kube-api-access-ss6bp\") pod \"nova-cell0-conductor-db-sync-l4j5z\" (UID: \"8baa3438-0c11-4a8c-b397-85247a6252c1\") " pod="openstack/nova-cell0-conductor-db-sync-l4j5z" Jan 31 07:54:45 crc kubenswrapper[4826]: I0131 07:54:45.770246 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-l4j5z" Jan 31 07:54:46 crc kubenswrapper[4826]: I0131 07:54:46.071799 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 31 07:54:46 crc kubenswrapper[4826]: I0131 07:54:46.286903 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-l4j5z"] Jan 31 07:54:46 crc kubenswrapper[4826]: I0131 07:54:46.609389 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e740eda2-f125-48ae-8083-8023f7e20b41","Type":"ContainerStarted","Data":"796a339967a8afbe914e070aa7947e251ec51e0cead76c20e9d5ee2b36317ca8"} Jan 31 07:54:46 crc kubenswrapper[4826]: I0131 07:54:46.609446 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e740eda2-f125-48ae-8083-8023f7e20b41","Type":"ContainerStarted","Data":"47eda2849c283deae2e36c871129396a0438a044b9a3cc7b9c04014a5c32783a"} Jan 31 07:54:46 crc kubenswrapper[4826]: I0131 07:54:46.618288 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-l4j5z" event={"ID":"8baa3438-0c11-4a8c-b397-85247a6252c1","Type":"ContainerStarted","Data":"19476f6cfd7d2e29c3fca05a9d87137e6630ae8c126f7bd45814672b89d80413"} Jan 31 07:54:47 crc kubenswrapper[4826]: I0131 07:54:47.627572 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e740eda2-f125-48ae-8083-8023f7e20b41","Type":"ContainerStarted","Data":"cfdb2fa8180b310652d9d9a38c84d96cdc8569b27aafd37cec3301917bd575d3"} Jan 31 07:54:47 crc kubenswrapper[4826]: I0131 07:54:47.649002 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.648985846 podStartE2EDuration="3.648985846s" podCreationTimestamp="2026-01-31 07:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:54:47.641676079 +0000 UTC m=+1119.495562438" watchObservedRunningTime="2026-01-31 07:54:47.648985846 +0000 UTC m=+1119.502872205" Jan 31 07:54:48 crc kubenswrapper[4826]: I0131 07:54:48.653946 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40fd80c9-f288-46f2-884a-4719ec88efa7","Type":"ContainerStarted","Data":"1a1aea7b0ac3006996a1f57d069577763c57e734abcc348619d0ca352109afbc"} Jan 31 07:54:48 crc kubenswrapper[4826]: I0131 07:54:48.675834 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.467549321 podStartE2EDuration="6.675815043s" podCreationTimestamp="2026-01-31 07:54:42 +0000 UTC" 
firstStartedPulling="2026-01-31 07:54:43.487359035 +0000 UTC m=+1115.341245404" lastFinishedPulling="2026-01-31 07:54:47.695624767 +0000 UTC m=+1119.549511126" observedRunningTime="2026-01-31 07:54:48.67573226 +0000 UTC m=+1120.529618629" watchObservedRunningTime="2026-01-31 07:54:48.675815043 +0000 UTC m=+1120.529701402" Jan 31 07:54:49 crc kubenswrapper[4826]: I0131 07:54:49.663485 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 07:54:50 crc kubenswrapper[4826]: I0131 07:54:50.034080 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 31 07:54:54 crc kubenswrapper[4826]: I0131 07:54:54.458522 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:54:54 crc kubenswrapper[4826]: I0131 07:54:54.459354 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="40fd80c9-f288-46f2-884a-4719ec88efa7" containerName="ceilometer-central-agent" containerID="cri-o://d8e341c185087c1d2d01d2a2fa22bfbecfac6b308e3c481232ab117a09240a3e" gracePeriod=30 Jan 31 07:54:54 crc kubenswrapper[4826]: I0131 07:54:54.459508 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="40fd80c9-f288-46f2-884a-4719ec88efa7" containerName="proxy-httpd" containerID="cri-o://1a1aea7b0ac3006996a1f57d069577763c57e734abcc348619d0ca352109afbc" gracePeriod=30 Jan 31 07:54:54 crc kubenswrapper[4826]: I0131 07:54:54.459612 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="40fd80c9-f288-46f2-884a-4719ec88efa7" containerName="ceilometer-notification-agent" containerID="cri-o://40aea415b84dcf2d253d0dc57f928e3a134c5a6d1b69e1859479bade972d7841" gracePeriod=30 Jan 31 07:54:54 crc kubenswrapper[4826]: I0131 07:54:54.459582 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="40fd80c9-f288-46f2-884a-4719ec88efa7" containerName="sg-core" containerID="cri-o://81d574875bba54b142761d592aa2974e8bb221725d3a9fdf669bf9c7be8bf459" gracePeriod=30 Jan 31 07:54:54 crc kubenswrapper[4826]: I0131 07:54:54.726985 4826 generic.go:334] "Generic (PLEG): container finished" podID="40fd80c9-f288-46f2-884a-4719ec88efa7" containerID="1a1aea7b0ac3006996a1f57d069577763c57e734abcc348619d0ca352109afbc" exitCode=0 Jan 31 07:54:54 crc kubenswrapper[4826]: I0131 07:54:54.727299 4826 generic.go:334] "Generic (PLEG): container finished" podID="40fd80c9-f288-46f2-884a-4719ec88efa7" containerID="81d574875bba54b142761d592aa2974e8bb221725d3a9fdf669bf9c7be8bf459" exitCode=2 Jan 31 07:54:54 crc kubenswrapper[4826]: I0131 07:54:54.727076 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40fd80c9-f288-46f2-884a-4719ec88efa7","Type":"ContainerDied","Data":"1a1aea7b0ac3006996a1f57d069577763c57e734abcc348619d0ca352109afbc"} Jan 31 07:54:54 crc kubenswrapper[4826]: I0131 07:54:54.727451 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40fd80c9-f288-46f2-884a-4719ec88efa7","Type":"ContainerDied","Data":"81d574875bba54b142761d592aa2974e8bb221725d3a9fdf669bf9c7be8bf459"} Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.235960 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.464696 4826 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.529022 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/40fd80c9-f288-46f2-884a-4719ec88efa7-sg-core-conf-yaml\") pod \"40fd80c9-f288-46f2-884a-4719ec88efa7\" (UID: \"40fd80c9-f288-46f2-884a-4719ec88efa7\") " Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.529078 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mr7g2\" (UniqueName: \"kubernetes.io/projected/40fd80c9-f288-46f2-884a-4719ec88efa7-kube-api-access-mr7g2\") pod \"40fd80c9-f288-46f2-884a-4719ec88efa7\" (UID: \"40fd80c9-f288-46f2-884a-4719ec88efa7\") " Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.529139 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40fd80c9-f288-46f2-884a-4719ec88efa7-config-data\") pod \"40fd80c9-f288-46f2-884a-4719ec88efa7\" (UID: \"40fd80c9-f288-46f2-884a-4719ec88efa7\") " Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.529157 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40fd80c9-f288-46f2-884a-4719ec88efa7-scripts\") pod \"40fd80c9-f288-46f2-884a-4719ec88efa7\" (UID: \"40fd80c9-f288-46f2-884a-4719ec88efa7\") " Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.529194 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40fd80c9-f288-46f2-884a-4719ec88efa7-combined-ca-bundle\") pod \"40fd80c9-f288-46f2-884a-4719ec88efa7\" (UID: \"40fd80c9-f288-46f2-884a-4719ec88efa7\") " Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.529220 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40fd80c9-f288-46f2-884a-4719ec88efa7-run-httpd\") pod \"40fd80c9-f288-46f2-884a-4719ec88efa7\" (UID: \"40fd80c9-f288-46f2-884a-4719ec88efa7\") " Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.529257 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40fd80c9-f288-46f2-884a-4719ec88efa7-log-httpd\") pod \"40fd80c9-f288-46f2-884a-4719ec88efa7\" (UID: \"40fd80c9-f288-46f2-884a-4719ec88efa7\") " Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.530036 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40fd80c9-f288-46f2-884a-4719ec88efa7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "40fd80c9-f288-46f2-884a-4719ec88efa7" (UID: "40fd80c9-f288-46f2-884a-4719ec88efa7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.530171 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40fd80c9-f288-46f2-884a-4719ec88efa7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "40fd80c9-f288-46f2-884a-4719ec88efa7" (UID: "40fd80c9-f288-46f2-884a-4719ec88efa7"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.533483 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40fd80c9-f288-46f2-884a-4719ec88efa7-kube-api-access-mr7g2" (OuterVolumeSpecName: "kube-api-access-mr7g2") pod "40fd80c9-f288-46f2-884a-4719ec88efa7" (UID: "40fd80c9-f288-46f2-884a-4719ec88efa7"). InnerVolumeSpecName "kube-api-access-mr7g2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.533487 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40fd80c9-f288-46f2-884a-4719ec88efa7-scripts" (OuterVolumeSpecName: "scripts") pod "40fd80c9-f288-46f2-884a-4719ec88efa7" (UID: "40fd80c9-f288-46f2-884a-4719ec88efa7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.559218 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40fd80c9-f288-46f2-884a-4719ec88efa7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "40fd80c9-f288-46f2-884a-4719ec88efa7" (UID: "40fd80c9-f288-46f2-884a-4719ec88efa7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.600277 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40fd80c9-f288-46f2-884a-4719ec88efa7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "40fd80c9-f288-46f2-884a-4719ec88efa7" (UID: "40fd80c9-f288-46f2-884a-4719ec88efa7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.630878 4826 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40fd80c9-f288-46f2-884a-4719ec88efa7-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.630910 4826 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/40fd80c9-f288-46f2-884a-4719ec88efa7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.630924 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mr7g2\" (UniqueName: \"kubernetes.io/projected/40fd80c9-f288-46f2-884a-4719ec88efa7-kube-api-access-mr7g2\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.630933 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40fd80c9-f288-46f2-884a-4719ec88efa7-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.630941 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40fd80c9-f288-46f2-884a-4719ec88efa7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.630949 4826 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40fd80c9-f288-46f2-884a-4719ec88efa7-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.664960 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/40fd80c9-f288-46f2-884a-4719ec88efa7-config-data" (OuterVolumeSpecName: "config-data") pod "40fd80c9-f288-46f2-884a-4719ec88efa7" (UID: "40fd80c9-f288-46f2-884a-4719ec88efa7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.732744 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40fd80c9-f288-46f2-884a-4719ec88efa7-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.740942 4826 generic.go:334] "Generic (PLEG): container finished" podID="40fd80c9-f288-46f2-884a-4719ec88efa7" containerID="40aea415b84dcf2d253d0dc57f928e3a134c5a6d1b69e1859479bade972d7841" exitCode=0 Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.741057 4826 generic.go:334] "Generic (PLEG): container finished" podID="40fd80c9-f288-46f2-884a-4719ec88efa7" containerID="d8e341c185087c1d2d01d2a2fa22bfbecfac6b308e3c481232ab117a09240a3e" exitCode=0 Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.741132 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40fd80c9-f288-46f2-884a-4719ec88efa7","Type":"ContainerDied","Data":"40aea415b84dcf2d253d0dc57f928e3a134c5a6d1b69e1859479bade972d7841"} Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.741172 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40fd80c9-f288-46f2-884a-4719ec88efa7","Type":"ContainerDied","Data":"d8e341c185087c1d2d01d2a2fa22bfbecfac6b308e3c481232ab117a09240a3e"} Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.741196 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40fd80c9-f288-46f2-884a-4719ec88efa7","Type":"ContainerDied","Data":"71fd30c5bf38bfb442a98e612303c965fd763ed26a73ad4fe91005c6b356206b"} Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.741231 4826 scope.go:117] "RemoveContainer" containerID="1a1aea7b0ac3006996a1f57d069577763c57e734abcc348619d0ca352109afbc" Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.741650 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.747947 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-l4j5z" event={"ID":"8baa3438-0c11-4a8c-b397-85247a6252c1","Type":"ContainerStarted","Data":"142e6fc54b1d556e8615b29228030913bfa9bf6e5e794e62616c9c16b9179162"} Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.781733 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-l4j5z" podStartSLOduration=1.834539846 podStartE2EDuration="10.781713786s" podCreationTimestamp="2026-01-31 07:54:45 +0000 UTC" firstStartedPulling="2026-01-31 07:54:46.290195507 +0000 UTC m=+1118.144081866" lastFinishedPulling="2026-01-31 07:54:55.237369447 +0000 UTC m=+1127.091255806" observedRunningTime="2026-01-31 07:54:55.781229432 +0000 UTC m=+1127.635115831" watchObservedRunningTime="2026-01-31 07:54:55.781713786 +0000 UTC m=+1127.635600145" Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.786201 4826 scope.go:117] "RemoveContainer" containerID="81d574875bba54b142761d592aa2974e8bb221725d3a9fdf669bf9c7be8bf459" Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.810220 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.831325 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.833441 4826 scope.go:117] "RemoveContainer" containerID="40aea415b84dcf2d253d0dc57f928e3a134c5a6d1b69e1859479bade972d7841" Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.841995 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:54:55 crc kubenswrapper[4826]: E0131 07:54:55.842499 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40fd80c9-f288-46f2-884a-4719ec88efa7" containerName="proxy-httpd" Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.842523 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="40fd80c9-f288-46f2-884a-4719ec88efa7" containerName="proxy-httpd" Jan 31 07:54:55 crc kubenswrapper[4826]: E0131 07:54:55.842539 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40fd80c9-f288-46f2-884a-4719ec88efa7" containerName="ceilometer-notification-agent" Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.842546 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="40fd80c9-f288-46f2-884a-4719ec88efa7" containerName="ceilometer-notification-agent" Jan 31 07:54:55 crc kubenswrapper[4826]: E0131 07:54:55.842560 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40fd80c9-f288-46f2-884a-4719ec88efa7" containerName="sg-core" Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.842567 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="40fd80c9-f288-46f2-884a-4719ec88efa7" containerName="sg-core" Jan 31 07:54:55 crc kubenswrapper[4826]: E0131 07:54:55.842587 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40fd80c9-f288-46f2-884a-4719ec88efa7" containerName="ceilometer-central-agent" Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.842593 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="40fd80c9-f288-46f2-884a-4719ec88efa7" containerName="ceilometer-central-agent" Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.842779 4826 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="40fd80c9-f288-46f2-884a-4719ec88efa7" containerName="ceilometer-notification-agent" Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.842794 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="40fd80c9-f288-46f2-884a-4719ec88efa7" containerName="sg-core" Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.842805 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="40fd80c9-f288-46f2-884a-4719ec88efa7" containerName="proxy-httpd" Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.842817 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="40fd80c9-f288-46f2-884a-4719ec88efa7" containerName="ceilometer-central-agent" Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.844509 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.848351 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.851477 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.855575 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.885995 4826 scope.go:117] "RemoveContainer" containerID="d8e341c185087c1d2d01d2a2fa22bfbecfac6b308e3c481232ab117a09240a3e" Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.912119 4826 scope.go:117] "RemoveContainer" containerID="1a1aea7b0ac3006996a1f57d069577763c57e734abcc348619d0ca352109afbc" Jan 31 07:54:55 crc kubenswrapper[4826]: E0131 07:54:55.913049 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a1aea7b0ac3006996a1f57d069577763c57e734abcc348619d0ca352109afbc\": container with ID starting with 1a1aea7b0ac3006996a1f57d069577763c57e734abcc348619d0ca352109afbc not found: ID does not exist" containerID="1a1aea7b0ac3006996a1f57d069577763c57e734abcc348619d0ca352109afbc" Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.913086 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a1aea7b0ac3006996a1f57d069577763c57e734abcc348619d0ca352109afbc"} err="failed to get container status \"1a1aea7b0ac3006996a1f57d069577763c57e734abcc348619d0ca352109afbc\": rpc error: code = NotFound desc = could not find container \"1a1aea7b0ac3006996a1f57d069577763c57e734abcc348619d0ca352109afbc\": container with ID starting with 1a1aea7b0ac3006996a1f57d069577763c57e734abcc348619d0ca352109afbc not found: ID does not exist" Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.913115 4826 scope.go:117] "RemoveContainer" containerID="81d574875bba54b142761d592aa2974e8bb221725d3a9fdf669bf9c7be8bf459" Jan 31 07:54:55 crc kubenswrapper[4826]: E0131 07:54:55.917109 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81d574875bba54b142761d592aa2974e8bb221725d3a9fdf669bf9c7be8bf459\": container with ID starting with 81d574875bba54b142761d592aa2974e8bb221725d3a9fdf669bf9c7be8bf459 not found: ID does not exist" containerID="81d574875bba54b142761d592aa2974e8bb221725d3a9fdf669bf9c7be8bf459" Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.917153 4826 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"81d574875bba54b142761d592aa2974e8bb221725d3a9fdf669bf9c7be8bf459"} err="failed to get container status \"81d574875bba54b142761d592aa2974e8bb221725d3a9fdf669bf9c7be8bf459\": rpc error: code = NotFound desc = could not find container \"81d574875bba54b142761d592aa2974e8bb221725d3a9fdf669bf9c7be8bf459\": container with ID starting with 81d574875bba54b142761d592aa2974e8bb221725d3a9fdf669bf9c7be8bf459 not found: ID does not exist" Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.917178 4826 scope.go:117] "RemoveContainer" containerID="40aea415b84dcf2d253d0dc57f928e3a134c5a6d1b69e1859479bade972d7841" Jan 31 07:54:55 crc kubenswrapper[4826]: E0131 07:54:55.917530 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40aea415b84dcf2d253d0dc57f928e3a134c5a6d1b69e1859479bade972d7841\": container with ID starting with 40aea415b84dcf2d253d0dc57f928e3a134c5a6d1b69e1859479bade972d7841 not found: ID does not exist" containerID="40aea415b84dcf2d253d0dc57f928e3a134c5a6d1b69e1859479bade972d7841" Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.917554 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40aea415b84dcf2d253d0dc57f928e3a134c5a6d1b69e1859479bade972d7841"} err="failed to get container status \"40aea415b84dcf2d253d0dc57f928e3a134c5a6d1b69e1859479bade972d7841\": rpc error: code = NotFound desc = could not find container \"40aea415b84dcf2d253d0dc57f928e3a134c5a6d1b69e1859479bade972d7841\": container with ID starting with 40aea415b84dcf2d253d0dc57f928e3a134c5a6d1b69e1859479bade972d7841 not found: ID does not exist" Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.917566 4826 scope.go:117] "RemoveContainer" containerID="d8e341c185087c1d2d01d2a2fa22bfbecfac6b308e3c481232ab117a09240a3e" Jan 31 07:54:55 crc kubenswrapper[4826]: E0131 07:54:55.921056 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8e341c185087c1d2d01d2a2fa22bfbecfac6b308e3c481232ab117a09240a3e\": container with ID starting with d8e341c185087c1d2d01d2a2fa22bfbecfac6b308e3c481232ab117a09240a3e not found: ID does not exist" containerID="d8e341c185087c1d2d01d2a2fa22bfbecfac6b308e3c481232ab117a09240a3e" Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.921086 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8e341c185087c1d2d01d2a2fa22bfbecfac6b308e3c481232ab117a09240a3e"} err="failed to get container status \"d8e341c185087c1d2d01d2a2fa22bfbecfac6b308e3c481232ab117a09240a3e\": rpc error: code = NotFound desc = could not find container \"d8e341c185087c1d2d01d2a2fa22bfbecfac6b308e3c481232ab117a09240a3e\": container with ID starting with d8e341c185087c1d2d01d2a2fa22bfbecfac6b308e3c481232ab117a09240a3e not found: ID does not exist" Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.921103 4826 scope.go:117] "RemoveContainer" containerID="1a1aea7b0ac3006996a1f57d069577763c57e734abcc348619d0ca352109afbc" Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.921571 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a1aea7b0ac3006996a1f57d069577763c57e734abcc348619d0ca352109afbc"} err="failed to get container status \"1a1aea7b0ac3006996a1f57d069577763c57e734abcc348619d0ca352109afbc\": rpc error: code = NotFound desc = could not find container 
\"1a1aea7b0ac3006996a1f57d069577763c57e734abcc348619d0ca352109afbc\": container with ID starting with 1a1aea7b0ac3006996a1f57d069577763c57e734abcc348619d0ca352109afbc not found: ID does not exist" Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.921592 4826 scope.go:117] "RemoveContainer" containerID="81d574875bba54b142761d592aa2974e8bb221725d3a9fdf669bf9c7be8bf459" Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.921771 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81d574875bba54b142761d592aa2974e8bb221725d3a9fdf669bf9c7be8bf459"} err="failed to get container status \"81d574875bba54b142761d592aa2974e8bb221725d3a9fdf669bf9c7be8bf459\": rpc error: code = NotFound desc = could not find container \"81d574875bba54b142761d592aa2974e8bb221725d3a9fdf669bf9c7be8bf459\": container with ID starting with 81d574875bba54b142761d592aa2974e8bb221725d3a9fdf669bf9c7be8bf459 not found: ID does not exist" Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.921790 4826 scope.go:117] "RemoveContainer" containerID="40aea415b84dcf2d253d0dc57f928e3a134c5a6d1b69e1859479bade972d7841" Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.921944 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40aea415b84dcf2d253d0dc57f928e3a134c5a6d1b69e1859479bade972d7841"} err="failed to get container status \"40aea415b84dcf2d253d0dc57f928e3a134c5a6d1b69e1859479bade972d7841\": rpc error: code = NotFound desc = could not find container \"40aea415b84dcf2d253d0dc57f928e3a134c5a6d1b69e1859479bade972d7841\": container with ID starting with 40aea415b84dcf2d253d0dc57f928e3a134c5a6d1b69e1859479bade972d7841 not found: ID does not exist" Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.921997 4826 scope.go:117] "RemoveContainer" containerID="d8e341c185087c1d2d01d2a2fa22bfbecfac6b308e3c481232ab117a09240a3e" Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.922162 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8e341c185087c1d2d01d2a2fa22bfbecfac6b308e3c481232ab117a09240a3e"} err="failed to get container status \"d8e341c185087c1d2d01d2a2fa22bfbecfac6b308e3c481232ab117a09240a3e\": rpc error: code = NotFound desc = could not find container \"d8e341c185087c1d2d01d2a2fa22bfbecfac6b308e3c481232ab117a09240a3e\": container with ID starting with d8e341c185087c1d2d01d2a2fa22bfbecfac6b308e3c481232ab117a09240a3e not found: ID does not exist" Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.935807 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76d96d8b-9ab8-4421-bb39-879e200b1b50-log-httpd\") pod \"ceilometer-0\" (UID: \"76d96d8b-9ab8-4421-bb39-879e200b1b50\") " pod="openstack/ceilometer-0" Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.935878 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76d96d8b-9ab8-4421-bb39-879e200b1b50-scripts\") pod \"ceilometer-0\" (UID: \"76d96d8b-9ab8-4421-bb39-879e200b1b50\") " pod="openstack/ceilometer-0" Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.936051 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76d96d8b-9ab8-4421-bb39-879e200b1b50-run-httpd\") pod \"ceilometer-0\" (UID: 
\"76d96d8b-9ab8-4421-bb39-879e200b1b50\") " pod="openstack/ceilometer-0" Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.936092 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76d96d8b-9ab8-4421-bb39-879e200b1b50-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"76d96d8b-9ab8-4421-bb39-879e200b1b50\") " pod="openstack/ceilometer-0" Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.936200 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76d96d8b-9ab8-4421-bb39-879e200b1b50-config-data\") pod \"ceilometer-0\" (UID: \"76d96d8b-9ab8-4421-bb39-879e200b1b50\") " pod="openstack/ceilometer-0" Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.936443 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw68x\" (UniqueName: \"kubernetes.io/projected/76d96d8b-9ab8-4421-bb39-879e200b1b50-kube-api-access-dw68x\") pod \"ceilometer-0\" (UID: \"76d96d8b-9ab8-4421-bb39-879e200b1b50\") " pod="openstack/ceilometer-0" Jan 31 07:54:55 crc kubenswrapper[4826]: I0131 07:54:55.936491 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/76d96d8b-9ab8-4421-bb39-879e200b1b50-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"76d96d8b-9ab8-4421-bb39-879e200b1b50\") " pod="openstack/ceilometer-0" Jan 31 07:54:56 crc kubenswrapper[4826]: I0131 07:54:56.038583 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76d96d8b-9ab8-4421-bb39-879e200b1b50-config-data\") pod \"ceilometer-0\" (UID: \"76d96d8b-9ab8-4421-bb39-879e200b1b50\") " pod="openstack/ceilometer-0" Jan 31 07:54:56 crc kubenswrapper[4826]: I0131 07:54:56.038702 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dw68x\" (UniqueName: \"kubernetes.io/projected/76d96d8b-9ab8-4421-bb39-879e200b1b50-kube-api-access-dw68x\") pod \"ceilometer-0\" (UID: \"76d96d8b-9ab8-4421-bb39-879e200b1b50\") " pod="openstack/ceilometer-0" Jan 31 07:54:56 crc kubenswrapper[4826]: I0131 07:54:56.038773 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/76d96d8b-9ab8-4421-bb39-879e200b1b50-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"76d96d8b-9ab8-4421-bb39-879e200b1b50\") " pod="openstack/ceilometer-0" Jan 31 07:54:56 crc kubenswrapper[4826]: I0131 07:54:56.038866 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76d96d8b-9ab8-4421-bb39-879e200b1b50-log-httpd\") pod \"ceilometer-0\" (UID: \"76d96d8b-9ab8-4421-bb39-879e200b1b50\") " pod="openstack/ceilometer-0" Jan 31 07:54:56 crc kubenswrapper[4826]: I0131 07:54:56.038913 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76d96d8b-9ab8-4421-bb39-879e200b1b50-scripts\") pod \"ceilometer-0\" (UID: \"76d96d8b-9ab8-4421-bb39-879e200b1b50\") " pod="openstack/ceilometer-0" Jan 31 07:54:56 crc kubenswrapper[4826]: I0131 07:54:56.039020 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/76d96d8b-9ab8-4421-bb39-879e200b1b50-run-httpd\") pod \"ceilometer-0\" (UID: \"76d96d8b-9ab8-4421-bb39-879e200b1b50\") " pod="openstack/ceilometer-0" Jan 31 07:54:56 crc kubenswrapper[4826]: I0131 07:54:56.039064 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76d96d8b-9ab8-4421-bb39-879e200b1b50-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"76d96d8b-9ab8-4421-bb39-879e200b1b50\") " pod="openstack/ceilometer-0" Jan 31 07:54:56 crc kubenswrapper[4826]: I0131 07:54:56.039749 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76d96d8b-9ab8-4421-bb39-879e200b1b50-run-httpd\") pod \"ceilometer-0\" (UID: \"76d96d8b-9ab8-4421-bb39-879e200b1b50\") " pod="openstack/ceilometer-0" Jan 31 07:54:56 crc kubenswrapper[4826]: I0131 07:54:56.040018 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76d96d8b-9ab8-4421-bb39-879e200b1b50-log-httpd\") pod \"ceilometer-0\" (UID: \"76d96d8b-9ab8-4421-bb39-879e200b1b50\") " pod="openstack/ceilometer-0" Jan 31 07:54:56 crc kubenswrapper[4826]: I0131 07:54:56.044665 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/76d96d8b-9ab8-4421-bb39-879e200b1b50-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"76d96d8b-9ab8-4421-bb39-879e200b1b50\") " pod="openstack/ceilometer-0" Jan 31 07:54:56 crc kubenswrapper[4826]: I0131 07:54:56.045443 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76d96d8b-9ab8-4421-bb39-879e200b1b50-scripts\") pod \"ceilometer-0\" (UID: \"76d96d8b-9ab8-4421-bb39-879e200b1b50\") " pod="openstack/ceilometer-0" Jan 31 07:54:56 crc kubenswrapper[4826]: I0131 07:54:56.045670 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76d96d8b-9ab8-4421-bb39-879e200b1b50-config-data\") pod \"ceilometer-0\" (UID: \"76d96d8b-9ab8-4421-bb39-879e200b1b50\") " pod="openstack/ceilometer-0" Jan 31 07:54:56 crc kubenswrapper[4826]: I0131 07:54:56.046742 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76d96d8b-9ab8-4421-bb39-879e200b1b50-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"76d96d8b-9ab8-4421-bb39-879e200b1b50\") " pod="openstack/ceilometer-0" Jan 31 07:54:56 crc kubenswrapper[4826]: I0131 07:54:56.061354 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw68x\" (UniqueName: \"kubernetes.io/projected/76d96d8b-9ab8-4421-bb39-879e200b1b50-kube-api-access-dw68x\") pod \"ceilometer-0\" (UID: \"76d96d8b-9ab8-4421-bb39-879e200b1b50\") " pod="openstack/ceilometer-0" Jan 31 07:54:56 crc kubenswrapper[4826]: I0131 07:54:56.186594 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 07:54:56 crc kubenswrapper[4826]: I0131 07:54:56.620495 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:54:56 crc kubenswrapper[4826]: I0131 07:54:56.724080 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:54:56 crc kubenswrapper[4826]: I0131 07:54:56.771211 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"76d96d8b-9ab8-4421-bb39-879e200b1b50","Type":"ContainerStarted","Data":"a438a2db4423a4383985e7c82b45a6571e022c8437545325aef10f581975f7a1"} Jan 31 07:54:56 crc kubenswrapper[4826]: I0131 07:54:56.821498 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40fd80c9-f288-46f2-884a-4719ec88efa7" path="/var/lib/kubelet/pods/40fd80c9-f288-46f2-884a-4719ec88efa7/volumes" Jan 31 07:54:57 crc kubenswrapper[4826]: I0131 07:54:57.780169 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"76d96d8b-9ab8-4421-bb39-879e200b1b50","Type":"ContainerStarted","Data":"7401cd8208a9b706d9921f1e190068f8c69e6d52700a94887b26dd779773296e"} Jan 31 07:54:58 crc kubenswrapper[4826]: I0131 07:54:58.794013 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"76d96d8b-9ab8-4421-bb39-879e200b1b50","Type":"ContainerStarted","Data":"be0409938a732ab957f3e826a708f5b6de8bb9e4f1719b5545c1e34407258ca5"} Jan 31 07:54:58 crc kubenswrapper[4826]: I0131 07:54:58.794397 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"76d96d8b-9ab8-4421-bb39-879e200b1b50","Type":"ContainerStarted","Data":"63fdd3fc84afb9a327778187df782a89e2f3fc687e937bd903123b7c18d4ed83"} Jan 31 07:55:00 crc kubenswrapper[4826]: I0131 07:55:00.817430 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="76d96d8b-9ab8-4421-bb39-879e200b1b50" containerName="ceilometer-central-agent" containerID="cri-o://7401cd8208a9b706d9921f1e190068f8c69e6d52700a94887b26dd779773296e" gracePeriod=30 Jan 31 07:55:00 crc kubenswrapper[4826]: I0131 07:55:00.817509 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="76d96d8b-9ab8-4421-bb39-879e200b1b50" containerName="proxy-httpd" containerID="cri-o://a923b2533217517d5d42715eb8e1bedce9c6f5f5ba0af7fb223936bc6f3386c4" gracePeriod=30 Jan 31 07:55:00 crc kubenswrapper[4826]: I0131 07:55:00.817572 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="76d96d8b-9ab8-4421-bb39-879e200b1b50" containerName="sg-core" containerID="cri-o://be0409938a732ab957f3e826a708f5b6de8bb9e4f1719b5545c1e34407258ca5" gracePeriod=30 Jan 31 07:55:00 crc kubenswrapper[4826]: I0131 07:55:00.817589 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="76d96d8b-9ab8-4421-bb39-879e200b1b50" containerName="ceilometer-notification-agent" containerID="cri-o://63fdd3fc84afb9a327778187df782a89e2f3fc687e937bd903123b7c18d4ed83" gracePeriod=30 Jan 31 07:55:00 crc kubenswrapper[4826]: I0131 07:55:00.823189 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 07:55:00 crc kubenswrapper[4826]: I0131 07:55:00.823358 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"76d96d8b-9ab8-4421-bb39-879e200b1b50","Type":"ContainerStarted","Data":"a923b2533217517d5d42715eb8e1bedce9c6f5f5ba0af7fb223936bc6f3386c4"} Jan 31 07:55:00 crc kubenswrapper[4826]: I0131 07:55:00.845313 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.065703166 podStartE2EDuration="5.845295048s" podCreationTimestamp="2026-01-31 07:54:55 +0000 UTC" firstStartedPulling="2026-01-31 07:54:56.632616259 +0000 UTC m=+1128.486502618" lastFinishedPulling="2026-01-31 07:55:00.412208121 +0000 UTC m=+1132.266094500" observedRunningTime="2026-01-31 07:55:00.837111886 +0000 UTC m=+1132.690998255" watchObservedRunningTime="2026-01-31 07:55:00.845295048 +0000 UTC m=+1132.699181407" Jan 31 07:55:01 crc kubenswrapper[4826]: I0131 07:55:01.828490 4826 generic.go:334] "Generic (PLEG): container finished" podID="76d96d8b-9ab8-4421-bb39-879e200b1b50" containerID="a923b2533217517d5d42715eb8e1bedce9c6f5f5ba0af7fb223936bc6f3386c4" exitCode=0 Jan 31 07:55:01 crc kubenswrapper[4826]: I0131 07:55:01.828803 4826 generic.go:334] "Generic (PLEG): container finished" podID="76d96d8b-9ab8-4421-bb39-879e200b1b50" containerID="be0409938a732ab957f3e826a708f5b6de8bb9e4f1719b5545c1e34407258ca5" exitCode=2 Jan 31 07:55:01 crc kubenswrapper[4826]: I0131 07:55:01.828813 4826 generic.go:334] "Generic (PLEG): container finished" podID="76d96d8b-9ab8-4421-bb39-879e200b1b50" containerID="63fdd3fc84afb9a327778187df782a89e2f3fc687e937bd903123b7c18d4ed83" exitCode=0 Jan 31 07:55:01 crc kubenswrapper[4826]: I0131 07:55:01.828572 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"76d96d8b-9ab8-4421-bb39-879e200b1b50","Type":"ContainerDied","Data":"a923b2533217517d5d42715eb8e1bedce9c6f5f5ba0af7fb223936bc6f3386c4"} Jan 31 07:55:01 crc kubenswrapper[4826]: I0131 07:55:01.828848 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"76d96d8b-9ab8-4421-bb39-879e200b1b50","Type":"ContainerDied","Data":"be0409938a732ab957f3e826a708f5b6de8bb9e4f1719b5545c1e34407258ca5"} Jan 31 07:55:01 crc kubenswrapper[4826]: I0131 07:55:01.828864 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"76d96d8b-9ab8-4421-bb39-879e200b1b50","Type":"ContainerDied","Data":"63fdd3fc84afb9a327778187df782a89e2f3fc687e937bd903123b7c18d4ed83"} Jan 31 07:55:03 crc kubenswrapper[4826]: I0131 07:55:03.853298 4826 generic.go:334] "Generic (PLEG): container finished" podID="76d96d8b-9ab8-4421-bb39-879e200b1b50" containerID="7401cd8208a9b706d9921f1e190068f8c69e6d52700a94887b26dd779773296e" exitCode=0 Jan 31 07:55:03 crc kubenswrapper[4826]: I0131 07:55:03.853354 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"76d96d8b-9ab8-4421-bb39-879e200b1b50","Type":"ContainerDied","Data":"7401cd8208a9b706d9921f1e190068f8c69e6d52700a94887b26dd779773296e"} Jan 31 07:55:04 crc kubenswrapper[4826]: I0131 07:55:04.220417 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 07:55:04 crc kubenswrapper[4826]: I0131 07:55:04.407910 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76d96d8b-9ab8-4421-bb39-879e200b1b50-run-httpd\") pod \"76d96d8b-9ab8-4421-bb39-879e200b1b50\" (UID: \"76d96d8b-9ab8-4421-bb39-879e200b1b50\") " Jan 31 07:55:04 crc kubenswrapper[4826]: I0131 07:55:04.408004 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76d96d8b-9ab8-4421-bb39-879e200b1b50-scripts\") pod \"76d96d8b-9ab8-4421-bb39-879e200b1b50\" (UID: \"76d96d8b-9ab8-4421-bb39-879e200b1b50\") " Jan 31 07:55:04 crc kubenswrapper[4826]: I0131 07:55:04.408138 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76d96d8b-9ab8-4421-bb39-879e200b1b50-log-httpd\") pod \"76d96d8b-9ab8-4421-bb39-879e200b1b50\" (UID: \"76d96d8b-9ab8-4421-bb39-879e200b1b50\") " Jan 31 07:55:04 crc kubenswrapper[4826]: I0131 07:55:04.408173 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/76d96d8b-9ab8-4421-bb39-879e200b1b50-sg-core-conf-yaml\") pod \"76d96d8b-9ab8-4421-bb39-879e200b1b50\" (UID: \"76d96d8b-9ab8-4421-bb39-879e200b1b50\") " Jan 31 07:55:04 crc kubenswrapper[4826]: I0131 07:55:04.408204 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76d96d8b-9ab8-4421-bb39-879e200b1b50-combined-ca-bundle\") pod \"76d96d8b-9ab8-4421-bb39-879e200b1b50\" (UID: \"76d96d8b-9ab8-4421-bb39-879e200b1b50\") " Jan 31 07:55:04 crc kubenswrapper[4826]: I0131 07:55:04.408238 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76d96d8b-9ab8-4421-bb39-879e200b1b50-config-data\") pod \"76d96d8b-9ab8-4421-bb39-879e200b1b50\" (UID: \"76d96d8b-9ab8-4421-bb39-879e200b1b50\") " Jan 31 07:55:04 crc kubenswrapper[4826]: I0131 07:55:04.408308 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76d96d8b-9ab8-4421-bb39-879e200b1b50-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "76d96d8b-9ab8-4421-bb39-879e200b1b50" (UID: "76d96d8b-9ab8-4421-bb39-879e200b1b50"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:55:04 crc kubenswrapper[4826]: I0131 07:55:04.408322 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dw68x\" (UniqueName: \"kubernetes.io/projected/76d96d8b-9ab8-4421-bb39-879e200b1b50-kube-api-access-dw68x\") pod \"76d96d8b-9ab8-4421-bb39-879e200b1b50\" (UID: \"76d96d8b-9ab8-4421-bb39-879e200b1b50\") " Jan 31 07:55:04 crc kubenswrapper[4826]: I0131 07:55:04.408912 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76d96d8b-9ab8-4421-bb39-879e200b1b50-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "76d96d8b-9ab8-4421-bb39-879e200b1b50" (UID: "76d96d8b-9ab8-4421-bb39-879e200b1b50"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:55:04 crc kubenswrapper[4826]: I0131 07:55:04.409090 4826 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76d96d8b-9ab8-4421-bb39-879e200b1b50-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 07:55:04 crc kubenswrapper[4826]: I0131 07:55:04.409112 4826 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76d96d8b-9ab8-4421-bb39-879e200b1b50-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 07:55:04 crc kubenswrapper[4826]: I0131 07:55:04.414581 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76d96d8b-9ab8-4421-bb39-879e200b1b50-kube-api-access-dw68x" (OuterVolumeSpecName: "kube-api-access-dw68x") pod "76d96d8b-9ab8-4421-bb39-879e200b1b50" (UID: "76d96d8b-9ab8-4421-bb39-879e200b1b50"). InnerVolumeSpecName "kube-api-access-dw68x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:55:04 crc kubenswrapper[4826]: I0131 07:55:04.415104 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76d96d8b-9ab8-4421-bb39-879e200b1b50-scripts" (OuterVolumeSpecName: "scripts") pod "76d96d8b-9ab8-4421-bb39-879e200b1b50" (UID: "76d96d8b-9ab8-4421-bb39-879e200b1b50"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:55:04 crc kubenswrapper[4826]: I0131 07:55:04.435149 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76d96d8b-9ab8-4421-bb39-879e200b1b50-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "76d96d8b-9ab8-4421-bb39-879e200b1b50" (UID: "76d96d8b-9ab8-4421-bb39-879e200b1b50"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:55:04 crc kubenswrapper[4826]: I0131 07:55:04.493588 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76d96d8b-9ab8-4421-bb39-879e200b1b50-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "76d96d8b-9ab8-4421-bb39-879e200b1b50" (UID: "76d96d8b-9ab8-4421-bb39-879e200b1b50"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:55:04 crc kubenswrapper[4826]: I0131 07:55:04.510800 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dw68x\" (UniqueName: \"kubernetes.io/projected/76d96d8b-9ab8-4421-bb39-879e200b1b50-kube-api-access-dw68x\") on node \"crc\" DevicePath \"\"" Jan 31 07:55:04 crc kubenswrapper[4826]: I0131 07:55:04.510843 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76d96d8b-9ab8-4421-bb39-879e200b1b50-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:55:04 crc kubenswrapper[4826]: I0131 07:55:04.510859 4826 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/76d96d8b-9ab8-4421-bb39-879e200b1b50-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 07:55:04 crc kubenswrapper[4826]: I0131 07:55:04.510870 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76d96d8b-9ab8-4421-bb39-879e200b1b50-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:55:04 crc kubenswrapper[4826]: I0131 07:55:04.537876 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76d96d8b-9ab8-4421-bb39-879e200b1b50-config-data" (OuterVolumeSpecName: "config-data") pod "76d96d8b-9ab8-4421-bb39-879e200b1b50" (UID: "76d96d8b-9ab8-4421-bb39-879e200b1b50"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:55:04 crc kubenswrapper[4826]: I0131 07:55:04.613187 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76d96d8b-9ab8-4421-bb39-879e200b1b50-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:55:04 crc kubenswrapper[4826]: I0131 07:55:04.869412 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"76d96d8b-9ab8-4421-bb39-879e200b1b50","Type":"ContainerDied","Data":"a438a2db4423a4383985e7c82b45a6571e022c8437545325aef10f581975f7a1"} Jan 31 07:55:04 crc kubenswrapper[4826]: I0131 07:55:04.869814 4826 scope.go:117] "RemoveContainer" containerID="a923b2533217517d5d42715eb8e1bedce9c6f5f5ba0af7fb223936bc6f3386c4" Jan 31 07:55:04 crc kubenswrapper[4826]: I0131 07:55:04.869534 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 07:55:04 crc kubenswrapper[4826]: I0131 07:55:04.958736 4826 scope.go:117] "RemoveContainer" containerID="be0409938a732ab957f3e826a708f5b6de8bb9e4f1719b5545c1e34407258ca5" Jan 31 07:55:04 crc kubenswrapper[4826]: I0131 07:55:04.958861 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:55:04 crc kubenswrapper[4826]: I0131 07:55:04.972947 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:55:04 crc kubenswrapper[4826]: I0131 07:55:04.987519 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:55:04 crc kubenswrapper[4826]: E0131 07:55:04.988027 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76d96d8b-9ab8-4421-bb39-879e200b1b50" containerName="sg-core" Jan 31 07:55:04 crc kubenswrapper[4826]: I0131 07:55:04.988048 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="76d96d8b-9ab8-4421-bb39-879e200b1b50" containerName="sg-core" Jan 31 07:55:04 crc kubenswrapper[4826]: E0131 07:55:04.988084 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76d96d8b-9ab8-4421-bb39-879e200b1b50" containerName="ceilometer-notification-agent" Jan 31 07:55:04 crc kubenswrapper[4826]: I0131 07:55:04.988097 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="76d96d8b-9ab8-4421-bb39-879e200b1b50" containerName="ceilometer-notification-agent" Jan 31 07:55:04 crc kubenswrapper[4826]: E0131 07:55:04.988110 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76d96d8b-9ab8-4421-bb39-879e200b1b50" containerName="ceilometer-central-agent" Jan 31 07:55:04 crc kubenswrapper[4826]: I0131 07:55:04.988121 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="76d96d8b-9ab8-4421-bb39-879e200b1b50" containerName="ceilometer-central-agent" Jan 31 07:55:04 crc kubenswrapper[4826]: E0131 07:55:04.988150 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76d96d8b-9ab8-4421-bb39-879e200b1b50" containerName="proxy-httpd" Jan 31 07:55:04 crc kubenswrapper[4826]: I0131 07:55:04.988162 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="76d96d8b-9ab8-4421-bb39-879e200b1b50" containerName="proxy-httpd" Jan 31 07:55:04 crc kubenswrapper[4826]: I0131 07:55:04.988408 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="76d96d8b-9ab8-4421-bb39-879e200b1b50" containerName="ceilometer-notification-agent" Jan 31 07:55:04 crc kubenswrapper[4826]: I0131 07:55:04.988432 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="76d96d8b-9ab8-4421-bb39-879e200b1b50" containerName="ceilometer-central-agent" Jan 31 07:55:04 crc kubenswrapper[4826]: I0131 07:55:04.988447 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="76d96d8b-9ab8-4421-bb39-879e200b1b50" containerName="sg-core" Jan 31 07:55:04 crc kubenswrapper[4826]: I0131 07:55:04.988457 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="76d96d8b-9ab8-4421-bb39-879e200b1b50" containerName="proxy-httpd" Jan 31 07:55:05 crc kubenswrapper[4826]: I0131 07:55:05.002289 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 07:55:05 crc kubenswrapper[4826]: I0131 07:55:05.005292 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:55:05 crc kubenswrapper[4826]: I0131 07:55:05.006162 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 07:55:05 crc kubenswrapper[4826]: I0131 07:55:05.006482 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 07:55:05 crc kubenswrapper[4826]: I0131 07:55:05.055588 4826 scope.go:117] "RemoveContainer" containerID="63fdd3fc84afb9a327778187df782a89e2f3fc687e937bd903123b7c18d4ed83" Jan 31 07:55:05 crc kubenswrapper[4826]: I0131 07:55:05.075591 4826 scope.go:117] "RemoveContainer" containerID="7401cd8208a9b706d9921f1e190068f8c69e6d52700a94887b26dd779773296e" Jan 31 07:55:05 crc kubenswrapper[4826]: I0131 07:55:05.125543 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/529222c0-d23d-4cb3-b410-ecbeada1454d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"529222c0-d23d-4cb3-b410-ecbeada1454d\") " pod="openstack/ceilometer-0" Jan 31 07:55:05 crc kubenswrapper[4826]: I0131 07:55:05.125625 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/529222c0-d23d-4cb3-b410-ecbeada1454d-config-data\") pod \"ceilometer-0\" (UID: \"529222c0-d23d-4cb3-b410-ecbeada1454d\") " pod="openstack/ceilometer-0" Jan 31 07:55:05 crc kubenswrapper[4826]: I0131 07:55:05.125700 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/529222c0-d23d-4cb3-b410-ecbeada1454d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"529222c0-d23d-4cb3-b410-ecbeada1454d\") " pod="openstack/ceilometer-0" Jan 31 07:55:05 crc kubenswrapper[4826]: I0131 07:55:05.125866 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/529222c0-d23d-4cb3-b410-ecbeada1454d-log-httpd\") pod \"ceilometer-0\" (UID: \"529222c0-d23d-4cb3-b410-ecbeada1454d\") " pod="openstack/ceilometer-0" Jan 31 07:55:05 crc kubenswrapper[4826]: I0131 07:55:05.126100 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdhjl\" (UniqueName: \"kubernetes.io/projected/529222c0-d23d-4cb3-b410-ecbeada1454d-kube-api-access-pdhjl\") pod \"ceilometer-0\" (UID: \"529222c0-d23d-4cb3-b410-ecbeada1454d\") " pod="openstack/ceilometer-0" Jan 31 07:55:05 crc kubenswrapper[4826]: I0131 07:55:05.126125 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/529222c0-d23d-4cb3-b410-ecbeada1454d-run-httpd\") pod \"ceilometer-0\" (UID: \"529222c0-d23d-4cb3-b410-ecbeada1454d\") " pod="openstack/ceilometer-0" Jan 31 07:55:05 crc kubenswrapper[4826]: I0131 07:55:05.126167 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/529222c0-d23d-4cb3-b410-ecbeada1454d-scripts\") pod \"ceilometer-0\" (UID: \"529222c0-d23d-4cb3-b410-ecbeada1454d\") " pod="openstack/ceilometer-0" Jan 31 07:55:05 crc kubenswrapper[4826]: I0131 
07:55:05.228071 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdhjl\" (UniqueName: \"kubernetes.io/projected/529222c0-d23d-4cb3-b410-ecbeada1454d-kube-api-access-pdhjl\") pod \"ceilometer-0\" (UID: \"529222c0-d23d-4cb3-b410-ecbeada1454d\") " pod="openstack/ceilometer-0" Jan 31 07:55:05 crc kubenswrapper[4826]: I0131 07:55:05.228138 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/529222c0-d23d-4cb3-b410-ecbeada1454d-run-httpd\") pod \"ceilometer-0\" (UID: \"529222c0-d23d-4cb3-b410-ecbeada1454d\") " pod="openstack/ceilometer-0" Jan 31 07:55:05 crc kubenswrapper[4826]: I0131 07:55:05.228175 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/529222c0-d23d-4cb3-b410-ecbeada1454d-scripts\") pod \"ceilometer-0\" (UID: \"529222c0-d23d-4cb3-b410-ecbeada1454d\") " pod="openstack/ceilometer-0" Jan 31 07:55:05 crc kubenswrapper[4826]: I0131 07:55:05.228212 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/529222c0-d23d-4cb3-b410-ecbeada1454d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"529222c0-d23d-4cb3-b410-ecbeada1454d\") " pod="openstack/ceilometer-0" Jan 31 07:55:05 crc kubenswrapper[4826]: I0131 07:55:05.228238 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/529222c0-d23d-4cb3-b410-ecbeada1454d-config-data\") pod \"ceilometer-0\" (UID: \"529222c0-d23d-4cb3-b410-ecbeada1454d\") " pod="openstack/ceilometer-0" Jan 31 07:55:05 crc kubenswrapper[4826]: I0131 07:55:05.228306 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/529222c0-d23d-4cb3-b410-ecbeada1454d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"529222c0-d23d-4cb3-b410-ecbeada1454d\") " pod="openstack/ceilometer-0" Jan 31 07:55:05 crc kubenswrapper[4826]: I0131 07:55:05.228372 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/529222c0-d23d-4cb3-b410-ecbeada1454d-log-httpd\") pod \"ceilometer-0\" (UID: \"529222c0-d23d-4cb3-b410-ecbeada1454d\") " pod="openstack/ceilometer-0" Jan 31 07:55:05 crc kubenswrapper[4826]: I0131 07:55:05.228736 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/529222c0-d23d-4cb3-b410-ecbeada1454d-run-httpd\") pod \"ceilometer-0\" (UID: \"529222c0-d23d-4cb3-b410-ecbeada1454d\") " pod="openstack/ceilometer-0" Jan 31 07:55:05 crc kubenswrapper[4826]: I0131 07:55:05.228992 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/529222c0-d23d-4cb3-b410-ecbeada1454d-log-httpd\") pod \"ceilometer-0\" (UID: \"529222c0-d23d-4cb3-b410-ecbeada1454d\") " pod="openstack/ceilometer-0" Jan 31 07:55:05 crc kubenswrapper[4826]: I0131 07:55:05.234058 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/529222c0-d23d-4cb3-b410-ecbeada1454d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"529222c0-d23d-4cb3-b410-ecbeada1454d\") " pod="openstack/ceilometer-0" Jan 31 07:55:05 crc kubenswrapper[4826]: I0131 07:55:05.234628 4826 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/529222c0-d23d-4cb3-b410-ecbeada1454d-config-data\") pod \"ceilometer-0\" (UID: \"529222c0-d23d-4cb3-b410-ecbeada1454d\") " pod="openstack/ceilometer-0" Jan 31 07:55:05 crc kubenswrapper[4826]: I0131 07:55:05.235299 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/529222c0-d23d-4cb3-b410-ecbeada1454d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"529222c0-d23d-4cb3-b410-ecbeada1454d\") " pod="openstack/ceilometer-0" Jan 31 07:55:05 crc kubenswrapper[4826]: I0131 07:55:05.244168 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/529222c0-d23d-4cb3-b410-ecbeada1454d-scripts\") pod \"ceilometer-0\" (UID: \"529222c0-d23d-4cb3-b410-ecbeada1454d\") " pod="openstack/ceilometer-0" Jan 31 07:55:05 crc kubenswrapper[4826]: I0131 07:55:05.246765 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdhjl\" (UniqueName: \"kubernetes.io/projected/529222c0-d23d-4cb3-b410-ecbeada1454d-kube-api-access-pdhjl\") pod \"ceilometer-0\" (UID: \"529222c0-d23d-4cb3-b410-ecbeada1454d\") " pod="openstack/ceilometer-0" Jan 31 07:55:05 crc kubenswrapper[4826]: I0131 07:55:05.330121 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 07:55:05 crc kubenswrapper[4826]: I0131 07:55:05.787582 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:55:05 crc kubenswrapper[4826]: W0131 07:55:05.787757 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod529222c0_d23d_4cb3_b410_ecbeada1454d.slice/crio-52f3dda433c139b5af37a1d7ed3ea94b384d72a782cf49e990f6c857744ce67a WatchSource:0}: Error finding container 52f3dda433c139b5af37a1d7ed3ea94b384d72a782cf49e990f6c857744ce67a: Status 404 returned error can't find the container with id 52f3dda433c139b5af37a1d7ed3ea94b384d72a782cf49e990f6c857744ce67a Jan 31 07:55:05 crc kubenswrapper[4826]: I0131 07:55:05.881471 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"529222c0-d23d-4cb3-b410-ecbeada1454d","Type":"ContainerStarted","Data":"52f3dda433c139b5af37a1d7ed3ea94b384d72a782cf49e990f6c857744ce67a"} Jan 31 07:55:06 crc kubenswrapper[4826]: I0131 07:55:06.819924 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76d96d8b-9ab8-4421-bb39-879e200b1b50" path="/var/lib/kubelet/pods/76d96d8b-9ab8-4421-bb39-879e200b1b50/volumes" Jan 31 07:55:06 crc kubenswrapper[4826]: I0131 07:55:06.890736 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"529222c0-d23d-4cb3-b410-ecbeada1454d","Type":"ContainerStarted","Data":"2e331ab7d836704e1ac8a6e489b6da6a1ed4aa2e85372659ce62d0f58ee99a87"} Jan 31 07:55:07 crc kubenswrapper[4826]: I0131 07:55:07.909231 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"529222c0-d23d-4cb3-b410-ecbeada1454d","Type":"ContainerStarted","Data":"9a2a5e9e415f06e85eaaaafb5540e25f36feccfd015f9e51d464bda14c265105"} Jan 31 07:55:08 crc kubenswrapper[4826]: I0131 07:55:08.919734 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"529222c0-d23d-4cb3-b410-ecbeada1454d","Type":"ContainerStarted","Data":"2f1acc3bcff852d0ddf6dda90adabf339139977f3cdd14065e86b8e402c4270e"} Jan 31 07:55:09 crc kubenswrapper[4826]: I0131 07:55:09.930993 4826 generic.go:334] "Generic (PLEG): container finished" podID="8baa3438-0c11-4a8c-b397-85247a6252c1" containerID="142e6fc54b1d556e8615b29228030913bfa9bf6e5e794e62616c9c16b9179162" exitCode=0 Jan 31 07:55:09 crc kubenswrapper[4826]: I0131 07:55:09.930992 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-l4j5z" event={"ID":"8baa3438-0c11-4a8c-b397-85247a6252c1","Type":"ContainerDied","Data":"142e6fc54b1d556e8615b29228030913bfa9bf6e5e794e62616c9c16b9179162"} Jan 31 07:55:10 crc kubenswrapper[4826]: I0131 07:55:10.947313 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"529222c0-d23d-4cb3-b410-ecbeada1454d","Type":"ContainerStarted","Data":"9edb276bcf73a5fa1026194be539d0c5e076acd9f63346341f451d1a0790e35c"} Jan 31 07:55:10 crc kubenswrapper[4826]: I0131 07:55:10.949958 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 07:55:10 crc kubenswrapper[4826]: I0131 07:55:10.997295 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.054743159 podStartE2EDuration="6.997271336s" podCreationTimestamp="2026-01-31 07:55:04 +0000 UTC" firstStartedPulling="2026-01-31 07:55:05.790088526 +0000 UTC m=+1137.643974905" lastFinishedPulling="2026-01-31 07:55:09.732616723 +0000 UTC m=+1141.586503082" observedRunningTime="2026-01-31 07:55:10.979913835 +0000 UTC m=+1142.833800254" watchObservedRunningTime="2026-01-31 07:55:10.997271336 +0000 UTC m=+1142.851157705" Jan 31 07:55:11 crc kubenswrapper[4826]: I0131 07:55:11.373056 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-l4j5z" Jan 31 07:55:11 crc kubenswrapper[4826]: I0131 07:55:11.545115 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8baa3438-0c11-4a8c-b397-85247a6252c1-config-data\") pod \"8baa3438-0c11-4a8c-b397-85247a6252c1\" (UID: \"8baa3438-0c11-4a8c-b397-85247a6252c1\") " Jan 31 07:55:11 crc kubenswrapper[4826]: I0131 07:55:11.545227 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ss6bp\" (UniqueName: \"kubernetes.io/projected/8baa3438-0c11-4a8c-b397-85247a6252c1-kube-api-access-ss6bp\") pod \"8baa3438-0c11-4a8c-b397-85247a6252c1\" (UID: \"8baa3438-0c11-4a8c-b397-85247a6252c1\") " Jan 31 07:55:11 crc kubenswrapper[4826]: I0131 07:55:11.545538 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8baa3438-0c11-4a8c-b397-85247a6252c1-scripts\") pod \"8baa3438-0c11-4a8c-b397-85247a6252c1\" (UID: \"8baa3438-0c11-4a8c-b397-85247a6252c1\") " Jan 31 07:55:11 crc kubenswrapper[4826]: I0131 07:55:11.545578 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8baa3438-0c11-4a8c-b397-85247a6252c1-combined-ca-bundle\") pod \"8baa3438-0c11-4a8c-b397-85247a6252c1\" (UID: \"8baa3438-0c11-4a8c-b397-85247a6252c1\") " Jan 31 07:55:11 crc kubenswrapper[4826]: I0131 07:55:11.554049 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8baa3438-0c11-4a8c-b397-85247a6252c1-kube-api-access-ss6bp" (OuterVolumeSpecName: "kube-api-access-ss6bp") pod "8baa3438-0c11-4a8c-b397-85247a6252c1" (UID: "8baa3438-0c11-4a8c-b397-85247a6252c1"). InnerVolumeSpecName "kube-api-access-ss6bp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:55:11 crc kubenswrapper[4826]: I0131 07:55:11.554286 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8baa3438-0c11-4a8c-b397-85247a6252c1-scripts" (OuterVolumeSpecName: "scripts") pod "8baa3438-0c11-4a8c-b397-85247a6252c1" (UID: "8baa3438-0c11-4a8c-b397-85247a6252c1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:55:11 crc kubenswrapper[4826]: I0131 07:55:11.591822 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8baa3438-0c11-4a8c-b397-85247a6252c1-config-data" (OuterVolumeSpecName: "config-data") pod "8baa3438-0c11-4a8c-b397-85247a6252c1" (UID: "8baa3438-0c11-4a8c-b397-85247a6252c1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:55:11 crc kubenswrapper[4826]: I0131 07:55:11.594058 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8baa3438-0c11-4a8c-b397-85247a6252c1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8baa3438-0c11-4a8c-b397-85247a6252c1" (UID: "8baa3438-0c11-4a8c-b397-85247a6252c1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:55:11 crc kubenswrapper[4826]: I0131 07:55:11.648335 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8baa3438-0c11-4a8c-b397-85247a6252c1-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:55:11 crc kubenswrapper[4826]: I0131 07:55:11.648561 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8baa3438-0c11-4a8c-b397-85247a6252c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:55:11 crc kubenswrapper[4826]: I0131 07:55:11.648783 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8baa3438-0c11-4a8c-b397-85247a6252c1-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:55:11 crc kubenswrapper[4826]: I0131 07:55:11.648900 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ss6bp\" (UniqueName: \"kubernetes.io/projected/8baa3438-0c11-4a8c-b397-85247a6252c1-kube-api-access-ss6bp\") on node \"crc\" DevicePath \"\"" Jan 31 07:55:11 crc kubenswrapper[4826]: I0131 07:55:11.958533 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-l4j5z" event={"ID":"8baa3438-0c11-4a8c-b397-85247a6252c1","Type":"ContainerDied","Data":"19476f6cfd7d2e29c3fca05a9d87137e6630ae8c126f7bd45814672b89d80413"} Jan 31 07:55:11 crc kubenswrapper[4826]: I0131 07:55:11.960546 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19476f6cfd7d2e29c3fca05a9d87137e6630ae8c126f7bd45814672b89d80413" Jan 31 07:55:11 crc kubenswrapper[4826]: I0131 07:55:11.958560 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-l4j5z" Jan 31 07:55:12 crc kubenswrapper[4826]: I0131 07:55:12.067922 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 31 07:55:12 crc kubenswrapper[4826]: E0131 07:55:12.068289 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8baa3438-0c11-4a8c-b397-85247a6252c1" containerName="nova-cell0-conductor-db-sync" Jan 31 07:55:12 crc kubenswrapper[4826]: I0131 07:55:12.068307 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="8baa3438-0c11-4a8c-b397-85247a6252c1" containerName="nova-cell0-conductor-db-sync" Jan 31 07:55:12 crc kubenswrapper[4826]: I0131 07:55:12.068466 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="8baa3438-0c11-4a8c-b397-85247a6252c1" containerName="nova-cell0-conductor-db-sync" Jan 31 07:55:12 crc kubenswrapper[4826]: I0131 07:55:12.068998 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 31 07:55:12 crc kubenswrapper[4826]: I0131 07:55:12.070948 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 31 07:55:12 crc kubenswrapper[4826]: I0131 07:55:12.071269 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-lr5gx" Jan 31 07:55:12 crc kubenswrapper[4826]: I0131 07:55:12.084025 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 31 07:55:12 crc kubenswrapper[4826]: I0131 07:55:12.156873 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b210b6f-8e41-4893-9459-2668e1eb96a7-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"8b210b6f-8e41-4893-9459-2668e1eb96a7\") " pod="openstack/nova-cell0-conductor-0" Jan 31 07:55:12 crc kubenswrapper[4826]: I0131 07:55:12.156944 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b210b6f-8e41-4893-9459-2668e1eb96a7-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"8b210b6f-8e41-4893-9459-2668e1eb96a7\") " pod="openstack/nova-cell0-conductor-0" Jan 31 07:55:12 crc kubenswrapper[4826]: I0131 07:55:12.157262 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkqdq\" (UniqueName: \"kubernetes.io/projected/8b210b6f-8e41-4893-9459-2668e1eb96a7-kube-api-access-qkqdq\") pod \"nova-cell0-conductor-0\" (UID: \"8b210b6f-8e41-4893-9459-2668e1eb96a7\") " pod="openstack/nova-cell0-conductor-0" Jan 31 07:55:12 crc kubenswrapper[4826]: I0131 07:55:12.258818 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b210b6f-8e41-4893-9459-2668e1eb96a7-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"8b210b6f-8e41-4893-9459-2668e1eb96a7\") " pod="openstack/nova-cell0-conductor-0" Jan 31 07:55:12 crc kubenswrapper[4826]: I0131 07:55:12.258872 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b210b6f-8e41-4893-9459-2668e1eb96a7-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"8b210b6f-8e41-4893-9459-2668e1eb96a7\") " pod="openstack/nova-cell0-conductor-0" Jan 31 07:55:12 crc kubenswrapper[4826]: I0131 07:55:12.258949 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkqdq\" (UniqueName: \"kubernetes.io/projected/8b210b6f-8e41-4893-9459-2668e1eb96a7-kube-api-access-qkqdq\") pod \"nova-cell0-conductor-0\" (UID: \"8b210b6f-8e41-4893-9459-2668e1eb96a7\") " pod="openstack/nova-cell0-conductor-0" Jan 31 07:55:12 crc kubenswrapper[4826]: I0131 07:55:12.264308 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b210b6f-8e41-4893-9459-2668e1eb96a7-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"8b210b6f-8e41-4893-9459-2668e1eb96a7\") " pod="openstack/nova-cell0-conductor-0" Jan 31 07:55:12 crc kubenswrapper[4826]: I0131 07:55:12.264902 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b210b6f-8e41-4893-9459-2668e1eb96a7-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"8b210b6f-8e41-4893-9459-2668e1eb96a7\") " pod="openstack/nova-cell0-conductor-0" Jan 31 07:55:12 crc kubenswrapper[4826]: I0131 07:55:12.281165 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkqdq\" (UniqueName: \"kubernetes.io/projected/8b210b6f-8e41-4893-9459-2668e1eb96a7-kube-api-access-qkqdq\") pod \"nova-cell0-conductor-0\" (UID: \"8b210b6f-8e41-4893-9459-2668e1eb96a7\") " pod="openstack/nova-cell0-conductor-0" Jan 31 07:55:12 crc kubenswrapper[4826]: I0131 07:55:12.388057 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 31 07:55:12 crc kubenswrapper[4826]: W0131 07:55:12.826897 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b210b6f_8e41_4893_9459_2668e1eb96a7.slice/crio-6c86942c97eed18765243f314aaf876892ab743d766dc92ec4a1fa0ea69336a5 WatchSource:0}: Error finding container 6c86942c97eed18765243f314aaf876892ab743d766dc92ec4a1fa0ea69336a5: Status 404 returned error can't find the container with id 6c86942c97eed18765243f314aaf876892ab743d766dc92ec4a1fa0ea69336a5 Jan 31 07:55:12 crc kubenswrapper[4826]: I0131 07:55:12.831813 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 31 07:55:12 crc kubenswrapper[4826]: I0131 07:55:12.968474 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"8b210b6f-8e41-4893-9459-2668e1eb96a7","Type":"ContainerStarted","Data":"6c86942c97eed18765243f314aaf876892ab743d766dc92ec4a1fa0ea69336a5"} Jan 31 07:55:13 crc kubenswrapper[4826]: I0131 07:55:13.977138 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"8b210b6f-8e41-4893-9459-2668e1eb96a7","Type":"ContainerStarted","Data":"da3b6a98d4a1aeca16ce4d1aeaad1fcc49b307145259ed52a6e474c198484cc3"} Jan 31 07:55:13 crc kubenswrapper[4826]: I0131 07:55:13.977513 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 31 07:55:13 crc kubenswrapper[4826]: I0131 07:55:13.997614 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=1.997598134 podStartE2EDuration="1.997598134s" podCreationTimestamp="2026-01-31 07:55:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:55:13.992217732 +0000 UTC m=+1145.846104091" watchObservedRunningTime="2026-01-31 07:55:13.997598134 +0000 UTC m=+1145.851484493" Jan 31 07:55:22 crc kubenswrapper[4826]: I0131 07:55:22.412546 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.257579 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-vlfm5"] Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.258773 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-vlfm5" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.261075 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.261262 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.267890 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-vlfm5"] Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.366038 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-424zz\" (UniqueName: \"kubernetes.io/projected/883150ac-2f32-44c0-af19-2d5b94f385eb-kube-api-access-424zz\") pod \"nova-cell0-cell-mapping-vlfm5\" (UID: \"883150ac-2f32-44c0-af19-2d5b94f385eb\") " pod="openstack/nova-cell0-cell-mapping-vlfm5" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.366249 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/883150ac-2f32-44c0-af19-2d5b94f385eb-config-data\") pod \"nova-cell0-cell-mapping-vlfm5\" (UID: \"883150ac-2f32-44c0-af19-2d5b94f385eb\") " pod="openstack/nova-cell0-cell-mapping-vlfm5" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.366278 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/883150ac-2f32-44c0-af19-2d5b94f385eb-scripts\") pod \"nova-cell0-cell-mapping-vlfm5\" (UID: \"883150ac-2f32-44c0-af19-2d5b94f385eb\") " pod="openstack/nova-cell0-cell-mapping-vlfm5" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.366296 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/883150ac-2f32-44c0-af19-2d5b94f385eb-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-vlfm5\" (UID: \"883150ac-2f32-44c0-af19-2d5b94f385eb\") " pod="openstack/nova-cell0-cell-mapping-vlfm5" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.422584 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.424175 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.430513 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.445090 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.466097 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.467879 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.468458 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/883150ac-2f32-44c0-af19-2d5b94f385eb-config-data\") pod \"nova-cell0-cell-mapping-vlfm5\" (UID: \"883150ac-2f32-44c0-af19-2d5b94f385eb\") " pod="openstack/nova-cell0-cell-mapping-vlfm5" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.468496 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/883150ac-2f32-44c0-af19-2d5b94f385eb-scripts\") pod \"nova-cell0-cell-mapping-vlfm5\" (UID: \"883150ac-2f32-44c0-af19-2d5b94f385eb\") " pod="openstack/nova-cell0-cell-mapping-vlfm5" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.468524 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/883150ac-2f32-44c0-af19-2d5b94f385eb-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-vlfm5\" (UID: \"883150ac-2f32-44c0-af19-2d5b94f385eb\") " pod="openstack/nova-cell0-cell-mapping-vlfm5" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.468594 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-424zz\" (UniqueName: \"kubernetes.io/projected/883150ac-2f32-44c0-af19-2d5b94f385eb-kube-api-access-424zz\") pod \"nova-cell0-cell-mapping-vlfm5\" (UID: \"883150ac-2f32-44c0-af19-2d5b94f385eb\") " pod="openstack/nova-cell0-cell-mapping-vlfm5" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.469637 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.481444 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/883150ac-2f32-44c0-af19-2d5b94f385eb-scripts\") pod \"nova-cell0-cell-mapping-vlfm5\" (UID: \"883150ac-2f32-44c0-af19-2d5b94f385eb\") " pod="openstack/nova-cell0-cell-mapping-vlfm5" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.492720 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/883150ac-2f32-44c0-af19-2d5b94f385eb-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-vlfm5\" (UID: \"883150ac-2f32-44c0-af19-2d5b94f385eb\") " pod="openstack/nova-cell0-cell-mapping-vlfm5" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.506738 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/883150ac-2f32-44c0-af19-2d5b94f385eb-config-data\") pod \"nova-cell0-cell-mapping-vlfm5\" (UID: \"883150ac-2f32-44c0-af19-2d5b94f385eb\") " pod="openstack/nova-cell0-cell-mapping-vlfm5" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.511726 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-424zz\" (UniqueName: \"kubernetes.io/projected/883150ac-2f32-44c0-af19-2d5b94f385eb-kube-api-access-424zz\") pod \"nova-cell0-cell-mapping-vlfm5\" (UID: \"883150ac-2f32-44c0-af19-2d5b94f385eb\") " pod="openstack/nova-cell0-cell-mapping-vlfm5" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.527531 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.564931 4826 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.566269 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.570117 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a92c32b-6fbb-4dc2-ba1f-bd86d3e7ae6d-config-data\") pod \"nova-scheduler-0\" (UID: \"0a92c32b-6fbb-4dc2-ba1f-bd86d3e7ae6d\") " pod="openstack/nova-scheduler-0" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.570194 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7cjg\" (UniqueName: \"kubernetes.io/projected/998defd1-79f9-46b1-8b4f-00b2d99b97c2-kube-api-access-x7cjg\") pod \"nova-api-0\" (UID: \"998defd1-79f9-46b1-8b4f-00b2d99b97c2\") " pod="openstack/nova-api-0" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.570234 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a92c32b-6fbb-4dc2-ba1f-bd86d3e7ae6d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0a92c32b-6fbb-4dc2-ba1f-bd86d3e7ae6d\") " pod="openstack/nova-scheduler-0" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.570290 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/998defd1-79f9-46b1-8b4f-00b2d99b97c2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"998defd1-79f9-46b1-8b4f-00b2d99b97c2\") " pod="openstack/nova-api-0" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.570345 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/998defd1-79f9-46b1-8b4f-00b2d99b97c2-config-data\") pod \"nova-api-0\" (UID: \"998defd1-79f9-46b1-8b4f-00b2d99b97c2\") " pod="openstack/nova-api-0" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.570365 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7btr4\" (UniqueName: \"kubernetes.io/projected/0a92c32b-6fbb-4dc2-ba1f-bd86d3e7ae6d-kube-api-access-7btr4\") pod \"nova-scheduler-0\" (UID: \"0a92c32b-6fbb-4dc2-ba1f-bd86d3e7ae6d\") " pod="openstack/nova-scheduler-0" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.570392 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/998defd1-79f9-46b1-8b4f-00b2d99b97c2-logs\") pod \"nova-api-0\" (UID: \"998defd1-79f9-46b1-8b4f-00b2d99b97c2\") " pod="openstack/nova-api-0" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.576784 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.579636 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.585018 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-vlfm5" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.674373 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a92c32b-6fbb-4dc2-ba1f-bd86d3e7ae6d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0a92c32b-6fbb-4dc2-ba1f-bd86d3e7ae6d\") " pod="openstack/nova-scheduler-0" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.675202 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/998defd1-79f9-46b1-8b4f-00b2d99b97c2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"998defd1-79f9-46b1-8b4f-00b2d99b97c2\") " pod="openstack/nova-api-0" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.675301 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/998defd1-79f9-46b1-8b4f-00b2d99b97c2-config-data\") pod \"nova-api-0\" (UID: \"998defd1-79f9-46b1-8b4f-00b2d99b97c2\") " pod="openstack/nova-api-0" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.675330 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7btr4\" (UniqueName: \"kubernetes.io/projected/0a92c32b-6fbb-4dc2-ba1f-bd86d3e7ae6d-kube-api-access-7btr4\") pod \"nova-scheduler-0\" (UID: \"0a92c32b-6fbb-4dc2-ba1f-bd86d3e7ae6d\") " pod="openstack/nova-scheduler-0" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.675358 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/998defd1-79f9-46b1-8b4f-00b2d99b97c2-logs\") pod \"nova-api-0\" (UID: \"998defd1-79f9-46b1-8b4f-00b2d99b97c2\") " pod="openstack/nova-api-0" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.675399 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm64f\" (UniqueName: \"kubernetes.io/projected/2c0bb381-a294-472b-bd7c-db1a23b96118-kube-api-access-vm64f\") pod \"nova-cell1-novncproxy-0\" (UID: \"2c0bb381-a294-472b-bd7c-db1a23b96118\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.675490 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a92c32b-6fbb-4dc2-ba1f-bd86d3e7ae6d-config-data\") pod \"nova-scheduler-0\" (UID: \"0a92c32b-6fbb-4dc2-ba1f-bd86d3e7ae6d\") " pod="openstack/nova-scheduler-0" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.675558 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c0bb381-a294-472b-bd7c-db1a23b96118-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2c0bb381-a294-472b-bd7c-db1a23b96118\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.675588 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c0bb381-a294-472b-bd7c-db1a23b96118-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2c0bb381-a294-472b-bd7c-db1a23b96118\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.675623 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-x7cjg\" (UniqueName: \"kubernetes.io/projected/998defd1-79f9-46b1-8b4f-00b2d99b97c2-kube-api-access-x7cjg\") pod \"nova-api-0\" (UID: \"998defd1-79f9-46b1-8b4f-00b2d99b97c2\") " pod="openstack/nova-api-0" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.681866 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/998defd1-79f9-46b1-8b4f-00b2d99b97c2-logs\") pod \"nova-api-0\" (UID: \"998defd1-79f9-46b1-8b4f-00b2d99b97c2\") " pod="openstack/nova-api-0" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.688881 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/998defd1-79f9-46b1-8b4f-00b2d99b97c2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"998defd1-79f9-46b1-8b4f-00b2d99b97c2\") " pod="openstack/nova-api-0" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.693186 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a92c32b-6fbb-4dc2-ba1f-bd86d3e7ae6d-config-data\") pod \"nova-scheduler-0\" (UID: \"0a92c32b-6fbb-4dc2-ba1f-bd86d3e7ae6d\") " pod="openstack/nova-scheduler-0" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.733964 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/998defd1-79f9-46b1-8b4f-00b2d99b97c2-config-data\") pod \"nova-api-0\" (UID: \"998defd1-79f9-46b1-8b4f-00b2d99b97c2\") " pod="openstack/nova-api-0" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.737073 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7cjg\" (UniqueName: \"kubernetes.io/projected/998defd1-79f9-46b1-8b4f-00b2d99b97c2-kube-api-access-x7cjg\") pod \"nova-api-0\" (UID: \"998defd1-79f9-46b1-8b4f-00b2d99b97c2\") " pod="openstack/nova-api-0" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.738310 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a92c32b-6fbb-4dc2-ba1f-bd86d3e7ae6d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0a92c32b-6fbb-4dc2-ba1f-bd86d3e7ae6d\") " pod="openstack/nova-scheduler-0" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.738557 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.740211 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.749306 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.751350 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.760610 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7btr4\" (UniqueName: \"kubernetes.io/projected/0a92c32b-6fbb-4dc2-ba1f-bd86d3e7ae6d-kube-api-access-7btr4\") pod \"nova-scheduler-0\" (UID: \"0a92c32b-6fbb-4dc2-ba1f-bd86d3e7ae6d\") " pod="openstack/nova-scheduler-0" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.767156 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.777249 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trtg9\" (UniqueName: \"kubernetes.io/projected/c58d3b6c-1a69-42d8-ae41-e4346094e859-kube-api-access-trtg9\") pod \"nova-metadata-0\" (UID: \"c58d3b6c-1a69-42d8-ae41-e4346094e859\") " pod="openstack/nova-metadata-0" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.777336 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm64f\" (UniqueName: \"kubernetes.io/projected/2c0bb381-a294-472b-bd7c-db1a23b96118-kube-api-access-vm64f\") pod \"nova-cell1-novncproxy-0\" (UID: \"2c0bb381-a294-472b-bd7c-db1a23b96118\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.777366 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c58d3b6c-1a69-42d8-ae41-e4346094e859-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c58d3b6c-1a69-42d8-ae41-e4346094e859\") " pod="openstack/nova-metadata-0" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.777429 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c0bb381-a294-472b-bd7c-db1a23b96118-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2c0bb381-a294-472b-bd7c-db1a23b96118\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.777449 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c0bb381-a294-472b-bd7c-db1a23b96118-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2c0bb381-a294-472b-bd7c-db1a23b96118\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.777468 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c58d3b6c-1a69-42d8-ae41-e4346094e859-config-data\") pod \"nova-metadata-0\" (UID: \"c58d3b6c-1a69-42d8-ae41-e4346094e859\") " pod="openstack/nova-metadata-0" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.777488 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c58d3b6c-1a69-42d8-ae41-e4346094e859-logs\") pod \"nova-metadata-0\" (UID: \"c58d3b6c-1a69-42d8-ae41-e4346094e859\") " pod="openstack/nova-metadata-0" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.797221 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c0bb381-a294-472b-bd7c-db1a23b96118-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"2c0bb381-a294-472b-bd7c-db1a23b96118\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.799234 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c0bb381-a294-472b-bd7c-db1a23b96118-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2c0bb381-a294-472b-bd7c-db1a23b96118\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.813732 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm64f\" (UniqueName: \"kubernetes.io/projected/2c0bb381-a294-472b-bd7c-db1a23b96118-kube-api-access-vm64f\") pod \"nova-cell1-novncproxy-0\" (UID: \"2c0bb381-a294-472b-bd7c-db1a23b96118\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.830649 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-svkk9"] Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.832156 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-svkk9" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.841381 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-svkk9"] Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.878940 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/281ee141-2543-4d23-a1d6-cb0d972a05e6-dns-svc\") pod \"dnsmasq-dns-8b8cf6657-svkk9\" (UID: \"281ee141-2543-4d23-a1d6-cb0d972a05e6\") " pod="openstack/dnsmasq-dns-8b8cf6657-svkk9" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.879030 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4tpf\" (UniqueName: \"kubernetes.io/projected/281ee141-2543-4d23-a1d6-cb0d972a05e6-kube-api-access-q4tpf\") pod \"dnsmasq-dns-8b8cf6657-svkk9\" (UID: \"281ee141-2543-4d23-a1d6-cb0d972a05e6\") " pod="openstack/dnsmasq-dns-8b8cf6657-svkk9" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.879060 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c58d3b6c-1a69-42d8-ae41-e4346094e859-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c58d3b6c-1a69-42d8-ae41-e4346094e859\") " pod="openstack/nova-metadata-0" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.879083 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/281ee141-2543-4d23-a1d6-cb0d972a05e6-ovsdbserver-nb\") pod \"dnsmasq-dns-8b8cf6657-svkk9\" (UID: \"281ee141-2543-4d23-a1d6-cb0d972a05e6\") " pod="openstack/dnsmasq-dns-8b8cf6657-svkk9" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.879127 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/281ee141-2543-4d23-a1d6-cb0d972a05e6-config\") pod \"dnsmasq-dns-8b8cf6657-svkk9\" (UID: \"281ee141-2543-4d23-a1d6-cb0d972a05e6\") " pod="openstack/dnsmasq-dns-8b8cf6657-svkk9" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.879182 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c58d3b6c-1a69-42d8-ae41-e4346094e859-config-data\") pod \"nova-metadata-0\" (UID: \"c58d3b6c-1a69-42d8-ae41-e4346094e859\") " pod="openstack/nova-metadata-0" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.879204 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c58d3b6c-1a69-42d8-ae41-e4346094e859-logs\") pod \"nova-metadata-0\" (UID: \"c58d3b6c-1a69-42d8-ae41-e4346094e859\") " pod="openstack/nova-metadata-0" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.879223 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/281ee141-2543-4d23-a1d6-cb0d972a05e6-ovsdbserver-sb\") pod \"dnsmasq-dns-8b8cf6657-svkk9\" (UID: \"281ee141-2543-4d23-a1d6-cb0d972a05e6\") " pod="openstack/dnsmasq-dns-8b8cf6657-svkk9" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.879258 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trtg9\" (UniqueName: \"kubernetes.io/projected/c58d3b6c-1a69-42d8-ae41-e4346094e859-kube-api-access-trtg9\") pod \"nova-metadata-0\" (UID: \"c58d3b6c-1a69-42d8-ae41-e4346094e859\") " pod="openstack/nova-metadata-0" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.885280 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c58d3b6c-1a69-42d8-ae41-e4346094e859-logs\") pod \"nova-metadata-0\" (UID: \"c58d3b6c-1a69-42d8-ae41-e4346094e859\") " pod="openstack/nova-metadata-0" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.887868 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c58d3b6c-1a69-42d8-ae41-e4346094e859-config-data\") pod \"nova-metadata-0\" (UID: \"c58d3b6c-1a69-42d8-ae41-e4346094e859\") " pod="openstack/nova-metadata-0" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.888406 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c58d3b6c-1a69-42d8-ae41-e4346094e859-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c58d3b6c-1a69-42d8-ae41-e4346094e859\") " pod="openstack/nova-metadata-0" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.905940 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trtg9\" (UniqueName: \"kubernetes.io/projected/c58d3b6c-1a69-42d8-ae41-e4346094e859-kube-api-access-trtg9\") pod \"nova-metadata-0\" (UID: \"c58d3b6c-1a69-42d8-ae41-e4346094e859\") " pod="openstack/nova-metadata-0" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.973583 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.984104 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4tpf\" (UniqueName: \"kubernetes.io/projected/281ee141-2543-4d23-a1d6-cb0d972a05e6-kube-api-access-q4tpf\") pod \"dnsmasq-dns-8b8cf6657-svkk9\" (UID: \"281ee141-2543-4d23-a1d6-cb0d972a05e6\") " pod="openstack/dnsmasq-dns-8b8cf6657-svkk9" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.984169 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/281ee141-2543-4d23-a1d6-cb0d972a05e6-ovsdbserver-nb\") pod \"dnsmasq-dns-8b8cf6657-svkk9\" (UID: \"281ee141-2543-4d23-a1d6-cb0d972a05e6\") " pod="openstack/dnsmasq-dns-8b8cf6657-svkk9" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.984228 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/281ee141-2543-4d23-a1d6-cb0d972a05e6-config\") pod \"dnsmasq-dns-8b8cf6657-svkk9\" (UID: \"281ee141-2543-4d23-a1d6-cb0d972a05e6\") " pod="openstack/dnsmasq-dns-8b8cf6657-svkk9" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.984276 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/281ee141-2543-4d23-a1d6-cb0d972a05e6-ovsdbserver-sb\") pod \"dnsmasq-dns-8b8cf6657-svkk9\" (UID: \"281ee141-2543-4d23-a1d6-cb0d972a05e6\") " pod="openstack/dnsmasq-dns-8b8cf6657-svkk9" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.984356 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/281ee141-2543-4d23-a1d6-cb0d972a05e6-dns-svc\") pod \"dnsmasq-dns-8b8cf6657-svkk9\" (UID: \"281ee141-2543-4d23-a1d6-cb0d972a05e6\") " pod="openstack/dnsmasq-dns-8b8cf6657-svkk9" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.985567 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/281ee141-2543-4d23-a1d6-cb0d972a05e6-ovsdbserver-nb\") pod \"dnsmasq-dns-8b8cf6657-svkk9\" (UID: \"281ee141-2543-4d23-a1d6-cb0d972a05e6\") " pod="openstack/dnsmasq-dns-8b8cf6657-svkk9" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.985764 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/281ee141-2543-4d23-a1d6-cb0d972a05e6-ovsdbserver-sb\") pod \"dnsmasq-dns-8b8cf6657-svkk9\" (UID: \"281ee141-2543-4d23-a1d6-cb0d972a05e6\") " pod="openstack/dnsmasq-dns-8b8cf6657-svkk9" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.985999 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/281ee141-2543-4d23-a1d6-cb0d972a05e6-config\") pod \"dnsmasq-dns-8b8cf6657-svkk9\" (UID: \"281ee141-2543-4d23-a1d6-cb0d972a05e6\") " pod="openstack/dnsmasq-dns-8b8cf6657-svkk9" Jan 31 07:55:23 crc kubenswrapper[4826]: I0131 07:55:23.986221 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/281ee141-2543-4d23-a1d6-cb0d972a05e6-dns-svc\") pod \"dnsmasq-dns-8b8cf6657-svkk9\" (UID: \"281ee141-2543-4d23-a1d6-cb0d972a05e6\") " pod="openstack/dnsmasq-dns-8b8cf6657-svkk9" Jan 31 07:55:24 crc kubenswrapper[4826]: I0131 07:55:24.006514 4826 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4tpf\" (UniqueName: \"kubernetes.io/projected/281ee141-2543-4d23-a1d6-cb0d972a05e6-kube-api-access-q4tpf\") pod \"dnsmasq-dns-8b8cf6657-svkk9\" (UID: \"281ee141-2543-4d23-a1d6-cb0d972a05e6\") " pod="openstack/dnsmasq-dns-8b8cf6657-svkk9" Jan 31 07:55:24 crc kubenswrapper[4826]: I0131 07:55:24.111292 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 31 07:55:24 crc kubenswrapper[4826]: I0131 07:55:24.121879 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 07:55:24 crc kubenswrapper[4826]: I0131 07:55:24.163563 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-svkk9" Jan 31 07:55:24 crc kubenswrapper[4826]: I0131 07:55:24.304020 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-vlfm5"] Jan 31 07:55:24 crc kubenswrapper[4826]: W0131 07:55:24.320748 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod883150ac_2f32_44c0_af19_2d5b94f385eb.slice/crio-7c080556f6731d1aa04ba9ca4a7ab92977b7e840d8658139de7076ebb012571d WatchSource:0}: Error finding container 7c080556f6731d1aa04ba9ca4a7ab92977b7e840d8658139de7076ebb012571d: Status 404 returned error can't find the container with id 7c080556f6731d1aa04ba9ca4a7ab92977b7e840d8658139de7076ebb012571d Jan 31 07:55:24 crc kubenswrapper[4826]: I0131 07:55:24.351025 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-llhxq"] Jan 31 07:55:24 crc kubenswrapper[4826]: I0131 07:55:24.352296 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-llhxq" Jan 31 07:55:24 crc kubenswrapper[4826]: I0131 07:55:24.356749 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 31 07:55:24 crc kubenswrapper[4826]: I0131 07:55:24.357570 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 31 07:55:24 crc kubenswrapper[4826]: I0131 07:55:24.373213 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-llhxq"] Jan 31 07:55:24 crc kubenswrapper[4826]: I0131 07:55:24.397372 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a5eb55e-27a4-4e01-b087-590ba6ff5421-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-llhxq\" (UID: \"2a5eb55e-27a4-4e01-b087-590ba6ff5421\") " pod="openstack/nova-cell1-conductor-db-sync-llhxq" Jan 31 07:55:24 crc kubenswrapper[4826]: I0131 07:55:24.397406 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a5eb55e-27a4-4e01-b087-590ba6ff5421-scripts\") pod \"nova-cell1-conductor-db-sync-llhxq\" (UID: \"2a5eb55e-27a4-4e01-b087-590ba6ff5421\") " pod="openstack/nova-cell1-conductor-db-sync-llhxq" Jan 31 07:55:24 crc kubenswrapper[4826]: I0131 07:55:24.397431 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx6kc\" (UniqueName: \"kubernetes.io/projected/2a5eb55e-27a4-4e01-b087-590ba6ff5421-kube-api-access-qx6kc\") pod \"nova-cell1-conductor-db-sync-llhxq\" (UID: \"2a5eb55e-27a4-4e01-b087-590ba6ff5421\") " pod="openstack/nova-cell1-conductor-db-sync-llhxq" Jan 31 07:55:24 crc kubenswrapper[4826]: I0131 07:55:24.397449 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a5eb55e-27a4-4e01-b087-590ba6ff5421-config-data\") pod \"nova-cell1-conductor-db-sync-llhxq\" (UID: \"2a5eb55e-27a4-4e01-b087-590ba6ff5421\") " pod="openstack/nova-cell1-conductor-db-sync-llhxq" Jan 31 07:55:24 crc kubenswrapper[4826]: I0131 07:55:24.408235 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 07:55:24 crc kubenswrapper[4826]: I0131 07:55:24.499440 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a5eb55e-27a4-4e01-b087-590ba6ff5421-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-llhxq\" (UID: \"2a5eb55e-27a4-4e01-b087-590ba6ff5421\") " pod="openstack/nova-cell1-conductor-db-sync-llhxq" Jan 31 07:55:24 crc kubenswrapper[4826]: I0131 07:55:24.500236 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a5eb55e-27a4-4e01-b087-590ba6ff5421-scripts\") pod \"nova-cell1-conductor-db-sync-llhxq\" (UID: \"2a5eb55e-27a4-4e01-b087-590ba6ff5421\") " pod="openstack/nova-cell1-conductor-db-sync-llhxq" Jan 31 07:55:24 crc kubenswrapper[4826]: I0131 07:55:24.500281 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx6kc\" (UniqueName: \"kubernetes.io/projected/2a5eb55e-27a4-4e01-b087-590ba6ff5421-kube-api-access-qx6kc\") pod \"nova-cell1-conductor-db-sync-llhxq\" (UID: 
\"2a5eb55e-27a4-4e01-b087-590ba6ff5421\") " pod="openstack/nova-cell1-conductor-db-sync-llhxq" Jan 31 07:55:24 crc kubenswrapper[4826]: I0131 07:55:24.500311 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a5eb55e-27a4-4e01-b087-590ba6ff5421-config-data\") pod \"nova-cell1-conductor-db-sync-llhxq\" (UID: \"2a5eb55e-27a4-4e01-b087-590ba6ff5421\") " pod="openstack/nova-cell1-conductor-db-sync-llhxq" Jan 31 07:55:24 crc kubenswrapper[4826]: I0131 07:55:24.505492 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a5eb55e-27a4-4e01-b087-590ba6ff5421-scripts\") pod \"nova-cell1-conductor-db-sync-llhxq\" (UID: \"2a5eb55e-27a4-4e01-b087-590ba6ff5421\") " pod="openstack/nova-cell1-conductor-db-sync-llhxq" Jan 31 07:55:24 crc kubenswrapper[4826]: I0131 07:55:24.524692 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx6kc\" (UniqueName: \"kubernetes.io/projected/2a5eb55e-27a4-4e01-b087-590ba6ff5421-kube-api-access-qx6kc\") pod \"nova-cell1-conductor-db-sync-llhxq\" (UID: \"2a5eb55e-27a4-4e01-b087-590ba6ff5421\") " pod="openstack/nova-cell1-conductor-db-sync-llhxq" Jan 31 07:55:24 crc kubenswrapper[4826]: I0131 07:55:24.526688 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a5eb55e-27a4-4e01-b087-590ba6ff5421-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-llhxq\" (UID: \"2a5eb55e-27a4-4e01-b087-590ba6ff5421\") " pod="openstack/nova-cell1-conductor-db-sync-llhxq" Jan 31 07:55:24 crc kubenswrapper[4826]: I0131 07:55:24.527221 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a5eb55e-27a4-4e01-b087-590ba6ff5421-config-data\") pod \"nova-cell1-conductor-db-sync-llhxq\" (UID: \"2a5eb55e-27a4-4e01-b087-590ba6ff5421\") " pod="openstack/nova-cell1-conductor-db-sync-llhxq" Jan 31 07:55:24 crc kubenswrapper[4826]: I0131 07:55:24.541286 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 07:55:24 crc kubenswrapper[4826]: I0131 07:55:24.676799 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 07:55:24 crc kubenswrapper[4826]: I0131 07:55:24.689492 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-llhxq" Jan 31 07:55:24 crc kubenswrapper[4826]: I0131 07:55:24.850383 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 07:55:24 crc kubenswrapper[4826]: I0131 07:55:24.867116 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-svkk9"] Jan 31 07:55:25 crc kubenswrapper[4826]: I0131 07:55:25.098332 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c58d3b6c-1a69-42d8-ae41-e4346094e859","Type":"ContainerStarted","Data":"a0ea90a7c76784c395e5adbb5459e168ef0af37708a80f215b091934813c4d06"} Jan 31 07:55:25 crc kubenswrapper[4826]: I0131 07:55:25.099444 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0a92c32b-6fbb-4dc2-ba1f-bd86d3e7ae6d","Type":"ContainerStarted","Data":"8673faa64e3d5a8b0e5d07f425e6d7ed3e1148949fa755a6d6e13fd572ed407f"} Jan 31 07:55:25 crc kubenswrapper[4826]: I0131 07:55:25.100876 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-vlfm5" event={"ID":"883150ac-2f32-44c0-af19-2d5b94f385eb","Type":"ContainerStarted","Data":"0b4199b6aa9ce1136e187c461a738802c4ef1857f1e254384144d7272aa059f7"} Jan 31 07:55:25 crc kubenswrapper[4826]: I0131 07:55:25.100915 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-vlfm5" event={"ID":"883150ac-2f32-44c0-af19-2d5b94f385eb","Type":"ContainerStarted","Data":"7c080556f6731d1aa04ba9ca4a7ab92977b7e840d8658139de7076ebb012571d"} Jan 31 07:55:25 crc kubenswrapper[4826]: I0131 07:55:25.130195 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"998defd1-79f9-46b1-8b4f-00b2d99b97c2","Type":"ContainerStarted","Data":"630cb3856c842fff81d92643e24dad2caf2361fd05f36da4bd979ff52cabf4b9"} Jan 31 07:55:25 crc kubenswrapper[4826]: I0131 07:55:25.132429 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-vlfm5" podStartSLOduration=2.132408261 podStartE2EDuration="2.132408261s" podCreationTimestamp="2026-01-31 07:55:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:55:25.129735115 +0000 UTC m=+1156.983621484" watchObservedRunningTime="2026-01-31 07:55:25.132408261 +0000 UTC m=+1156.986294620" Jan 31 07:55:25 crc kubenswrapper[4826]: I0131 07:55:25.143161 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-svkk9" event={"ID":"281ee141-2543-4d23-a1d6-cb0d972a05e6","Type":"ContainerStarted","Data":"d7c43781d09b2279b4480ec204bcae0a6e43192c077a52fb11d4fd0ca953b297"} Jan 31 07:55:25 crc kubenswrapper[4826]: I0131 07:55:25.155089 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2c0bb381-a294-472b-bd7c-db1a23b96118","Type":"ContainerStarted","Data":"99907ee1671ffa9de95b214233e266e04630de292617fc63a4565b5f122f055e"} Jan 31 07:55:25 crc kubenswrapper[4826]: I0131 07:55:25.227697 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-llhxq"] Jan 31 07:55:25 crc kubenswrapper[4826]: W0131 07:55:25.247837 4826 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a5eb55e_27a4_4e01_b087_590ba6ff5421.slice/crio-ef1bb759f3a7d310faad2e6d467ef563f55b0b69bfab17fc3e493b4f8ab42791 WatchSource:0}: Error finding container ef1bb759f3a7d310faad2e6d467ef563f55b0b69bfab17fc3e493b4f8ab42791: Status 404 returned error can't find the container with id ef1bb759f3a7d310faad2e6d467ef563f55b0b69bfab17fc3e493b4f8ab42791 Jan 31 07:55:26 crc kubenswrapper[4826]: I0131 07:55:26.172329 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-llhxq" event={"ID":"2a5eb55e-27a4-4e01-b087-590ba6ff5421","Type":"ContainerStarted","Data":"55a3ca03a956bc935cc2a85cc654217e356a65297f73ce3f8e6d29ed6b83bb20"} Jan 31 07:55:26 crc kubenswrapper[4826]: I0131 07:55:26.172832 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-llhxq" event={"ID":"2a5eb55e-27a4-4e01-b087-590ba6ff5421","Type":"ContainerStarted","Data":"ef1bb759f3a7d310faad2e6d467ef563f55b0b69bfab17fc3e493b4f8ab42791"} Jan 31 07:55:26 crc kubenswrapper[4826]: I0131 07:55:26.175453 4826 generic.go:334] "Generic (PLEG): container finished" podID="281ee141-2543-4d23-a1d6-cb0d972a05e6" containerID="5f43dcf5228f22d393c916b6e4eccf95640ab486a5358f2d81bb35267965e34c" exitCode=0 Jan 31 07:55:26 crc kubenswrapper[4826]: I0131 07:55:26.177718 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-svkk9" event={"ID":"281ee141-2543-4d23-a1d6-cb0d972a05e6","Type":"ContainerDied","Data":"5f43dcf5228f22d393c916b6e4eccf95640ab486a5358f2d81bb35267965e34c"} Jan 31 07:55:26 crc kubenswrapper[4826]: I0131 07:55:26.200680 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-llhxq" podStartSLOduration=2.20066208 podStartE2EDuration="2.20066208s" podCreationTimestamp="2026-01-31 07:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:55:26.189281297 +0000 UTC m=+1158.043167656" watchObservedRunningTime="2026-01-31 07:55:26.20066208 +0000 UTC m=+1158.054548429" Jan 31 07:55:26 crc kubenswrapper[4826]: I0131 07:55:26.990844 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 07:55:26 crc kubenswrapper[4826]: I0131 07:55:26.998320 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 07:55:29 crc kubenswrapper[4826]: I0131 07:55:29.205623 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-svkk9" event={"ID":"281ee141-2543-4d23-a1d6-cb0d972a05e6","Type":"ContainerStarted","Data":"14f0e168fdeb132d233c750e61674fe96ed151a5db2649dbf8391d760657d848"} Jan 31 07:55:29 crc kubenswrapper[4826]: I0131 07:55:29.206523 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8b8cf6657-svkk9" Jan 31 07:55:29 crc kubenswrapper[4826]: I0131 07:55:29.208257 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2c0bb381-a294-472b-bd7c-db1a23b96118","Type":"ContainerStarted","Data":"ea8915165064a4c3c900c95cd1e0d492ae7bfdb310b0cab003cf2db2e9053331"} Jan 31 07:55:29 crc kubenswrapper[4826]: I0131 07:55:29.208342 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="2c0bb381-a294-472b-bd7c-db1a23b96118" 
containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://ea8915165064a4c3c900c95cd1e0d492ae7bfdb310b0cab003cf2db2e9053331" gracePeriod=30 Jan 31 07:55:29 crc kubenswrapper[4826]: I0131 07:55:29.218986 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c58d3b6c-1a69-42d8-ae41-e4346094e859","Type":"ContainerStarted","Data":"d5423d1249c9dc96e4d2689d7be9aeecb52cea83e15b7d4df9d9ab9ec082b4d4"} Jan 31 07:55:29 crc kubenswrapper[4826]: I0131 07:55:29.219311 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c58d3b6c-1a69-42d8-ae41-e4346094e859","Type":"ContainerStarted","Data":"fbd08d4a0f2cee05c0633ab0aa8c819dfc962480fd077ff72485aae39c40fade"} Jan 31 07:55:29 crc kubenswrapper[4826]: I0131 07:55:29.219117 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c58d3b6c-1a69-42d8-ae41-e4346094e859" containerName="nova-metadata-metadata" containerID="cri-o://d5423d1249c9dc96e4d2689d7be9aeecb52cea83e15b7d4df9d9ab9ec082b4d4" gracePeriod=30 Jan 31 07:55:29 crc kubenswrapper[4826]: I0131 07:55:29.219068 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c58d3b6c-1a69-42d8-ae41-e4346094e859" containerName="nova-metadata-log" containerID="cri-o://fbd08d4a0f2cee05c0633ab0aa8c819dfc962480fd077ff72485aae39c40fade" gracePeriod=30 Jan 31 07:55:29 crc kubenswrapper[4826]: I0131 07:55:29.221596 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0a92c32b-6fbb-4dc2-ba1f-bd86d3e7ae6d","Type":"ContainerStarted","Data":"ed265d8295542c4f3fd1aec88201000a7f01ee6bc43547ee8bb8b7d2c1bc1b1f"} Jan 31 07:55:29 crc kubenswrapper[4826]: I0131 07:55:29.226498 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"998defd1-79f9-46b1-8b4f-00b2d99b97c2","Type":"ContainerStarted","Data":"1e53991d01968c0805db1522b47f5d927b04f0871f4e3e74b063cf47ec01dbe5"} Jan 31 07:55:29 crc kubenswrapper[4826]: I0131 07:55:29.226554 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"998defd1-79f9-46b1-8b4f-00b2d99b97c2","Type":"ContainerStarted","Data":"b748ce92079b87e2284f1bae2f4a521ded4a4b62bc7ea9a88cb02e25066bec17"} Jan 31 07:55:29 crc kubenswrapper[4826]: I0131 07:55:29.245686 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8b8cf6657-svkk9" podStartSLOduration=6.245662893 podStartE2EDuration="6.245662893s" podCreationTimestamp="2026-01-31 07:55:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:55:29.235224207 +0000 UTC m=+1161.089110596" watchObservedRunningTime="2026-01-31 07:55:29.245662893 +0000 UTC m=+1161.099549252" Jan 31 07:55:29 crc kubenswrapper[4826]: I0131 07:55:29.272047 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.053616254 podStartE2EDuration="6.2720297s" podCreationTimestamp="2026-01-31 07:55:23 +0000 UTC" firstStartedPulling="2026-01-31 07:55:24.88951964 +0000 UTC m=+1156.743405999" lastFinishedPulling="2026-01-31 07:55:28.107933086 +0000 UTC m=+1159.961819445" observedRunningTime="2026-01-31 07:55:29.266285267 +0000 UTC m=+1161.120171656" watchObservedRunningTime="2026-01-31 07:55:29.2720297 +0000 UTC m=+1161.125916059" Jan 31 07:55:29 
crc kubenswrapper[4826]: I0131 07:55:29.287806 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.58642964 podStartE2EDuration="6.287787286s" podCreationTimestamp="2026-01-31 07:55:23 +0000 UTC" firstStartedPulling="2026-01-31 07:55:24.408948487 +0000 UTC m=+1156.262834836" lastFinishedPulling="2026-01-31 07:55:28.110306123 +0000 UTC m=+1159.964192482" observedRunningTime="2026-01-31 07:55:29.28755948 +0000 UTC m=+1161.141445839" watchObservedRunningTime="2026-01-31 07:55:29.287787286 +0000 UTC m=+1161.141673645" Jan 31 07:55:29 crc kubenswrapper[4826]: I0131 07:55:29.307999 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.762512598 podStartE2EDuration="6.307956377s" podCreationTimestamp="2026-01-31 07:55:23 +0000 UTC" firstStartedPulling="2026-01-31 07:55:24.564566416 +0000 UTC m=+1156.418452775" lastFinishedPulling="2026-01-31 07:55:28.110010195 +0000 UTC m=+1159.963896554" observedRunningTime="2026-01-31 07:55:29.30205103 +0000 UTC m=+1161.155937379" watchObservedRunningTime="2026-01-31 07:55:29.307956377 +0000 UTC m=+1161.161842736" Jan 31 07:55:29 crc kubenswrapper[4826]: I0131 07:55:29.322416 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.91012036 podStartE2EDuration="6.322399527s" podCreationTimestamp="2026-01-31 07:55:23 +0000 UTC" firstStartedPulling="2026-01-31 07:55:24.689936837 +0000 UTC m=+1156.543823206" lastFinishedPulling="2026-01-31 07:55:28.102216014 +0000 UTC m=+1159.956102373" observedRunningTime="2026-01-31 07:55:29.314185064 +0000 UTC m=+1161.168071423" watchObservedRunningTime="2026-01-31 07:55:29.322399527 +0000 UTC m=+1161.176285886" Jan 31 07:55:30 crc kubenswrapper[4826]: I0131 07:55:30.242860 4826 generic.go:334] "Generic (PLEG): container finished" podID="c58d3b6c-1a69-42d8-ae41-e4346094e859" containerID="d5423d1249c9dc96e4d2689d7be9aeecb52cea83e15b7d4df9d9ab9ec082b4d4" exitCode=0 Jan 31 07:55:30 crc kubenswrapper[4826]: I0131 07:55:30.243257 4826 generic.go:334] "Generic (PLEG): container finished" podID="c58d3b6c-1a69-42d8-ae41-e4346094e859" containerID="fbd08d4a0f2cee05c0633ab0aa8c819dfc962480fd077ff72485aae39c40fade" exitCode=143 Jan 31 07:55:30 crc kubenswrapper[4826]: I0131 07:55:30.243878 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c58d3b6c-1a69-42d8-ae41-e4346094e859","Type":"ContainerDied","Data":"d5423d1249c9dc96e4d2689d7be9aeecb52cea83e15b7d4df9d9ab9ec082b4d4"} Jan 31 07:55:30 crc kubenswrapper[4826]: I0131 07:55:30.243920 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c58d3b6c-1a69-42d8-ae41-e4346094e859","Type":"ContainerDied","Data":"fbd08d4a0f2cee05c0633ab0aa8c819dfc962480fd077ff72485aae39c40fade"} Jan 31 07:55:30 crc kubenswrapper[4826]: I0131 07:55:30.424889 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 07:55:30 crc kubenswrapper[4826]: I0131 07:55:30.568428 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c58d3b6c-1a69-42d8-ae41-e4346094e859-combined-ca-bundle\") pod \"c58d3b6c-1a69-42d8-ae41-e4346094e859\" (UID: \"c58d3b6c-1a69-42d8-ae41-e4346094e859\") " Jan 31 07:55:30 crc kubenswrapper[4826]: I0131 07:55:30.569042 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c58d3b6c-1a69-42d8-ae41-e4346094e859-logs\") pod \"c58d3b6c-1a69-42d8-ae41-e4346094e859\" (UID: \"c58d3b6c-1a69-42d8-ae41-e4346094e859\") " Jan 31 07:55:30 crc kubenswrapper[4826]: I0131 07:55:30.569101 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trtg9\" (UniqueName: \"kubernetes.io/projected/c58d3b6c-1a69-42d8-ae41-e4346094e859-kube-api-access-trtg9\") pod \"c58d3b6c-1a69-42d8-ae41-e4346094e859\" (UID: \"c58d3b6c-1a69-42d8-ae41-e4346094e859\") " Jan 31 07:55:30 crc kubenswrapper[4826]: I0131 07:55:30.569187 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c58d3b6c-1a69-42d8-ae41-e4346094e859-config-data\") pod \"c58d3b6c-1a69-42d8-ae41-e4346094e859\" (UID: \"c58d3b6c-1a69-42d8-ae41-e4346094e859\") " Jan 31 07:55:30 crc kubenswrapper[4826]: I0131 07:55:30.569545 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c58d3b6c-1a69-42d8-ae41-e4346094e859-logs" (OuterVolumeSpecName: "logs") pod "c58d3b6c-1a69-42d8-ae41-e4346094e859" (UID: "c58d3b6c-1a69-42d8-ae41-e4346094e859"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:55:30 crc kubenswrapper[4826]: I0131 07:55:30.580557 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c58d3b6c-1a69-42d8-ae41-e4346094e859-kube-api-access-trtg9" (OuterVolumeSpecName: "kube-api-access-trtg9") pod "c58d3b6c-1a69-42d8-ae41-e4346094e859" (UID: "c58d3b6c-1a69-42d8-ae41-e4346094e859"). InnerVolumeSpecName "kube-api-access-trtg9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:55:30 crc kubenswrapper[4826]: I0131 07:55:30.597059 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c58d3b6c-1a69-42d8-ae41-e4346094e859-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c58d3b6c-1a69-42d8-ae41-e4346094e859" (UID: "c58d3b6c-1a69-42d8-ae41-e4346094e859"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:55:30 crc kubenswrapper[4826]: I0131 07:55:30.629214 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c58d3b6c-1a69-42d8-ae41-e4346094e859-config-data" (OuterVolumeSpecName: "config-data") pod "c58d3b6c-1a69-42d8-ae41-e4346094e859" (UID: "c58d3b6c-1a69-42d8-ae41-e4346094e859"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:55:30 crc kubenswrapper[4826]: I0131 07:55:30.671258 4826 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c58d3b6c-1a69-42d8-ae41-e4346094e859-logs\") on node \"crc\" DevicePath \"\"" Jan 31 07:55:30 crc kubenswrapper[4826]: I0131 07:55:30.671301 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trtg9\" (UniqueName: \"kubernetes.io/projected/c58d3b6c-1a69-42d8-ae41-e4346094e859-kube-api-access-trtg9\") on node \"crc\" DevicePath \"\"" Jan 31 07:55:30 crc kubenswrapper[4826]: I0131 07:55:30.671319 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c58d3b6c-1a69-42d8-ae41-e4346094e859-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:55:30 crc kubenswrapper[4826]: I0131 07:55:30.671334 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c58d3b6c-1a69-42d8-ae41-e4346094e859-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:55:31 crc kubenswrapper[4826]: I0131 07:55:31.255312 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c58d3b6c-1a69-42d8-ae41-e4346094e859","Type":"ContainerDied","Data":"a0ea90a7c76784c395e5adbb5459e168ef0af37708a80f215b091934813c4d06"} Jan 31 07:55:31 crc kubenswrapper[4826]: I0131 07:55:31.255363 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 07:55:31 crc kubenswrapper[4826]: I0131 07:55:31.255369 4826 scope.go:117] "RemoveContainer" containerID="d5423d1249c9dc96e4d2689d7be9aeecb52cea83e15b7d4df9d9ab9ec082b4d4" Jan 31 07:55:31 crc kubenswrapper[4826]: I0131 07:55:31.300033 4826 scope.go:117] "RemoveContainer" containerID="fbd08d4a0f2cee05c0633ab0aa8c819dfc962480fd077ff72485aae39c40fade" Jan 31 07:55:31 crc kubenswrapper[4826]: I0131 07:55:31.318316 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 07:55:31 crc kubenswrapper[4826]: I0131 07:55:31.339557 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 07:55:31 crc kubenswrapper[4826]: I0131 07:55:31.350921 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 31 07:55:31 crc kubenswrapper[4826]: E0131 07:55:31.351465 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c58d3b6c-1a69-42d8-ae41-e4346094e859" containerName="nova-metadata-log" Jan 31 07:55:31 crc kubenswrapper[4826]: I0131 07:55:31.351484 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="c58d3b6c-1a69-42d8-ae41-e4346094e859" containerName="nova-metadata-log" Jan 31 07:55:31 crc kubenswrapper[4826]: E0131 07:55:31.351535 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c58d3b6c-1a69-42d8-ae41-e4346094e859" containerName="nova-metadata-metadata" Jan 31 07:55:31 crc kubenswrapper[4826]: I0131 07:55:31.351545 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="c58d3b6c-1a69-42d8-ae41-e4346094e859" containerName="nova-metadata-metadata" Jan 31 07:55:31 crc kubenswrapper[4826]: I0131 07:55:31.352463 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="c58d3b6c-1a69-42d8-ae41-e4346094e859" containerName="nova-metadata-log" Jan 31 07:55:31 crc kubenswrapper[4826]: I0131 07:55:31.352509 4826 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c58d3b6c-1a69-42d8-ae41-e4346094e859" containerName="nova-metadata-metadata" Jan 31 07:55:31 crc kubenswrapper[4826]: I0131 07:55:31.353712 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 07:55:31 crc kubenswrapper[4826]: I0131 07:55:31.355913 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 31 07:55:31 crc kubenswrapper[4826]: I0131 07:55:31.358113 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 31 07:55:31 crc kubenswrapper[4826]: I0131 07:55:31.364371 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 07:55:31 crc kubenswrapper[4826]: I0131 07:55:31.488382 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg7qg\" (UniqueName: \"kubernetes.io/projected/cfc61bfb-5013-4d84-9548-4882d3b389ae-kube-api-access-xg7qg\") pod \"nova-metadata-0\" (UID: \"cfc61bfb-5013-4d84-9548-4882d3b389ae\") " pod="openstack/nova-metadata-0" Jan 31 07:55:31 crc kubenswrapper[4826]: I0131 07:55:31.488786 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cfc61bfb-5013-4d84-9548-4882d3b389ae-logs\") pod \"nova-metadata-0\" (UID: \"cfc61bfb-5013-4d84-9548-4882d3b389ae\") " pod="openstack/nova-metadata-0" Jan 31 07:55:31 crc kubenswrapper[4826]: I0131 07:55:31.488828 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfc61bfb-5013-4d84-9548-4882d3b389ae-config-data\") pod \"nova-metadata-0\" (UID: \"cfc61bfb-5013-4d84-9548-4882d3b389ae\") " pod="openstack/nova-metadata-0" Jan 31 07:55:31 crc kubenswrapper[4826]: I0131 07:55:31.488851 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfc61bfb-5013-4d84-9548-4882d3b389ae-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cfc61bfb-5013-4d84-9548-4882d3b389ae\") " pod="openstack/nova-metadata-0" Jan 31 07:55:31 crc kubenswrapper[4826]: I0131 07:55:31.488975 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfc61bfb-5013-4d84-9548-4882d3b389ae-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cfc61bfb-5013-4d84-9548-4882d3b389ae\") " pod="openstack/nova-metadata-0" Jan 31 07:55:31 crc kubenswrapper[4826]: I0131 07:55:31.590574 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfc61bfb-5013-4d84-9548-4882d3b389ae-config-data\") pod \"nova-metadata-0\" (UID: \"cfc61bfb-5013-4d84-9548-4882d3b389ae\") " pod="openstack/nova-metadata-0" Jan 31 07:55:31 crc kubenswrapper[4826]: I0131 07:55:31.590634 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfc61bfb-5013-4d84-9548-4882d3b389ae-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cfc61bfb-5013-4d84-9548-4882d3b389ae\") " pod="openstack/nova-metadata-0" Jan 31 07:55:31 crc kubenswrapper[4826]: I0131 07:55:31.590744 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfc61bfb-5013-4d84-9548-4882d3b389ae-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cfc61bfb-5013-4d84-9548-4882d3b389ae\") " pod="openstack/nova-metadata-0" Jan 31 07:55:31 crc kubenswrapper[4826]: I0131 07:55:31.590808 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg7qg\" (UniqueName: \"kubernetes.io/projected/cfc61bfb-5013-4d84-9548-4882d3b389ae-kube-api-access-xg7qg\") pod \"nova-metadata-0\" (UID: \"cfc61bfb-5013-4d84-9548-4882d3b389ae\") " pod="openstack/nova-metadata-0" Jan 31 07:55:31 crc kubenswrapper[4826]: I0131 07:55:31.590841 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cfc61bfb-5013-4d84-9548-4882d3b389ae-logs\") pod \"nova-metadata-0\" (UID: \"cfc61bfb-5013-4d84-9548-4882d3b389ae\") " pod="openstack/nova-metadata-0" Jan 31 07:55:31 crc kubenswrapper[4826]: I0131 07:55:31.591627 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cfc61bfb-5013-4d84-9548-4882d3b389ae-logs\") pod \"nova-metadata-0\" (UID: \"cfc61bfb-5013-4d84-9548-4882d3b389ae\") " pod="openstack/nova-metadata-0" Jan 31 07:55:31 crc kubenswrapper[4826]: I0131 07:55:31.595291 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfc61bfb-5013-4d84-9548-4882d3b389ae-config-data\") pod \"nova-metadata-0\" (UID: \"cfc61bfb-5013-4d84-9548-4882d3b389ae\") " pod="openstack/nova-metadata-0" Jan 31 07:55:31 crc kubenswrapper[4826]: I0131 07:55:31.595298 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfc61bfb-5013-4d84-9548-4882d3b389ae-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cfc61bfb-5013-4d84-9548-4882d3b389ae\") " pod="openstack/nova-metadata-0" Jan 31 07:55:31 crc kubenswrapper[4826]: I0131 07:55:31.598040 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfc61bfb-5013-4d84-9548-4882d3b389ae-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cfc61bfb-5013-4d84-9548-4882d3b389ae\") " pod="openstack/nova-metadata-0" Jan 31 07:55:31 crc kubenswrapper[4826]: I0131 07:55:31.622992 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg7qg\" (UniqueName: \"kubernetes.io/projected/cfc61bfb-5013-4d84-9548-4882d3b389ae-kube-api-access-xg7qg\") pod \"nova-metadata-0\" (UID: \"cfc61bfb-5013-4d84-9548-4882d3b389ae\") " pod="openstack/nova-metadata-0" Jan 31 07:55:31 crc kubenswrapper[4826]: I0131 07:55:31.673084 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 07:55:32 crc kubenswrapper[4826]: I0131 07:55:32.139035 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 07:55:32 crc kubenswrapper[4826]: I0131 07:55:32.273421 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cfc61bfb-5013-4d84-9548-4882d3b389ae","Type":"ContainerStarted","Data":"2a8a375516c2e15ef20cae8c3bf7b411fa052dfd9c490a1bac88c4858030e073"} Jan 31 07:55:32 crc kubenswrapper[4826]: I0131 07:55:32.824772 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c58d3b6c-1a69-42d8-ae41-e4346094e859" path="/var/lib/kubelet/pods/c58d3b6c-1a69-42d8-ae41-e4346094e859/volumes" Jan 31 07:55:33 crc kubenswrapper[4826]: I0131 07:55:33.285412 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cfc61bfb-5013-4d84-9548-4882d3b389ae","Type":"ContainerStarted","Data":"505568870ec52f51b031c311af5599e85dee871983a57033f8fbf936fb780d80"} Jan 31 07:55:33 crc kubenswrapper[4826]: I0131 07:55:33.285480 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cfc61bfb-5013-4d84-9548-4882d3b389ae","Type":"ContainerStarted","Data":"2fe6737cca41ac9566ad394e752534ef5184ed092ca645ae87f6af578fec6f8d"} Jan 31 07:55:33 crc kubenswrapper[4826]: I0131 07:55:33.287499 4826 generic.go:334] "Generic (PLEG): container finished" podID="883150ac-2f32-44c0-af19-2d5b94f385eb" containerID="0b4199b6aa9ce1136e187c461a738802c4ef1857f1e254384144d7272aa059f7" exitCode=0 Jan 31 07:55:33 crc kubenswrapper[4826]: I0131 07:55:33.287556 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-vlfm5" event={"ID":"883150ac-2f32-44c0-af19-2d5b94f385eb","Type":"ContainerDied","Data":"0b4199b6aa9ce1136e187c461a738802c4ef1857f1e254384144d7272aa059f7"} Jan 31 07:55:33 crc kubenswrapper[4826]: I0131 07:55:33.335688 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.335659597 podStartE2EDuration="2.335659597s" podCreationTimestamp="2026-01-31 07:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:55:33.313926102 +0000 UTC m=+1165.167812511" watchObservedRunningTime="2026-01-31 07:55:33.335659597 +0000 UTC m=+1165.189545996" Jan 31 07:55:33 crc kubenswrapper[4826]: I0131 07:55:33.752814 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 31 07:55:33 crc kubenswrapper[4826]: I0131 07:55:33.753334 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 31 07:55:33 crc kubenswrapper[4826]: I0131 07:55:33.973921 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 31 07:55:33 crc kubenswrapper[4826]: I0131 07:55:33.973981 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 31 07:55:34 crc kubenswrapper[4826]: I0131 07:55:34.008156 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 31 07:55:34 crc kubenswrapper[4826]: I0131 07:55:34.111709 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 31 07:55:34 crc kubenswrapper[4826]: I0131 
07:55:34.166237 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8b8cf6657-svkk9" Jan 31 07:55:34 crc kubenswrapper[4826]: I0131 07:55:34.246885 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-w9dvm"] Jan 31 07:55:34 crc kubenswrapper[4826]: I0131 07:55:34.247258 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58db5546cc-w9dvm" podUID="5854fd2e-9542-4dda-80f2-8ebf777c620b" containerName="dnsmasq-dns" containerID="cri-o://b6b7b4050974efe7659cee187265944752b597181d9123b56062d203fb77cd58" gracePeriod=10 Jan 31 07:55:34 crc kubenswrapper[4826]: I0131 07:55:34.400747 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 31 07:55:34 crc kubenswrapper[4826]: I0131 07:55:34.836160 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="998defd1-79f9-46b1-8b4f-00b2d99b97c2" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.175:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 07:55:34 crc kubenswrapper[4826]: I0131 07:55:34.836349 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="998defd1-79f9-46b1-8b4f-00b2d99b97c2" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.175:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 07:55:34 crc kubenswrapper[4826]: I0131 07:55:34.910863 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-vlfm5" Jan 31 07:55:34 crc kubenswrapper[4826]: I0131 07:55:34.918474 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-w9dvm" Jan 31 07:55:35 crc kubenswrapper[4826]: I0131 07:55:35.010497 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/883150ac-2f32-44c0-af19-2d5b94f385eb-config-data\") pod \"883150ac-2f32-44c0-af19-2d5b94f385eb\" (UID: \"883150ac-2f32-44c0-af19-2d5b94f385eb\") " Jan 31 07:55:35 crc kubenswrapper[4826]: I0131 07:55:35.010552 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwvtt\" (UniqueName: \"kubernetes.io/projected/5854fd2e-9542-4dda-80f2-8ebf777c620b-kube-api-access-mwvtt\") pod \"5854fd2e-9542-4dda-80f2-8ebf777c620b\" (UID: \"5854fd2e-9542-4dda-80f2-8ebf777c620b\") " Jan 31 07:55:35 crc kubenswrapper[4826]: I0131 07:55:35.010626 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/883150ac-2f32-44c0-af19-2d5b94f385eb-combined-ca-bundle\") pod \"883150ac-2f32-44c0-af19-2d5b94f385eb\" (UID: \"883150ac-2f32-44c0-af19-2d5b94f385eb\") " Jan 31 07:55:35 crc kubenswrapper[4826]: I0131 07:55:35.010659 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5854fd2e-9542-4dda-80f2-8ebf777c620b-ovsdbserver-sb\") pod \"5854fd2e-9542-4dda-80f2-8ebf777c620b\" (UID: \"5854fd2e-9542-4dda-80f2-8ebf777c620b\") " Jan 31 07:55:35 crc kubenswrapper[4826]: I0131 07:55:35.010740 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5854fd2e-9542-4dda-80f2-8ebf777c620b-dns-svc\") pod \"5854fd2e-9542-4dda-80f2-8ebf777c620b\" (UID: \"5854fd2e-9542-4dda-80f2-8ebf777c620b\") " Jan 31 07:55:35 crc kubenswrapper[4826]: I0131 07:55:35.010782 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/883150ac-2f32-44c0-af19-2d5b94f385eb-scripts\") pod \"883150ac-2f32-44c0-af19-2d5b94f385eb\" (UID: \"883150ac-2f32-44c0-af19-2d5b94f385eb\") " Jan 31 07:55:35 crc kubenswrapper[4826]: I0131 07:55:35.010809 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-424zz\" (UniqueName: \"kubernetes.io/projected/883150ac-2f32-44c0-af19-2d5b94f385eb-kube-api-access-424zz\") pod \"883150ac-2f32-44c0-af19-2d5b94f385eb\" (UID: \"883150ac-2f32-44c0-af19-2d5b94f385eb\") " Jan 31 07:55:35 crc kubenswrapper[4826]: I0131 07:55:35.010844 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5854fd2e-9542-4dda-80f2-8ebf777c620b-ovsdbserver-nb\") pod \"5854fd2e-9542-4dda-80f2-8ebf777c620b\" (UID: \"5854fd2e-9542-4dda-80f2-8ebf777c620b\") " Jan 31 07:55:35 crc kubenswrapper[4826]: I0131 07:55:35.010868 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5854fd2e-9542-4dda-80f2-8ebf777c620b-config\") pod \"5854fd2e-9542-4dda-80f2-8ebf777c620b\" (UID: \"5854fd2e-9542-4dda-80f2-8ebf777c620b\") " Jan 31 07:55:35 crc kubenswrapper[4826]: I0131 07:55:35.018140 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5854fd2e-9542-4dda-80f2-8ebf777c620b-kube-api-access-mwvtt" (OuterVolumeSpecName: "kube-api-access-mwvtt") pod "5854fd2e-9542-4dda-80f2-8ebf777c620b" (UID: 
"5854fd2e-9542-4dda-80f2-8ebf777c620b"). InnerVolumeSpecName "kube-api-access-mwvtt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:55:35 crc kubenswrapper[4826]: I0131 07:55:35.018938 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/883150ac-2f32-44c0-af19-2d5b94f385eb-scripts" (OuterVolumeSpecName: "scripts") pod "883150ac-2f32-44c0-af19-2d5b94f385eb" (UID: "883150ac-2f32-44c0-af19-2d5b94f385eb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:55:35 crc kubenswrapper[4826]: I0131 07:55:35.020858 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/883150ac-2f32-44c0-af19-2d5b94f385eb-kube-api-access-424zz" (OuterVolumeSpecName: "kube-api-access-424zz") pod "883150ac-2f32-44c0-af19-2d5b94f385eb" (UID: "883150ac-2f32-44c0-af19-2d5b94f385eb"). InnerVolumeSpecName "kube-api-access-424zz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:55:35 crc kubenswrapper[4826]: I0131 07:55:35.056400 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/883150ac-2f32-44c0-af19-2d5b94f385eb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "883150ac-2f32-44c0-af19-2d5b94f385eb" (UID: "883150ac-2f32-44c0-af19-2d5b94f385eb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:55:35 crc kubenswrapper[4826]: I0131 07:55:35.057708 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5854fd2e-9542-4dda-80f2-8ebf777c620b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5854fd2e-9542-4dda-80f2-8ebf777c620b" (UID: "5854fd2e-9542-4dda-80f2-8ebf777c620b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:55:35 crc kubenswrapper[4826]: I0131 07:55:35.071366 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/883150ac-2f32-44c0-af19-2d5b94f385eb-config-data" (OuterVolumeSpecName: "config-data") pod "883150ac-2f32-44c0-af19-2d5b94f385eb" (UID: "883150ac-2f32-44c0-af19-2d5b94f385eb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:55:35 crc kubenswrapper[4826]: I0131 07:55:35.073996 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5854fd2e-9542-4dda-80f2-8ebf777c620b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5854fd2e-9542-4dda-80f2-8ebf777c620b" (UID: "5854fd2e-9542-4dda-80f2-8ebf777c620b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:55:35 crc kubenswrapper[4826]: I0131 07:55:35.079801 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5854fd2e-9542-4dda-80f2-8ebf777c620b-config" (OuterVolumeSpecName: "config") pod "5854fd2e-9542-4dda-80f2-8ebf777c620b" (UID: "5854fd2e-9542-4dda-80f2-8ebf777c620b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:55:35 crc kubenswrapper[4826]: I0131 07:55:35.081617 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5854fd2e-9542-4dda-80f2-8ebf777c620b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5854fd2e-9542-4dda-80f2-8ebf777c620b" (UID: "5854fd2e-9542-4dda-80f2-8ebf777c620b"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:55:35 crc kubenswrapper[4826]: I0131 07:55:35.112702 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5854fd2e-9542-4dda-80f2-8ebf777c620b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 07:55:35 crc kubenswrapper[4826]: I0131 07:55:35.112747 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5854fd2e-9542-4dda-80f2-8ebf777c620b-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:55:35 crc kubenswrapper[4826]: I0131 07:55:35.112762 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/883150ac-2f32-44c0-af19-2d5b94f385eb-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:55:35 crc kubenswrapper[4826]: I0131 07:55:35.112780 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwvtt\" (UniqueName: \"kubernetes.io/projected/5854fd2e-9542-4dda-80f2-8ebf777c620b-kube-api-access-mwvtt\") on node \"crc\" DevicePath \"\"" Jan 31 07:55:35 crc kubenswrapper[4826]: I0131 07:55:35.112792 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/883150ac-2f32-44c0-af19-2d5b94f385eb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:55:35 crc kubenswrapper[4826]: I0131 07:55:35.112802 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5854fd2e-9542-4dda-80f2-8ebf777c620b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 07:55:35 crc kubenswrapper[4826]: I0131 07:55:35.112812 4826 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5854fd2e-9542-4dda-80f2-8ebf777c620b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 07:55:35 crc kubenswrapper[4826]: I0131 07:55:35.112821 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/883150ac-2f32-44c0-af19-2d5b94f385eb-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:55:35 crc kubenswrapper[4826]: I0131 07:55:35.112834 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-424zz\" (UniqueName: \"kubernetes.io/projected/883150ac-2f32-44c0-af19-2d5b94f385eb-kube-api-access-424zz\") on node \"crc\" DevicePath \"\"" Jan 31 07:55:35 crc kubenswrapper[4826]: I0131 07:55:35.305207 4826 generic.go:334] "Generic (PLEG): container finished" podID="5854fd2e-9542-4dda-80f2-8ebf777c620b" containerID="b6b7b4050974efe7659cee187265944752b597181d9123b56062d203fb77cd58" exitCode=0 Jan 31 07:55:35 crc kubenswrapper[4826]: I0131 07:55:35.305302 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-w9dvm" Jan 31 07:55:35 crc kubenswrapper[4826]: I0131 07:55:35.305300 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-w9dvm" event={"ID":"5854fd2e-9542-4dda-80f2-8ebf777c620b","Type":"ContainerDied","Data":"b6b7b4050974efe7659cee187265944752b597181d9123b56062d203fb77cd58"} Jan 31 07:55:35 crc kubenswrapper[4826]: I0131 07:55:35.305590 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-w9dvm" event={"ID":"5854fd2e-9542-4dda-80f2-8ebf777c620b","Type":"ContainerDied","Data":"e5e226e62f7fe9a0de42e579d8cb3b81c2b34fa853644f261287d5cd1d286d6b"} Jan 31 07:55:35 crc kubenswrapper[4826]: I0131 07:55:35.305625 4826 scope.go:117] "RemoveContainer" containerID="b6b7b4050974efe7659cee187265944752b597181d9123b56062d203fb77cd58" Jan 31 07:55:35 crc kubenswrapper[4826]: I0131 07:55:35.307513 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-vlfm5" event={"ID":"883150ac-2f32-44c0-af19-2d5b94f385eb","Type":"ContainerDied","Data":"7c080556f6731d1aa04ba9ca4a7ab92977b7e840d8658139de7076ebb012571d"} Jan 31 07:55:35 crc kubenswrapper[4826]: I0131 07:55:35.307556 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c080556f6731d1aa04ba9ca4a7ab92977b7e840d8658139de7076ebb012571d" Jan 31 07:55:35 crc kubenswrapper[4826]: I0131 07:55:35.307554 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-vlfm5" Jan 31 07:55:35 crc kubenswrapper[4826]: I0131 07:55:35.329644 4826 scope.go:117] "RemoveContainer" containerID="16a30fa75ce36595dc984bbb6e27af489f827baff5e6de2d144e31253ddcece0" Jan 31 07:55:35 crc kubenswrapper[4826]: I0131 07:55:35.372379 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-w9dvm"] Jan 31 07:55:35 crc kubenswrapper[4826]: I0131 07:55:35.382229 4826 scope.go:117] "RemoveContainer" containerID="b6b7b4050974efe7659cee187265944752b597181d9123b56062d203fb77cd58" Jan 31 07:55:35 crc kubenswrapper[4826]: E0131 07:55:35.388123 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6b7b4050974efe7659cee187265944752b597181d9123b56062d203fb77cd58\": container with ID starting with b6b7b4050974efe7659cee187265944752b597181d9123b56062d203fb77cd58 not found: ID does not exist" containerID="b6b7b4050974efe7659cee187265944752b597181d9123b56062d203fb77cd58" Jan 31 07:55:35 crc kubenswrapper[4826]: I0131 07:55:35.388184 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6b7b4050974efe7659cee187265944752b597181d9123b56062d203fb77cd58"} err="failed to get container status \"b6b7b4050974efe7659cee187265944752b597181d9123b56062d203fb77cd58\": rpc error: code = NotFound desc = could not find container \"b6b7b4050974efe7659cee187265944752b597181d9123b56062d203fb77cd58\": container with ID starting with b6b7b4050974efe7659cee187265944752b597181d9123b56062d203fb77cd58 not found: ID does not exist" Jan 31 07:55:35 crc kubenswrapper[4826]: I0131 07:55:35.388222 4826 scope.go:117] "RemoveContainer" containerID="16a30fa75ce36595dc984bbb6e27af489f827baff5e6de2d144e31253ddcece0" Jan 31 07:55:35 crc kubenswrapper[4826]: I0131 07:55:35.392038 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-w9dvm"] Jan 31 07:55:35 crc 
kubenswrapper[4826]: E0131 07:55:35.394084 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16a30fa75ce36595dc984bbb6e27af489f827baff5e6de2d144e31253ddcece0\": container with ID starting with 16a30fa75ce36595dc984bbb6e27af489f827baff5e6de2d144e31253ddcece0 not found: ID does not exist" containerID="16a30fa75ce36595dc984bbb6e27af489f827baff5e6de2d144e31253ddcece0" Jan 31 07:55:35 crc kubenswrapper[4826]: I0131 07:55:35.394124 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16a30fa75ce36595dc984bbb6e27af489f827baff5e6de2d144e31253ddcece0"} err="failed to get container status \"16a30fa75ce36595dc984bbb6e27af489f827baff5e6de2d144e31253ddcece0\": rpc error: code = NotFound desc = could not find container \"16a30fa75ce36595dc984bbb6e27af489f827baff5e6de2d144e31253ddcece0\": container with ID starting with 16a30fa75ce36595dc984bbb6e27af489f827baff5e6de2d144e31253ddcece0 not found: ID does not exist" Jan 31 07:55:35 crc kubenswrapper[4826]: I0131 07:55:35.440063 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 31 07:55:35 crc kubenswrapper[4826]: I0131 07:55:35.633361 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 31 07:55:35 crc kubenswrapper[4826]: I0131 07:55:35.633632 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="998defd1-79f9-46b1-8b4f-00b2d99b97c2" containerName="nova-api-log" containerID="cri-o://b748ce92079b87e2284f1bae2f4a521ded4a4b62bc7ea9a88cb02e25066bec17" gracePeriod=30 Jan 31 07:55:35 crc kubenswrapper[4826]: I0131 07:55:35.633753 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="998defd1-79f9-46b1-8b4f-00b2d99b97c2" containerName="nova-api-api" containerID="cri-o://1e53991d01968c0805db1522b47f5d927b04f0871f4e3e74b063cf47ec01dbe5" gracePeriod=30 Jan 31 07:55:35 crc kubenswrapper[4826]: I0131 07:55:35.644239 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 07:55:35 crc kubenswrapper[4826]: I0131 07:55:35.656982 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 07:55:35 crc kubenswrapper[4826]: I0131 07:55:35.657240 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="cfc61bfb-5013-4d84-9548-4882d3b389ae" containerName="nova-metadata-log" containerID="cri-o://2fe6737cca41ac9566ad394e752534ef5184ed092ca645ae87f6af578fec6f8d" gracePeriod=30 Jan 31 07:55:35 crc kubenswrapper[4826]: I0131 07:55:35.657653 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="cfc61bfb-5013-4d84-9548-4882d3b389ae" containerName="nova-metadata-metadata" containerID="cri-o://505568870ec52f51b031c311af5599e85dee871983a57033f8fbf936fb780d80" gracePeriod=30 Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.155493 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.234674 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cfc61bfb-5013-4d84-9548-4882d3b389ae-logs\") pod \"cfc61bfb-5013-4d84-9548-4882d3b389ae\" (UID: \"cfc61bfb-5013-4d84-9548-4882d3b389ae\") " Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.234736 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfc61bfb-5013-4d84-9548-4882d3b389ae-config-data\") pod \"cfc61bfb-5013-4d84-9548-4882d3b389ae\" (UID: \"cfc61bfb-5013-4d84-9548-4882d3b389ae\") " Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.234867 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xg7qg\" (UniqueName: \"kubernetes.io/projected/cfc61bfb-5013-4d84-9548-4882d3b389ae-kube-api-access-xg7qg\") pod \"cfc61bfb-5013-4d84-9548-4882d3b389ae\" (UID: \"cfc61bfb-5013-4d84-9548-4882d3b389ae\") " Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.234945 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfc61bfb-5013-4d84-9548-4882d3b389ae-combined-ca-bundle\") pod \"cfc61bfb-5013-4d84-9548-4882d3b389ae\" (UID: \"cfc61bfb-5013-4d84-9548-4882d3b389ae\") " Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.235018 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfc61bfb-5013-4d84-9548-4882d3b389ae-nova-metadata-tls-certs\") pod \"cfc61bfb-5013-4d84-9548-4882d3b389ae\" (UID: \"cfc61bfb-5013-4d84-9548-4882d3b389ae\") " Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.235315 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfc61bfb-5013-4d84-9548-4882d3b389ae-logs" (OuterVolumeSpecName: "logs") pod "cfc61bfb-5013-4d84-9548-4882d3b389ae" (UID: "cfc61bfb-5013-4d84-9548-4882d3b389ae"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.263376 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfc61bfb-5013-4d84-9548-4882d3b389ae-kube-api-access-xg7qg" (OuterVolumeSpecName: "kube-api-access-xg7qg") pod "cfc61bfb-5013-4d84-9548-4882d3b389ae" (UID: "cfc61bfb-5013-4d84-9548-4882d3b389ae"). InnerVolumeSpecName "kube-api-access-xg7qg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.269149 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfc61bfb-5013-4d84-9548-4882d3b389ae-config-data" (OuterVolumeSpecName: "config-data") pod "cfc61bfb-5013-4d84-9548-4882d3b389ae" (UID: "cfc61bfb-5013-4d84-9548-4882d3b389ae"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.274394 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfc61bfb-5013-4d84-9548-4882d3b389ae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cfc61bfb-5013-4d84-9548-4882d3b389ae" (UID: "cfc61bfb-5013-4d84-9548-4882d3b389ae"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.320098 4826 generic.go:334] "Generic (PLEG): container finished" podID="998defd1-79f9-46b1-8b4f-00b2d99b97c2" containerID="b748ce92079b87e2284f1bae2f4a521ded4a4b62bc7ea9a88cb02e25066bec17" exitCode=143 Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.320448 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"998defd1-79f9-46b1-8b4f-00b2d99b97c2","Type":"ContainerDied","Data":"b748ce92079b87e2284f1bae2f4a521ded4a4b62bc7ea9a88cb02e25066bec17"} Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.323000 4826 generic.go:334] "Generic (PLEG): container finished" podID="cfc61bfb-5013-4d84-9548-4882d3b389ae" containerID="505568870ec52f51b031c311af5599e85dee871983a57033f8fbf936fb780d80" exitCode=0 Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.323268 4826 generic.go:334] "Generic (PLEG): container finished" podID="cfc61bfb-5013-4d84-9548-4882d3b389ae" containerID="2fe6737cca41ac9566ad394e752534ef5184ed092ca645ae87f6af578fec6f8d" exitCode=143 Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.323144 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cfc61bfb-5013-4d84-9548-4882d3b389ae","Type":"ContainerDied","Data":"505568870ec52f51b031c311af5599e85dee871983a57033f8fbf936fb780d80"} Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.323150 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.323854 4826 scope.go:117] "RemoveContainer" containerID="505568870ec52f51b031c311af5599e85dee871983a57033f8fbf936fb780d80" Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.323685 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cfc61bfb-5013-4d84-9548-4882d3b389ae","Type":"ContainerDied","Data":"2fe6737cca41ac9566ad394e752534ef5184ed092ca645ae87f6af578fec6f8d"} Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.324086 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cfc61bfb-5013-4d84-9548-4882d3b389ae","Type":"ContainerDied","Data":"2a8a375516c2e15ef20cae8c3bf7b411fa052dfd9c490a1bac88c4858030e073"} Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.324368 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="0a92c32b-6fbb-4dc2-ba1f-bd86d3e7ae6d" containerName="nova-scheduler-scheduler" containerID="cri-o://ed265d8295542c4f3fd1aec88201000a7f01ee6bc43547ee8bb8b7d2c1bc1b1f" gracePeriod=30 Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.333136 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfc61bfb-5013-4d84-9548-4882d3b389ae-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "cfc61bfb-5013-4d84-9548-4882d3b389ae" (UID: "cfc61bfb-5013-4d84-9548-4882d3b389ae"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.337730 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xg7qg\" (UniqueName: \"kubernetes.io/projected/cfc61bfb-5013-4d84-9548-4882d3b389ae-kube-api-access-xg7qg\") on node \"crc\" DevicePath \"\"" Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.337777 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfc61bfb-5013-4d84-9548-4882d3b389ae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.337797 4826 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfc61bfb-5013-4d84-9548-4882d3b389ae-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.337809 4826 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cfc61bfb-5013-4d84-9548-4882d3b389ae-logs\") on node \"crc\" DevicePath \"\"" Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.337820 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfc61bfb-5013-4d84-9548-4882d3b389ae-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.363978 4826 scope.go:117] "RemoveContainer" containerID="2fe6737cca41ac9566ad394e752534ef5184ed092ca645ae87f6af578fec6f8d" Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.418281 4826 scope.go:117] "RemoveContainer" containerID="505568870ec52f51b031c311af5599e85dee871983a57033f8fbf936fb780d80" Jan 31 07:55:36 crc kubenswrapper[4826]: E0131 07:55:36.419166 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"505568870ec52f51b031c311af5599e85dee871983a57033f8fbf936fb780d80\": container with ID starting with 505568870ec52f51b031c311af5599e85dee871983a57033f8fbf936fb780d80 not found: ID does not exist" containerID="505568870ec52f51b031c311af5599e85dee871983a57033f8fbf936fb780d80" Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.419215 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"505568870ec52f51b031c311af5599e85dee871983a57033f8fbf936fb780d80"} err="failed to get container status \"505568870ec52f51b031c311af5599e85dee871983a57033f8fbf936fb780d80\": rpc error: code = NotFound desc = could not find container \"505568870ec52f51b031c311af5599e85dee871983a57033f8fbf936fb780d80\": container with ID starting with 505568870ec52f51b031c311af5599e85dee871983a57033f8fbf936fb780d80 not found: ID does not exist" Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.419250 4826 scope.go:117] "RemoveContainer" containerID="2fe6737cca41ac9566ad394e752534ef5184ed092ca645ae87f6af578fec6f8d" Jan 31 07:55:36 crc kubenswrapper[4826]: E0131 07:55:36.420996 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fe6737cca41ac9566ad394e752534ef5184ed092ca645ae87f6af578fec6f8d\": container with ID starting with 2fe6737cca41ac9566ad394e752534ef5184ed092ca645ae87f6af578fec6f8d not found: ID does not exist" containerID="2fe6737cca41ac9566ad394e752534ef5184ed092ca645ae87f6af578fec6f8d" Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.421026 4826 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fe6737cca41ac9566ad394e752534ef5184ed092ca645ae87f6af578fec6f8d"} err="failed to get container status \"2fe6737cca41ac9566ad394e752534ef5184ed092ca645ae87f6af578fec6f8d\": rpc error: code = NotFound desc = could not find container \"2fe6737cca41ac9566ad394e752534ef5184ed092ca645ae87f6af578fec6f8d\": container with ID starting with 2fe6737cca41ac9566ad394e752534ef5184ed092ca645ae87f6af578fec6f8d not found: ID does not exist" Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.421042 4826 scope.go:117] "RemoveContainer" containerID="505568870ec52f51b031c311af5599e85dee871983a57033f8fbf936fb780d80" Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.421589 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"505568870ec52f51b031c311af5599e85dee871983a57033f8fbf936fb780d80"} err="failed to get container status \"505568870ec52f51b031c311af5599e85dee871983a57033f8fbf936fb780d80\": rpc error: code = NotFound desc = could not find container \"505568870ec52f51b031c311af5599e85dee871983a57033f8fbf936fb780d80\": container with ID starting with 505568870ec52f51b031c311af5599e85dee871983a57033f8fbf936fb780d80 not found: ID does not exist" Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.421612 4826 scope.go:117] "RemoveContainer" containerID="2fe6737cca41ac9566ad394e752534ef5184ed092ca645ae87f6af578fec6f8d" Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.422002 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fe6737cca41ac9566ad394e752534ef5184ed092ca645ae87f6af578fec6f8d"} err="failed to get container status \"2fe6737cca41ac9566ad394e752534ef5184ed092ca645ae87f6af578fec6f8d\": rpc error: code = NotFound desc = could not find container \"2fe6737cca41ac9566ad394e752534ef5184ed092ca645ae87f6af578fec6f8d\": container with ID starting with 2fe6737cca41ac9566ad394e752534ef5184ed092ca645ae87f6af578fec6f8d not found: ID does not exist" Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.658830 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.667724 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.686559 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 31 07:55:36 crc kubenswrapper[4826]: E0131 07:55:36.687364 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="883150ac-2f32-44c0-af19-2d5b94f385eb" containerName="nova-manage" Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.687394 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="883150ac-2f32-44c0-af19-2d5b94f385eb" containerName="nova-manage" Jan 31 07:55:36 crc kubenswrapper[4826]: E0131 07:55:36.687427 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5854fd2e-9542-4dda-80f2-8ebf777c620b" containerName="dnsmasq-dns" Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.687435 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="5854fd2e-9542-4dda-80f2-8ebf777c620b" containerName="dnsmasq-dns" Jan 31 07:55:36 crc kubenswrapper[4826]: E0131 07:55:36.687444 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfc61bfb-5013-4d84-9548-4882d3b389ae" containerName="nova-metadata-metadata" Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.687451 4826 
state_mem.go:107] "Deleted CPUSet assignment" podUID="cfc61bfb-5013-4d84-9548-4882d3b389ae" containerName="nova-metadata-metadata" Jan 31 07:55:36 crc kubenswrapper[4826]: E0131 07:55:36.687467 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5854fd2e-9542-4dda-80f2-8ebf777c620b" containerName="init" Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.687473 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="5854fd2e-9542-4dda-80f2-8ebf777c620b" containerName="init" Jan 31 07:55:36 crc kubenswrapper[4826]: E0131 07:55:36.687481 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfc61bfb-5013-4d84-9548-4882d3b389ae" containerName="nova-metadata-log" Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.687492 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfc61bfb-5013-4d84-9548-4882d3b389ae" containerName="nova-metadata-log" Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.687670 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="5854fd2e-9542-4dda-80f2-8ebf777c620b" containerName="dnsmasq-dns" Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.687720 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfc61bfb-5013-4d84-9548-4882d3b389ae" containerName="nova-metadata-metadata" Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.687744 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="883150ac-2f32-44c0-af19-2d5b94f385eb" containerName="nova-manage" Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.687763 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfc61bfb-5013-4d84-9548-4882d3b389ae" containerName="nova-metadata-log" Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.689060 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.692247 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.692475 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.704777 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.758448 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3049b66-c738-431f-9c22-ccb9bdd664cd-config-data\") pod \"nova-metadata-0\" (UID: \"c3049b66-c738-431f-9c22-ccb9bdd664cd\") " pod="openstack/nova-metadata-0" Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.758596 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3049b66-c738-431f-9c22-ccb9bdd664cd-logs\") pod \"nova-metadata-0\" (UID: \"c3049b66-c738-431f-9c22-ccb9bdd664cd\") " pod="openstack/nova-metadata-0" Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.758983 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9bt6\" (UniqueName: \"kubernetes.io/projected/c3049b66-c738-431f-9c22-ccb9bdd664cd-kube-api-access-l9bt6\") pod \"nova-metadata-0\" (UID: \"c3049b66-c738-431f-9c22-ccb9bdd664cd\") " pod="openstack/nova-metadata-0" Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.759174 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3049b66-c738-431f-9c22-ccb9bdd664cd-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c3049b66-c738-431f-9c22-ccb9bdd664cd\") " pod="openstack/nova-metadata-0" Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.759402 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3049b66-c738-431f-9c22-ccb9bdd664cd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c3049b66-c738-431f-9c22-ccb9bdd664cd\") " pod="openstack/nova-metadata-0" Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.822863 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5854fd2e-9542-4dda-80f2-8ebf777c620b" path="/var/lib/kubelet/pods/5854fd2e-9542-4dda-80f2-8ebf777c620b/volumes" Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.824295 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfc61bfb-5013-4d84-9548-4882d3b389ae" path="/var/lib/kubelet/pods/cfc61bfb-5013-4d84-9548-4882d3b389ae/volumes" Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.868487 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9bt6\" (UniqueName: \"kubernetes.io/projected/c3049b66-c738-431f-9c22-ccb9bdd664cd-kube-api-access-l9bt6\") pod \"nova-metadata-0\" (UID: \"c3049b66-c738-431f-9c22-ccb9bdd664cd\") " pod="openstack/nova-metadata-0" Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.870154 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c3049b66-c738-431f-9c22-ccb9bdd664cd-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c3049b66-c738-431f-9c22-ccb9bdd664cd\") " pod="openstack/nova-metadata-0" Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.871115 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3049b66-c738-431f-9c22-ccb9bdd664cd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c3049b66-c738-431f-9c22-ccb9bdd664cd\") " pod="openstack/nova-metadata-0" Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.872750 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3049b66-c738-431f-9c22-ccb9bdd664cd-config-data\") pod \"nova-metadata-0\" (UID: \"c3049b66-c738-431f-9c22-ccb9bdd664cd\") " pod="openstack/nova-metadata-0" Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.873570 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3049b66-c738-431f-9c22-ccb9bdd664cd-logs\") pod \"nova-metadata-0\" (UID: \"c3049b66-c738-431f-9c22-ccb9bdd664cd\") " pod="openstack/nova-metadata-0" Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.874410 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3049b66-c738-431f-9c22-ccb9bdd664cd-logs\") pod \"nova-metadata-0\" (UID: \"c3049b66-c738-431f-9c22-ccb9bdd664cd\") " pod="openstack/nova-metadata-0" Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.876753 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3049b66-c738-431f-9c22-ccb9bdd664cd-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c3049b66-c738-431f-9c22-ccb9bdd664cd\") " pod="openstack/nova-metadata-0" Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.877486 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3049b66-c738-431f-9c22-ccb9bdd664cd-config-data\") pod \"nova-metadata-0\" (UID: \"c3049b66-c738-431f-9c22-ccb9bdd664cd\") " pod="openstack/nova-metadata-0" Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.887178 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9bt6\" (UniqueName: \"kubernetes.io/projected/c3049b66-c738-431f-9c22-ccb9bdd664cd-kube-api-access-l9bt6\") pod \"nova-metadata-0\" (UID: \"c3049b66-c738-431f-9c22-ccb9bdd664cd\") " pod="openstack/nova-metadata-0" Jan 31 07:55:36 crc kubenswrapper[4826]: I0131 07:55:36.889576 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3049b66-c738-431f-9c22-ccb9bdd664cd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c3049b66-c738-431f-9c22-ccb9bdd664cd\") " pod="openstack/nova-metadata-0" Jan 31 07:55:37 crc kubenswrapper[4826]: I0131 07:55:37.007936 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 07:55:37 crc kubenswrapper[4826]: I0131 07:55:37.481916 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 07:55:38 crc kubenswrapper[4826]: I0131 07:55:38.344994 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c3049b66-c738-431f-9c22-ccb9bdd664cd","Type":"ContainerStarted","Data":"f8af4b33c05c58d4aebe8ca495b71c4c1e659fe9d2e0b9159a95897811a18e7b"} Jan 31 07:55:38 crc kubenswrapper[4826]: I0131 07:55:38.345365 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c3049b66-c738-431f-9c22-ccb9bdd664cd","Type":"ContainerStarted","Data":"0d68bb4c61dc6df19628fa343d96a027d66c350245dc76935402cdda361cf5ee"} Jan 31 07:55:38 crc kubenswrapper[4826]: I0131 07:55:38.345377 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c3049b66-c738-431f-9c22-ccb9bdd664cd","Type":"ContainerStarted","Data":"e6e0e06301daf60b87e8b52955571169bd7f2684e19682aa07b21a8649330c89"} Jan 31 07:55:38 crc kubenswrapper[4826]: I0131 07:55:38.365280 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.365254848 podStartE2EDuration="2.365254848s" podCreationTimestamp="2026-01-31 07:55:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:55:38.361934843 +0000 UTC m=+1170.215821212" watchObservedRunningTime="2026-01-31 07:55:38.365254848 +0000 UTC m=+1170.219141207" Jan 31 07:55:38 crc kubenswrapper[4826]: I0131 07:55:38.675639 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 07:55:38 crc kubenswrapper[4826]: I0131 07:55:38.675842 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="d75dafb0-9882-4d3a-8c85-7be1dc197924" containerName="kube-state-metrics" containerID="cri-o://83997fadddd58e1f5fb3489c5e255b924b8f97588561d10ff23c77e2338b8f42" gracePeriod=30 Jan 31 07:55:38 crc kubenswrapper[4826]: E0131 07:55:38.980721 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ed265d8295542c4f3fd1aec88201000a7f01ee6bc43547ee8bb8b7d2c1bc1b1f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 31 07:55:38 crc kubenswrapper[4826]: E0131 07:55:38.992701 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ed265d8295542c4f3fd1aec88201000a7f01ee6bc43547ee8bb8b7d2c1bc1b1f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 31 07:55:38 crc kubenswrapper[4826]: E0131 07:55:38.995267 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ed265d8295542c4f3fd1aec88201000a7f01ee6bc43547ee8bb8b7d2c1bc1b1f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 31 07:55:38 crc kubenswrapper[4826]: E0131 07:55:38.995366 4826 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is 
stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="0a92c32b-6fbb-4dc2-ba1f-bd86d3e7ae6d" containerName="nova-scheduler-scheduler" Jan 31 07:55:39 crc kubenswrapper[4826]: I0131 07:55:39.163856 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 31 07:55:39 crc kubenswrapper[4826]: I0131 07:55:39.329659 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdkms\" (UniqueName: \"kubernetes.io/projected/d75dafb0-9882-4d3a-8c85-7be1dc197924-kube-api-access-sdkms\") pod \"d75dafb0-9882-4d3a-8c85-7be1dc197924\" (UID: \"d75dafb0-9882-4d3a-8c85-7be1dc197924\") " Jan 31 07:55:39 crc kubenswrapper[4826]: I0131 07:55:39.335874 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d75dafb0-9882-4d3a-8c85-7be1dc197924-kube-api-access-sdkms" (OuterVolumeSpecName: "kube-api-access-sdkms") pod "d75dafb0-9882-4d3a-8c85-7be1dc197924" (UID: "d75dafb0-9882-4d3a-8c85-7be1dc197924"). InnerVolumeSpecName "kube-api-access-sdkms". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:55:39 crc kubenswrapper[4826]: I0131 07:55:39.354095 4826 generic.go:334] "Generic (PLEG): container finished" podID="2a5eb55e-27a4-4e01-b087-590ba6ff5421" containerID="55a3ca03a956bc935cc2a85cc654217e356a65297f73ce3f8e6d29ed6b83bb20" exitCode=0 Jan 31 07:55:39 crc kubenswrapper[4826]: I0131 07:55:39.354175 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-llhxq" event={"ID":"2a5eb55e-27a4-4e01-b087-590ba6ff5421","Type":"ContainerDied","Data":"55a3ca03a956bc935cc2a85cc654217e356a65297f73ce3f8e6d29ed6b83bb20"} Jan 31 07:55:39 crc kubenswrapper[4826]: I0131 07:55:39.357274 4826 generic.go:334] "Generic (PLEG): container finished" podID="d75dafb0-9882-4d3a-8c85-7be1dc197924" containerID="83997fadddd58e1f5fb3489c5e255b924b8f97588561d10ff23c77e2338b8f42" exitCode=2 Jan 31 07:55:39 crc kubenswrapper[4826]: I0131 07:55:39.357562 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d75dafb0-9882-4d3a-8c85-7be1dc197924","Type":"ContainerDied","Data":"83997fadddd58e1f5fb3489c5e255b924b8f97588561d10ff23c77e2338b8f42"} Jan 31 07:55:39 crc kubenswrapper[4826]: I0131 07:55:39.357580 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 31 07:55:39 crc kubenswrapper[4826]: I0131 07:55:39.357605 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d75dafb0-9882-4d3a-8c85-7be1dc197924","Type":"ContainerDied","Data":"7ef7481e187185c3f2009120d4ed4a16080a04c95f5b4f3aa7dd68d0fbdcd33b"} Jan 31 07:55:39 crc kubenswrapper[4826]: I0131 07:55:39.357626 4826 scope.go:117] "RemoveContainer" containerID="83997fadddd58e1f5fb3489c5e255b924b8f97588561d10ff23c77e2338b8f42" Jan 31 07:55:39 crc kubenswrapper[4826]: I0131 07:55:39.410192 4826 scope.go:117] "RemoveContainer" containerID="83997fadddd58e1f5fb3489c5e255b924b8f97588561d10ff23c77e2338b8f42" Jan 31 07:55:39 crc kubenswrapper[4826]: E0131 07:55:39.415322 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83997fadddd58e1f5fb3489c5e255b924b8f97588561d10ff23c77e2338b8f42\": container with ID starting with 83997fadddd58e1f5fb3489c5e255b924b8f97588561d10ff23c77e2338b8f42 not found: ID does not exist" containerID="83997fadddd58e1f5fb3489c5e255b924b8f97588561d10ff23c77e2338b8f42" Jan 31 07:55:39 crc kubenswrapper[4826]: I0131 07:55:39.415578 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83997fadddd58e1f5fb3489c5e255b924b8f97588561d10ff23c77e2338b8f42"} err="failed to get container status \"83997fadddd58e1f5fb3489c5e255b924b8f97588561d10ff23c77e2338b8f42\": rpc error: code = NotFound desc = could not find container \"83997fadddd58e1f5fb3489c5e255b924b8f97588561d10ff23c77e2338b8f42\": container with ID starting with 83997fadddd58e1f5fb3489c5e255b924b8f97588561d10ff23c77e2338b8f42 not found: ID does not exist" Jan 31 07:55:39 crc kubenswrapper[4826]: I0131 07:55:39.427355 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 07:55:39 crc kubenswrapper[4826]: I0131 07:55:39.432654 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdkms\" (UniqueName: \"kubernetes.io/projected/d75dafb0-9882-4d3a-8c85-7be1dc197924-kube-api-access-sdkms\") on node \"crc\" DevicePath \"\"" Jan 31 07:55:39 crc kubenswrapper[4826]: I0131 07:55:39.440837 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 07:55:39 crc kubenswrapper[4826]: I0131 07:55:39.452815 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 07:55:39 crc kubenswrapper[4826]: E0131 07:55:39.453252 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d75dafb0-9882-4d3a-8c85-7be1dc197924" containerName="kube-state-metrics" Jan 31 07:55:39 crc kubenswrapper[4826]: I0131 07:55:39.453274 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="d75dafb0-9882-4d3a-8c85-7be1dc197924" containerName="kube-state-metrics" Jan 31 07:55:39 crc kubenswrapper[4826]: I0131 07:55:39.453488 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="d75dafb0-9882-4d3a-8c85-7be1dc197924" containerName="kube-state-metrics" Jan 31 07:55:39 crc kubenswrapper[4826]: I0131 07:55:39.454368 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 31 07:55:39 crc kubenswrapper[4826]: I0131 07:55:39.456171 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 31 07:55:39 crc kubenswrapper[4826]: I0131 07:55:39.456408 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 31 07:55:39 crc kubenswrapper[4826]: I0131 07:55:39.479401 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 07:55:39 crc kubenswrapper[4826]: I0131 07:55:39.533878 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3a26b8-4b55-49a9-90a6-e66bb00f1425-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"ae3a26b8-4b55-49a9-90a6-e66bb00f1425\") " pod="openstack/kube-state-metrics-0" Jan 31 07:55:39 crc kubenswrapper[4826]: I0131 07:55:39.534002 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae3a26b8-4b55-49a9-90a6-e66bb00f1425-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ae3a26b8-4b55-49a9-90a6-e66bb00f1425\") " pod="openstack/kube-state-metrics-0" Jan 31 07:55:39 crc kubenswrapper[4826]: I0131 07:55:39.534063 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ae3a26b8-4b55-49a9-90a6-e66bb00f1425-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"ae3a26b8-4b55-49a9-90a6-e66bb00f1425\") " pod="openstack/kube-state-metrics-0" Jan 31 07:55:39 crc kubenswrapper[4826]: I0131 07:55:39.534110 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9qmv\" (UniqueName: \"kubernetes.io/projected/ae3a26b8-4b55-49a9-90a6-e66bb00f1425-kube-api-access-m9qmv\") pod \"kube-state-metrics-0\" (UID: \"ae3a26b8-4b55-49a9-90a6-e66bb00f1425\") " pod="openstack/kube-state-metrics-0" Jan 31 07:55:39 crc kubenswrapper[4826]: I0131 07:55:39.636737 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae3a26b8-4b55-49a9-90a6-e66bb00f1425-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ae3a26b8-4b55-49a9-90a6-e66bb00f1425\") " pod="openstack/kube-state-metrics-0" Jan 31 07:55:39 crc kubenswrapper[4826]: I0131 07:55:39.636822 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ae3a26b8-4b55-49a9-90a6-e66bb00f1425-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"ae3a26b8-4b55-49a9-90a6-e66bb00f1425\") " pod="openstack/kube-state-metrics-0" Jan 31 07:55:39 crc kubenswrapper[4826]: I0131 07:55:39.636877 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9qmv\" (UniqueName: \"kubernetes.io/projected/ae3a26b8-4b55-49a9-90a6-e66bb00f1425-kube-api-access-m9qmv\") pod \"kube-state-metrics-0\" (UID: \"ae3a26b8-4b55-49a9-90a6-e66bb00f1425\") " pod="openstack/kube-state-metrics-0" Jan 31 07:55:39 crc kubenswrapper[4826]: I0131 07:55:39.636990 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/ae3a26b8-4b55-49a9-90a6-e66bb00f1425-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"ae3a26b8-4b55-49a9-90a6-e66bb00f1425\") " pod="openstack/kube-state-metrics-0" Jan 31 07:55:39 crc kubenswrapper[4826]: I0131 07:55:39.640918 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3a26b8-4b55-49a9-90a6-e66bb00f1425-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"ae3a26b8-4b55-49a9-90a6-e66bb00f1425\") " pod="openstack/kube-state-metrics-0" Jan 31 07:55:39 crc kubenswrapper[4826]: I0131 07:55:39.641681 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ae3a26b8-4b55-49a9-90a6-e66bb00f1425-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"ae3a26b8-4b55-49a9-90a6-e66bb00f1425\") " pod="openstack/kube-state-metrics-0" Jan 31 07:55:39 crc kubenswrapper[4826]: I0131 07:55:39.649001 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae3a26b8-4b55-49a9-90a6-e66bb00f1425-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ae3a26b8-4b55-49a9-90a6-e66bb00f1425\") " pod="openstack/kube-state-metrics-0" Jan 31 07:55:39 crc kubenswrapper[4826]: I0131 07:55:39.662983 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9qmv\" (UniqueName: \"kubernetes.io/projected/ae3a26b8-4b55-49a9-90a6-e66bb00f1425-kube-api-access-m9qmv\") pod \"kube-state-metrics-0\" (UID: \"ae3a26b8-4b55-49a9-90a6-e66bb00f1425\") " pod="openstack/kube-state-metrics-0" Jan 31 07:55:39 crc kubenswrapper[4826]: I0131 07:55:39.678587 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:55:39 crc kubenswrapper[4826]: I0131 07:55:39.678937 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="529222c0-d23d-4cb3-b410-ecbeada1454d" containerName="ceilometer-central-agent" containerID="cri-o://2e331ab7d836704e1ac8a6e489b6da6a1ed4aa2e85372659ce62d0f58ee99a87" gracePeriod=30 Jan 31 07:55:39 crc kubenswrapper[4826]: I0131 07:55:39.679064 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="529222c0-d23d-4cb3-b410-ecbeada1454d" containerName="ceilometer-notification-agent" containerID="cri-o://9a2a5e9e415f06e85eaaaafb5540e25f36feccfd015f9e51d464bda14c265105" gracePeriod=30 Jan 31 07:55:39 crc kubenswrapper[4826]: I0131 07:55:39.679008 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="529222c0-d23d-4cb3-b410-ecbeada1454d" containerName="proxy-httpd" containerID="cri-o://9edb276bcf73a5fa1026194be539d0c5e076acd9f63346341f451d1a0790e35c" gracePeriod=30 Jan 31 07:55:39 crc kubenswrapper[4826]: I0131 07:55:39.679043 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="529222c0-d23d-4cb3-b410-ecbeada1454d" containerName="sg-core" containerID="cri-o://2f1acc3bcff852d0ddf6dda90adabf339139977f3cdd14065e86b8e402c4270e" gracePeriod=30 Jan 31 07:55:39 crc kubenswrapper[4826]: I0131 07:55:39.772407 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 31 07:55:40 crc kubenswrapper[4826]: I0131 07:55:40.277537 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 07:55:40 crc kubenswrapper[4826]: I0131 07:55:40.286577 4826 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 07:55:40 crc kubenswrapper[4826]: I0131 07:55:40.371537 4826 generic.go:334] "Generic (PLEG): container finished" podID="529222c0-d23d-4cb3-b410-ecbeada1454d" containerID="9edb276bcf73a5fa1026194be539d0c5e076acd9f63346341f451d1a0790e35c" exitCode=0 Jan 31 07:55:40 crc kubenswrapper[4826]: I0131 07:55:40.371580 4826 generic.go:334] "Generic (PLEG): container finished" podID="529222c0-d23d-4cb3-b410-ecbeada1454d" containerID="2f1acc3bcff852d0ddf6dda90adabf339139977f3cdd14065e86b8e402c4270e" exitCode=2 Jan 31 07:55:40 crc kubenswrapper[4826]: I0131 07:55:40.371591 4826 generic.go:334] "Generic (PLEG): container finished" podID="529222c0-d23d-4cb3-b410-ecbeada1454d" containerID="2e331ab7d836704e1ac8a6e489b6da6a1ed4aa2e85372659ce62d0f58ee99a87" exitCode=0 Jan 31 07:55:40 crc kubenswrapper[4826]: I0131 07:55:40.371610 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"529222c0-d23d-4cb3-b410-ecbeada1454d","Type":"ContainerDied","Data":"9edb276bcf73a5fa1026194be539d0c5e076acd9f63346341f451d1a0790e35c"} Jan 31 07:55:40 crc kubenswrapper[4826]: I0131 07:55:40.371656 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"529222c0-d23d-4cb3-b410-ecbeada1454d","Type":"ContainerDied","Data":"2f1acc3bcff852d0ddf6dda90adabf339139977f3cdd14065e86b8e402c4270e"} Jan 31 07:55:40 crc kubenswrapper[4826]: I0131 07:55:40.371672 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"529222c0-d23d-4cb3-b410-ecbeada1454d","Type":"ContainerDied","Data":"2e331ab7d836704e1ac8a6e489b6da6a1ed4aa2e85372659ce62d0f58ee99a87"} Jan 31 07:55:40 crc kubenswrapper[4826]: I0131 07:55:40.376999 4826 generic.go:334] "Generic (PLEG): container finished" podID="0a92c32b-6fbb-4dc2-ba1f-bd86d3e7ae6d" containerID="ed265d8295542c4f3fd1aec88201000a7f01ee6bc43547ee8bb8b7d2c1bc1b1f" exitCode=0 Jan 31 07:55:40 crc kubenswrapper[4826]: I0131 07:55:40.377296 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0a92c32b-6fbb-4dc2-ba1f-bd86d3e7ae6d","Type":"ContainerDied","Data":"ed265d8295542c4f3fd1aec88201000a7f01ee6bc43547ee8bb8b7d2c1bc1b1f"} Jan 31 07:55:40 crc kubenswrapper[4826]: I0131 07:55:40.385799 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ae3a26b8-4b55-49a9-90a6-e66bb00f1425","Type":"ContainerStarted","Data":"dfa8bd906a045a7b5392678240d44bb97403aeb770be4817d4660083b39062bc"} Jan 31 07:55:40 crc kubenswrapper[4826]: I0131 07:55:40.388398 4826 generic.go:334] "Generic (PLEG): container finished" podID="998defd1-79f9-46b1-8b4f-00b2d99b97c2" containerID="1e53991d01968c0805db1522b47f5d927b04f0871f4e3e74b063cf47ec01dbe5" exitCode=0 Jan 31 07:55:40 crc kubenswrapper[4826]: I0131 07:55:40.388462 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"998defd1-79f9-46b1-8b4f-00b2d99b97c2","Type":"ContainerDied","Data":"1e53991d01968c0805db1522b47f5d927b04f0871f4e3e74b063cf47ec01dbe5"} Jan 31 07:55:40 crc kubenswrapper[4826]: I0131 07:55:40.463461 4826 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 31 07:55:40 crc kubenswrapper[4826]: I0131 07:55:40.554097 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/998defd1-79f9-46b1-8b4f-00b2d99b97c2-config-data\") pod \"998defd1-79f9-46b1-8b4f-00b2d99b97c2\" (UID: \"998defd1-79f9-46b1-8b4f-00b2d99b97c2\") " Jan 31 07:55:40 crc kubenswrapper[4826]: I0131 07:55:40.554154 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7cjg\" (UniqueName: \"kubernetes.io/projected/998defd1-79f9-46b1-8b4f-00b2d99b97c2-kube-api-access-x7cjg\") pod \"998defd1-79f9-46b1-8b4f-00b2d99b97c2\" (UID: \"998defd1-79f9-46b1-8b4f-00b2d99b97c2\") " Jan 31 07:55:40 crc kubenswrapper[4826]: I0131 07:55:40.554182 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/998defd1-79f9-46b1-8b4f-00b2d99b97c2-combined-ca-bundle\") pod \"998defd1-79f9-46b1-8b4f-00b2d99b97c2\" (UID: \"998defd1-79f9-46b1-8b4f-00b2d99b97c2\") " Jan 31 07:55:40 crc kubenswrapper[4826]: I0131 07:55:40.554286 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/998defd1-79f9-46b1-8b4f-00b2d99b97c2-logs\") pod \"998defd1-79f9-46b1-8b4f-00b2d99b97c2\" (UID: \"998defd1-79f9-46b1-8b4f-00b2d99b97c2\") " Jan 31 07:55:40 crc kubenswrapper[4826]: I0131 07:55:40.555820 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/998defd1-79f9-46b1-8b4f-00b2d99b97c2-logs" (OuterVolumeSpecName: "logs") pod "998defd1-79f9-46b1-8b4f-00b2d99b97c2" (UID: "998defd1-79f9-46b1-8b4f-00b2d99b97c2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:55:40 crc kubenswrapper[4826]: I0131 07:55:40.572715 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/998defd1-79f9-46b1-8b4f-00b2d99b97c2-kube-api-access-x7cjg" (OuterVolumeSpecName: "kube-api-access-x7cjg") pod "998defd1-79f9-46b1-8b4f-00b2d99b97c2" (UID: "998defd1-79f9-46b1-8b4f-00b2d99b97c2"). InnerVolumeSpecName "kube-api-access-x7cjg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:55:40 crc kubenswrapper[4826]: I0131 07:55:40.583928 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/998defd1-79f9-46b1-8b4f-00b2d99b97c2-config-data" (OuterVolumeSpecName: "config-data") pod "998defd1-79f9-46b1-8b4f-00b2d99b97c2" (UID: "998defd1-79f9-46b1-8b4f-00b2d99b97c2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:55:40 crc kubenswrapper[4826]: I0131 07:55:40.590292 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 07:55:40 crc kubenswrapper[4826]: I0131 07:55:40.598081 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/998defd1-79f9-46b1-8b4f-00b2d99b97c2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "998defd1-79f9-46b1-8b4f-00b2d99b97c2" (UID: "998defd1-79f9-46b1-8b4f-00b2d99b97c2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:55:40 crc kubenswrapper[4826]: I0131 07:55:40.657047 4826 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/998defd1-79f9-46b1-8b4f-00b2d99b97c2-logs\") on node \"crc\" DevicePath \"\"" Jan 31 07:55:40 crc kubenswrapper[4826]: I0131 07:55:40.657085 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/998defd1-79f9-46b1-8b4f-00b2d99b97c2-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:55:40 crc kubenswrapper[4826]: I0131 07:55:40.657101 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7cjg\" (UniqueName: \"kubernetes.io/projected/998defd1-79f9-46b1-8b4f-00b2d99b97c2-kube-api-access-x7cjg\") on node \"crc\" DevicePath \"\"" Jan 31 07:55:40 crc kubenswrapper[4826]: I0131 07:55:40.657117 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/998defd1-79f9-46b1-8b4f-00b2d99b97c2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:55:40 crc kubenswrapper[4826]: I0131 07:55:40.696779 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-llhxq" Jan 31 07:55:40 crc kubenswrapper[4826]: I0131 07:55:40.760827 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a92c32b-6fbb-4dc2-ba1f-bd86d3e7ae6d-combined-ca-bundle\") pod \"0a92c32b-6fbb-4dc2-ba1f-bd86d3e7ae6d\" (UID: \"0a92c32b-6fbb-4dc2-ba1f-bd86d3e7ae6d\") " Jan 31 07:55:40 crc kubenswrapper[4826]: I0131 07:55:40.761099 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a92c32b-6fbb-4dc2-ba1f-bd86d3e7ae6d-config-data\") pod \"0a92c32b-6fbb-4dc2-ba1f-bd86d3e7ae6d\" (UID: \"0a92c32b-6fbb-4dc2-ba1f-bd86d3e7ae6d\") " Jan 31 07:55:40 crc kubenswrapper[4826]: I0131 07:55:40.761150 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7btr4\" (UniqueName: \"kubernetes.io/projected/0a92c32b-6fbb-4dc2-ba1f-bd86d3e7ae6d-kube-api-access-7btr4\") pod \"0a92c32b-6fbb-4dc2-ba1f-bd86d3e7ae6d\" (UID: \"0a92c32b-6fbb-4dc2-ba1f-bd86d3e7ae6d\") " Jan 31 07:55:40 crc kubenswrapper[4826]: I0131 07:55:40.765232 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a92c32b-6fbb-4dc2-ba1f-bd86d3e7ae6d-kube-api-access-7btr4" (OuterVolumeSpecName: "kube-api-access-7btr4") pod "0a92c32b-6fbb-4dc2-ba1f-bd86d3e7ae6d" (UID: "0a92c32b-6fbb-4dc2-ba1f-bd86d3e7ae6d"). InnerVolumeSpecName "kube-api-access-7btr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:55:40 crc kubenswrapper[4826]: I0131 07:55:40.784742 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a92c32b-6fbb-4dc2-ba1f-bd86d3e7ae6d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a92c32b-6fbb-4dc2-ba1f-bd86d3e7ae6d" (UID: "0a92c32b-6fbb-4dc2-ba1f-bd86d3e7ae6d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:55:40 crc kubenswrapper[4826]: I0131 07:55:40.791549 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a92c32b-6fbb-4dc2-ba1f-bd86d3e7ae6d-config-data" (OuterVolumeSpecName: "config-data") pod "0a92c32b-6fbb-4dc2-ba1f-bd86d3e7ae6d" (UID: "0a92c32b-6fbb-4dc2-ba1f-bd86d3e7ae6d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:55:40 crc kubenswrapper[4826]: I0131 07:55:40.820005 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d75dafb0-9882-4d3a-8c85-7be1dc197924" path="/var/lib/kubelet/pods/d75dafb0-9882-4d3a-8c85-7be1dc197924/volumes" Jan 31 07:55:40 crc kubenswrapper[4826]: I0131 07:55:40.863334 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a5eb55e-27a4-4e01-b087-590ba6ff5421-config-data\") pod \"2a5eb55e-27a4-4e01-b087-590ba6ff5421\" (UID: \"2a5eb55e-27a4-4e01-b087-590ba6ff5421\") " Jan 31 07:55:40 crc kubenswrapper[4826]: I0131 07:55:40.863394 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a5eb55e-27a4-4e01-b087-590ba6ff5421-combined-ca-bundle\") pod \"2a5eb55e-27a4-4e01-b087-590ba6ff5421\" (UID: \"2a5eb55e-27a4-4e01-b087-590ba6ff5421\") " Jan 31 07:55:40 crc kubenswrapper[4826]: I0131 07:55:40.863423 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a5eb55e-27a4-4e01-b087-590ba6ff5421-scripts\") pod \"2a5eb55e-27a4-4e01-b087-590ba6ff5421\" (UID: \"2a5eb55e-27a4-4e01-b087-590ba6ff5421\") " Jan 31 07:55:40 crc kubenswrapper[4826]: I0131 07:55:40.863455 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qx6kc\" (UniqueName: \"kubernetes.io/projected/2a5eb55e-27a4-4e01-b087-590ba6ff5421-kube-api-access-qx6kc\") pod \"2a5eb55e-27a4-4e01-b087-590ba6ff5421\" (UID: \"2a5eb55e-27a4-4e01-b087-590ba6ff5421\") " Jan 31 07:55:40 crc kubenswrapper[4826]: I0131 07:55:40.863886 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a92c32b-6fbb-4dc2-ba1f-bd86d3e7ae6d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:55:40 crc kubenswrapper[4826]: I0131 07:55:40.863905 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a92c32b-6fbb-4dc2-ba1f-bd86d3e7ae6d-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:55:40 crc kubenswrapper[4826]: I0131 07:55:40.863917 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7btr4\" (UniqueName: \"kubernetes.io/projected/0a92c32b-6fbb-4dc2-ba1f-bd86d3e7ae6d-kube-api-access-7btr4\") on node \"crc\" DevicePath \"\"" Jan 31 07:55:40 crc kubenswrapper[4826]: I0131 07:55:40.872798 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a5eb55e-27a4-4e01-b087-590ba6ff5421-scripts" (OuterVolumeSpecName: "scripts") pod "2a5eb55e-27a4-4e01-b087-590ba6ff5421" (UID: "2a5eb55e-27a4-4e01-b087-590ba6ff5421"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:55:40 crc kubenswrapper[4826]: I0131 07:55:40.873131 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a5eb55e-27a4-4e01-b087-590ba6ff5421-kube-api-access-qx6kc" (OuterVolumeSpecName: "kube-api-access-qx6kc") pod "2a5eb55e-27a4-4e01-b087-590ba6ff5421" (UID: "2a5eb55e-27a4-4e01-b087-590ba6ff5421"). InnerVolumeSpecName "kube-api-access-qx6kc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:55:40 crc kubenswrapper[4826]: I0131 07:55:40.894460 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a5eb55e-27a4-4e01-b087-590ba6ff5421-config-data" (OuterVolumeSpecName: "config-data") pod "2a5eb55e-27a4-4e01-b087-590ba6ff5421" (UID: "2a5eb55e-27a4-4e01-b087-590ba6ff5421"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:55:40 crc kubenswrapper[4826]: I0131 07:55:40.905173 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a5eb55e-27a4-4e01-b087-590ba6ff5421-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a5eb55e-27a4-4e01-b087-590ba6ff5421" (UID: "2a5eb55e-27a4-4e01-b087-590ba6ff5421"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:55:40 crc kubenswrapper[4826]: I0131 07:55:40.965747 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a5eb55e-27a4-4e01-b087-590ba6ff5421-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:55:40 crc kubenswrapper[4826]: I0131 07:55:40.965945 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a5eb55e-27a4-4e01-b087-590ba6ff5421-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:55:40 crc kubenswrapper[4826]: I0131 07:55:40.966133 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a5eb55e-27a4-4e01-b087-590ba6ff5421-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:55:40 crc kubenswrapper[4826]: I0131 07:55:40.966159 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qx6kc\" (UniqueName: \"kubernetes.io/projected/2a5eb55e-27a4-4e01-b087-590ba6ff5421-kube-api-access-qx6kc\") on node \"crc\" DevicePath \"\"" Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.405915 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-llhxq" event={"ID":"2a5eb55e-27a4-4e01-b087-590ba6ff5421","Type":"ContainerDied","Data":"ef1bb759f3a7d310faad2e6d467ef563f55b0b69bfab17fc3e493b4f8ab42791"} Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.405985 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef1bb759f3a7d310faad2e6d467ef563f55b0b69bfab17fc3e493b4f8ab42791" Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.405944 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-llhxq" Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.408862 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0a92c32b-6fbb-4dc2-ba1f-bd86d3e7ae6d","Type":"ContainerDied","Data":"8673faa64e3d5a8b0e5d07f425e6d7ed3e1148949fa755a6d6e13fd572ed407f"} Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.409441 4826 scope.go:117] "RemoveContainer" containerID="ed265d8295542c4f3fd1aec88201000a7f01ee6bc43547ee8bb8b7d2c1bc1b1f" Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.408887 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.415903 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"998defd1-79f9-46b1-8b4f-00b2d99b97c2","Type":"ContainerDied","Data":"630cb3856c842fff81d92643e24dad2caf2361fd05f36da4bd979ff52cabf4b9"} Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.415993 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.466466 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 31 07:55:41 crc kubenswrapper[4826]: E0131 07:55:41.466891 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a5eb55e-27a4-4e01-b087-590ba6ff5421" containerName="nova-cell1-conductor-db-sync" Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.466916 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a5eb55e-27a4-4e01-b087-590ba6ff5421" containerName="nova-cell1-conductor-db-sync" Jan 31 07:55:41 crc kubenswrapper[4826]: E0131 07:55:41.466932 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="998defd1-79f9-46b1-8b4f-00b2d99b97c2" containerName="nova-api-api" Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.466942 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="998defd1-79f9-46b1-8b4f-00b2d99b97c2" containerName="nova-api-api" Jan 31 07:55:41 crc kubenswrapper[4826]: E0131 07:55:41.466990 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="998defd1-79f9-46b1-8b4f-00b2d99b97c2" containerName="nova-api-log" Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.467001 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="998defd1-79f9-46b1-8b4f-00b2d99b97c2" containerName="nova-api-log" Jan 31 07:55:41 crc kubenswrapper[4826]: E0131 07:55:41.467016 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a92c32b-6fbb-4dc2-ba1f-bd86d3e7ae6d" containerName="nova-scheduler-scheduler" Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.467026 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a92c32b-6fbb-4dc2-ba1f-bd86d3e7ae6d" containerName="nova-scheduler-scheduler" Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.467241 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a92c32b-6fbb-4dc2-ba1f-bd86d3e7ae6d" containerName="nova-scheduler-scheduler" Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.467269 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="998defd1-79f9-46b1-8b4f-00b2d99b97c2" containerName="nova-api-api" Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.467290 4826 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2a5eb55e-27a4-4e01-b087-590ba6ff5421" containerName="nova-cell1-conductor-db-sync" Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.467304 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="998defd1-79f9-46b1-8b4f-00b2d99b97c2" containerName="nova-api-log" Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.467958 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.473258 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.480108 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.520697 4826 scope.go:117] "RemoveContainer" containerID="1e53991d01968c0805db1522b47f5d927b04f0871f4e3e74b063cf47ec01dbe5" Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.539175 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.550447 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.567832 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.574582 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.578249 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fbb554b-47f1-4266-a6ae-c2d43dd2d692-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"7fbb554b-47f1-4266-a6ae-c2d43dd2d692\") " pod="openstack/nova-cell1-conductor-0" Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.578281 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp8hs\" (UniqueName: \"kubernetes.io/projected/7fbb554b-47f1-4266-a6ae-c2d43dd2d692-kube-api-access-jp8hs\") pod \"nova-cell1-conductor-0\" (UID: \"7fbb554b-47f1-4266-a6ae-c2d43dd2d692\") " pod="openstack/nova-cell1-conductor-0" Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.578333 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fbb554b-47f1-4266-a6ae-c2d43dd2d692-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"7fbb554b-47f1-4266-a6ae-c2d43dd2d692\") " pod="openstack/nova-cell1-conductor-0" Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.578820 4826 scope.go:117] "RemoveContainer" containerID="b748ce92079b87e2284f1bae2f4a521ded4a4b62bc7ea9a88cb02e25066bec17" Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.595343 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.596785 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.600227 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.609519 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.612045 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.614700 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.618757 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.628743 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.679671 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9db2a83d-6259-43ed-aa63-d8d57f49c850-config-data\") pod \"nova-api-0\" (UID: \"9db2a83d-6259-43ed-aa63-d8d57f49c850\") " pod="openstack/nova-api-0" Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.679727 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx2pb\" (UniqueName: \"kubernetes.io/projected/9db2a83d-6259-43ed-aa63-d8d57f49c850-kube-api-access-bx2pb\") pod \"nova-api-0\" (UID: \"9db2a83d-6259-43ed-aa63-d8d57f49c850\") " pod="openstack/nova-api-0" Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.679774 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9db2a83d-6259-43ed-aa63-d8d57f49c850-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9db2a83d-6259-43ed-aa63-d8d57f49c850\") " pod="openstack/nova-api-0" Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.679815 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9db2a83d-6259-43ed-aa63-d8d57f49c850-logs\") pod \"nova-api-0\" (UID: \"9db2a83d-6259-43ed-aa63-d8d57f49c850\") " pod="openstack/nova-api-0" Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.679860 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fbb554b-47f1-4266-a6ae-c2d43dd2d692-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"7fbb554b-47f1-4266-a6ae-c2d43dd2d692\") " pod="openstack/nova-cell1-conductor-0" Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.679875 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jp8hs\" (UniqueName: \"kubernetes.io/projected/7fbb554b-47f1-4266-a6ae-c2d43dd2d692-kube-api-access-jp8hs\") pod \"nova-cell1-conductor-0\" (UID: \"7fbb554b-47f1-4266-a6ae-c2d43dd2d692\") " pod="openstack/nova-cell1-conductor-0" Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.679921 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fbb554b-47f1-4266-a6ae-c2d43dd2d692-config-data\") pod \"nova-cell1-conductor-0\" (UID: 
\"7fbb554b-47f1-4266-a6ae-c2d43dd2d692\") " pod="openstack/nova-cell1-conductor-0" Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.685187 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fbb554b-47f1-4266-a6ae-c2d43dd2d692-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"7fbb554b-47f1-4266-a6ae-c2d43dd2d692\") " pod="openstack/nova-cell1-conductor-0" Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.685640 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fbb554b-47f1-4266-a6ae-c2d43dd2d692-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"7fbb554b-47f1-4266-a6ae-c2d43dd2d692\") " pod="openstack/nova-cell1-conductor-0" Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.698476 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jp8hs\" (UniqueName: \"kubernetes.io/projected/7fbb554b-47f1-4266-a6ae-c2d43dd2d692-kube-api-access-jp8hs\") pod \"nova-cell1-conductor-0\" (UID: \"7fbb554b-47f1-4266-a6ae-c2d43dd2d692\") " pod="openstack/nova-cell1-conductor-0" Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.781376 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94b63664-e8d7-4f5d-aac0-f00bfadfbfe0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"94b63664-e8d7-4f5d-aac0-f00bfadfbfe0\") " pod="openstack/nova-scheduler-0" Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.781450 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9db2a83d-6259-43ed-aa63-d8d57f49c850-config-data\") pod \"nova-api-0\" (UID: \"9db2a83d-6259-43ed-aa63-d8d57f49c850\") " pod="openstack/nova-api-0" Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.781474 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv9n9\" (UniqueName: \"kubernetes.io/projected/94b63664-e8d7-4f5d-aac0-f00bfadfbfe0-kube-api-access-lv9n9\") pod \"nova-scheduler-0\" (UID: \"94b63664-e8d7-4f5d-aac0-f00bfadfbfe0\") " pod="openstack/nova-scheduler-0" Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.781502 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bx2pb\" (UniqueName: \"kubernetes.io/projected/9db2a83d-6259-43ed-aa63-d8d57f49c850-kube-api-access-bx2pb\") pod \"nova-api-0\" (UID: \"9db2a83d-6259-43ed-aa63-d8d57f49c850\") " pod="openstack/nova-api-0" Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.781989 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9db2a83d-6259-43ed-aa63-d8d57f49c850-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9db2a83d-6259-43ed-aa63-d8d57f49c850\") " pod="openstack/nova-api-0" Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.782083 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94b63664-e8d7-4f5d-aac0-f00bfadfbfe0-config-data\") pod \"nova-scheduler-0\" (UID: \"94b63664-e8d7-4f5d-aac0-f00bfadfbfe0\") " pod="openstack/nova-scheduler-0" Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.782186 4826 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9db2a83d-6259-43ed-aa63-d8d57f49c850-logs\") pod \"nova-api-0\" (UID: \"9db2a83d-6259-43ed-aa63-d8d57f49c850\") " pod="openstack/nova-api-0" Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.782607 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9db2a83d-6259-43ed-aa63-d8d57f49c850-logs\") pod \"nova-api-0\" (UID: \"9db2a83d-6259-43ed-aa63-d8d57f49c850\") " pod="openstack/nova-api-0" Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.784715 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9db2a83d-6259-43ed-aa63-d8d57f49c850-config-data\") pod \"nova-api-0\" (UID: \"9db2a83d-6259-43ed-aa63-d8d57f49c850\") " pod="openstack/nova-api-0" Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.787681 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9db2a83d-6259-43ed-aa63-d8d57f49c850-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9db2a83d-6259-43ed-aa63-d8d57f49c850\") " pod="openstack/nova-api-0" Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.798061 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bx2pb\" (UniqueName: \"kubernetes.io/projected/9db2a83d-6259-43ed-aa63-d8d57f49c850-kube-api-access-bx2pb\") pod \"nova-api-0\" (UID: \"9db2a83d-6259-43ed-aa63-d8d57f49c850\") " pod="openstack/nova-api-0" Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.830455 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.884374 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94b63664-e8d7-4f5d-aac0-f00bfadfbfe0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"94b63664-e8d7-4f5d-aac0-f00bfadfbfe0\") " pod="openstack/nova-scheduler-0" Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.884432 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lv9n9\" (UniqueName: \"kubernetes.io/projected/94b63664-e8d7-4f5d-aac0-f00bfadfbfe0-kube-api-access-lv9n9\") pod \"nova-scheduler-0\" (UID: \"94b63664-e8d7-4f5d-aac0-f00bfadfbfe0\") " pod="openstack/nova-scheduler-0" Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.884489 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94b63664-e8d7-4f5d-aac0-f00bfadfbfe0-config-data\") pod \"nova-scheduler-0\" (UID: \"94b63664-e8d7-4f5d-aac0-f00bfadfbfe0\") " pod="openstack/nova-scheduler-0" Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.890602 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94b63664-e8d7-4f5d-aac0-f00bfadfbfe0-config-data\") pod \"nova-scheduler-0\" (UID: \"94b63664-e8d7-4f5d-aac0-f00bfadfbfe0\") " pod="openstack/nova-scheduler-0" Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.894384 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94b63664-e8d7-4f5d-aac0-f00bfadfbfe0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: 
\"94b63664-e8d7-4f5d-aac0-f00bfadfbfe0\") " pod="openstack/nova-scheduler-0" Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.913806 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lv9n9\" (UniqueName: \"kubernetes.io/projected/94b63664-e8d7-4f5d-aac0-f00bfadfbfe0-kube-api-access-lv9n9\") pod \"nova-scheduler-0\" (UID: \"94b63664-e8d7-4f5d-aac0-f00bfadfbfe0\") " pod="openstack/nova-scheduler-0" Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.927385 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 31 07:55:41 crc kubenswrapper[4826]: I0131 07:55:41.932181 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 07:55:42 crc kubenswrapper[4826]: I0131 07:55:42.008886 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 31 07:55:42 crc kubenswrapper[4826]: I0131 07:55:42.009239 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 31 07:55:42 crc kubenswrapper[4826]: I0131 07:55:42.437127 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 31 07:55:42 crc kubenswrapper[4826]: W0131 07:55:42.443695 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7fbb554b_47f1_4266_a6ae_c2d43dd2d692.slice/crio-eb6199823ba02480c294e6b7a3dfe38ecc45bf4d650520c2d7d090d0111e54dc WatchSource:0}: Error finding container eb6199823ba02480c294e6b7a3dfe38ecc45bf4d650520c2d7d090d0111e54dc: Status 404 returned error can't find the container with id eb6199823ba02480c294e6b7a3dfe38ecc45bf4d650520c2d7d090d0111e54dc Jan 31 07:55:42 crc kubenswrapper[4826]: I0131 07:55:42.450109 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ae3a26b8-4b55-49a9-90a6-e66bb00f1425","Type":"ContainerStarted","Data":"b7e5260f3088e207aba5f77f9450f540f2531af58813ab5c0ca3445d5dc4515c"} Jan 31 07:55:42 crc kubenswrapper[4826]: I0131 07:55:42.450496 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 31 07:55:42 crc kubenswrapper[4826]: I0131 07:55:42.476588 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.386606201 podStartE2EDuration="3.476563936s" podCreationTimestamp="2026-01-31 07:55:39 +0000 UTC" firstStartedPulling="2026-01-31 07:55:40.286271062 +0000 UTC m=+1172.140157421" lastFinishedPulling="2026-01-31 07:55:41.376228797 +0000 UTC m=+1173.230115156" observedRunningTime="2026-01-31 07:55:42.46576832 +0000 UTC m=+1174.319654699" watchObservedRunningTime="2026-01-31 07:55:42.476563936 +0000 UTC m=+1174.330450305" Jan 31 07:55:42 crc kubenswrapper[4826]: I0131 07:55:42.510700 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 07:55:42 crc kubenswrapper[4826]: I0131 07:55:42.524957 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 07:55:42 crc kubenswrapper[4826]: I0131 07:55:42.819031 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a92c32b-6fbb-4dc2-ba1f-bd86d3e7ae6d" path="/var/lib/kubelet/pods/0a92c32b-6fbb-4dc2-ba1f-bd86d3e7ae6d/volumes" Jan 31 07:55:42 crc kubenswrapper[4826]: I0131 07:55:42.819728 4826 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="998defd1-79f9-46b1-8b4f-00b2d99b97c2" path="/var/lib/kubelet/pods/998defd1-79f9-46b1-8b4f-00b2d99b97c2/volumes" Jan 31 07:55:43 crc kubenswrapper[4826]: I0131 07:55:43.464348 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"94b63664-e8d7-4f5d-aac0-f00bfadfbfe0","Type":"ContainerStarted","Data":"b20cd556801543f18675cc8e6c59d8022dc443ed5ac735cd6cb0cd37af7c5119"} Jan 31 07:55:43 crc kubenswrapper[4826]: I0131 07:55:43.464689 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"94b63664-e8d7-4f5d-aac0-f00bfadfbfe0","Type":"ContainerStarted","Data":"4bcb0c00a7e010ba9f8275d7bdb37b8072b270652a2c7dff5065f35f33e3b005"} Jan 31 07:55:43 crc kubenswrapper[4826]: I0131 07:55:43.471534 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9db2a83d-6259-43ed-aa63-d8d57f49c850","Type":"ContainerStarted","Data":"966be199f40971c2bbee1de716771194bbea64d6fb37115c7dfc0034d13d1130"} Jan 31 07:55:43 crc kubenswrapper[4826]: I0131 07:55:43.471577 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9db2a83d-6259-43ed-aa63-d8d57f49c850","Type":"ContainerStarted","Data":"96931690e7c7b317a0dbaf0a060fcea51c08099536605187f28d0b60e45b21de"} Jan 31 07:55:43 crc kubenswrapper[4826]: I0131 07:55:43.471586 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9db2a83d-6259-43ed-aa63-d8d57f49c850","Type":"ContainerStarted","Data":"36fc6e8f4ecf79c44a5e9c93e60daa6625f89dcd604b0a36b101b3b4be2cba5f"} Jan 31 07:55:43 crc kubenswrapper[4826]: I0131 07:55:43.474095 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"7fbb554b-47f1-4266-a6ae-c2d43dd2d692","Type":"ContainerStarted","Data":"77417fb6f556f15d880e1ded556af00009298cbe5fd327db9022de80fca6504d"} Jan 31 07:55:43 crc kubenswrapper[4826]: I0131 07:55:43.474133 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"7fbb554b-47f1-4266-a6ae-c2d43dd2d692","Type":"ContainerStarted","Data":"eb6199823ba02480c294e6b7a3dfe38ecc45bf4d650520c2d7d090d0111e54dc"} Jan 31 07:55:43 crc kubenswrapper[4826]: I0131 07:55:43.474174 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 31 07:55:43 crc kubenswrapper[4826]: I0131 07:55:43.477634 4826 generic.go:334] "Generic (PLEG): container finished" podID="529222c0-d23d-4cb3-b410-ecbeada1454d" containerID="9a2a5e9e415f06e85eaaaafb5540e25f36feccfd015f9e51d464bda14c265105" exitCode=0 Jan 31 07:55:43 crc kubenswrapper[4826]: I0131 07:55:43.477806 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"529222c0-d23d-4cb3-b410-ecbeada1454d","Type":"ContainerDied","Data":"9a2a5e9e415f06e85eaaaafb5540e25f36feccfd015f9e51d464bda14c265105"} Jan 31 07:55:43 crc kubenswrapper[4826]: I0131 07:55:43.493344 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.493326266 podStartE2EDuration="2.493326266s" podCreationTimestamp="2026-01-31 07:55:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:55:43.490552078 +0000 UTC m=+1175.344438447" watchObservedRunningTime="2026-01-31 07:55:43.493326266 +0000 UTC m=+1175.347212625" Jan 31 
07:55:43 crc kubenswrapper[4826]: I0131 07:55:43.511717 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.511703857 podStartE2EDuration="2.511703857s" podCreationTimestamp="2026-01-31 07:55:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:55:43.507131057 +0000 UTC m=+1175.361017416" watchObservedRunningTime="2026-01-31 07:55:43.511703857 +0000 UTC m=+1175.365590216" Jan 31 07:55:43 crc kubenswrapper[4826]: I0131 07:55:43.541281 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.5412607339999997 podStartE2EDuration="2.541260734s" podCreationTimestamp="2026-01-31 07:55:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:55:43.528477342 +0000 UTC m=+1175.382363701" watchObservedRunningTime="2026-01-31 07:55:43.541260734 +0000 UTC m=+1175.395147093" Jan 31 07:55:43 crc kubenswrapper[4826]: I0131 07:55:43.584902 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 07:55:43 crc kubenswrapper[4826]: I0131 07:55:43.732276 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/529222c0-d23d-4cb3-b410-ecbeada1454d-sg-core-conf-yaml\") pod \"529222c0-d23d-4cb3-b410-ecbeada1454d\" (UID: \"529222c0-d23d-4cb3-b410-ecbeada1454d\") " Jan 31 07:55:43 crc kubenswrapper[4826]: I0131 07:55:43.732357 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/529222c0-d23d-4cb3-b410-ecbeada1454d-combined-ca-bundle\") pod \"529222c0-d23d-4cb3-b410-ecbeada1454d\" (UID: \"529222c0-d23d-4cb3-b410-ecbeada1454d\") " Jan 31 07:55:43 crc kubenswrapper[4826]: I0131 07:55:43.732410 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdhjl\" (UniqueName: \"kubernetes.io/projected/529222c0-d23d-4cb3-b410-ecbeada1454d-kube-api-access-pdhjl\") pod \"529222c0-d23d-4cb3-b410-ecbeada1454d\" (UID: \"529222c0-d23d-4cb3-b410-ecbeada1454d\") " Jan 31 07:55:43 crc kubenswrapper[4826]: I0131 07:55:43.732523 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/529222c0-d23d-4cb3-b410-ecbeada1454d-run-httpd\") pod \"529222c0-d23d-4cb3-b410-ecbeada1454d\" (UID: \"529222c0-d23d-4cb3-b410-ecbeada1454d\") " Jan 31 07:55:43 crc kubenswrapper[4826]: I0131 07:55:43.732576 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/529222c0-d23d-4cb3-b410-ecbeada1454d-scripts\") pod \"529222c0-d23d-4cb3-b410-ecbeada1454d\" (UID: \"529222c0-d23d-4cb3-b410-ecbeada1454d\") " Jan 31 07:55:43 crc kubenswrapper[4826]: I0131 07:55:43.732614 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/529222c0-d23d-4cb3-b410-ecbeada1454d-config-data\") pod \"529222c0-d23d-4cb3-b410-ecbeada1454d\" (UID: \"529222c0-d23d-4cb3-b410-ecbeada1454d\") " Jan 31 07:55:43 crc kubenswrapper[4826]: I0131 07:55:43.732667 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/529222c0-d23d-4cb3-b410-ecbeada1454d-log-httpd\") pod \"529222c0-d23d-4cb3-b410-ecbeada1454d\" (UID: \"529222c0-d23d-4cb3-b410-ecbeada1454d\") " Jan 31 07:55:43 crc kubenswrapper[4826]: I0131 07:55:43.733787 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/529222c0-d23d-4cb3-b410-ecbeada1454d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "529222c0-d23d-4cb3-b410-ecbeada1454d" (UID: "529222c0-d23d-4cb3-b410-ecbeada1454d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:55:43 crc kubenswrapper[4826]: I0131 07:55:43.733912 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/529222c0-d23d-4cb3-b410-ecbeada1454d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "529222c0-d23d-4cb3-b410-ecbeada1454d" (UID: "529222c0-d23d-4cb3-b410-ecbeada1454d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:55:43 crc kubenswrapper[4826]: I0131 07:55:43.738145 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/529222c0-d23d-4cb3-b410-ecbeada1454d-scripts" (OuterVolumeSpecName: "scripts") pod "529222c0-d23d-4cb3-b410-ecbeada1454d" (UID: "529222c0-d23d-4cb3-b410-ecbeada1454d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:55:43 crc kubenswrapper[4826]: I0131 07:55:43.742488 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/529222c0-d23d-4cb3-b410-ecbeada1454d-kube-api-access-pdhjl" (OuterVolumeSpecName: "kube-api-access-pdhjl") pod "529222c0-d23d-4cb3-b410-ecbeada1454d" (UID: "529222c0-d23d-4cb3-b410-ecbeada1454d"). InnerVolumeSpecName "kube-api-access-pdhjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:55:43 crc kubenswrapper[4826]: I0131 07:55:43.767517 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/529222c0-d23d-4cb3-b410-ecbeada1454d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "529222c0-d23d-4cb3-b410-ecbeada1454d" (UID: "529222c0-d23d-4cb3-b410-ecbeada1454d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:55:43 crc kubenswrapper[4826]: I0131 07:55:43.821929 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/529222c0-d23d-4cb3-b410-ecbeada1454d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "529222c0-d23d-4cb3-b410-ecbeada1454d" (UID: "529222c0-d23d-4cb3-b410-ecbeada1454d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:55:43 crc kubenswrapper[4826]: I0131 07:55:43.830702 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/529222c0-d23d-4cb3-b410-ecbeada1454d-config-data" (OuterVolumeSpecName: "config-data") pod "529222c0-d23d-4cb3-b410-ecbeada1454d" (UID: "529222c0-d23d-4cb3-b410-ecbeada1454d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:55:43 crc kubenswrapper[4826]: I0131 07:55:43.835108 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/529222c0-d23d-4cb3-b410-ecbeada1454d-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:55:43 crc kubenswrapper[4826]: I0131 07:55:43.835157 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/529222c0-d23d-4cb3-b410-ecbeada1454d-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:55:43 crc kubenswrapper[4826]: I0131 07:55:43.835171 4826 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/529222c0-d23d-4cb3-b410-ecbeada1454d-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 07:55:43 crc kubenswrapper[4826]: I0131 07:55:43.835183 4826 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/529222c0-d23d-4cb3-b410-ecbeada1454d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 07:55:43 crc kubenswrapper[4826]: I0131 07:55:43.835196 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/529222c0-d23d-4cb3-b410-ecbeada1454d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:55:43 crc kubenswrapper[4826]: I0131 07:55:43.835208 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdhjl\" (UniqueName: \"kubernetes.io/projected/529222c0-d23d-4cb3-b410-ecbeada1454d-kube-api-access-pdhjl\") on node \"crc\" DevicePath \"\"" Jan 31 07:55:43 crc kubenswrapper[4826]: I0131 07:55:43.835220 4826 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/529222c0-d23d-4cb3-b410-ecbeada1454d-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 07:55:44 crc kubenswrapper[4826]: I0131 07:55:44.488544 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 07:55:44 crc kubenswrapper[4826]: I0131 07:55:44.488580 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"529222c0-d23d-4cb3-b410-ecbeada1454d","Type":"ContainerDied","Data":"52f3dda433c139b5af37a1d7ed3ea94b384d72a782cf49e990f6c857744ce67a"} Jan 31 07:55:44 crc kubenswrapper[4826]: I0131 07:55:44.489537 4826 scope.go:117] "RemoveContainer" containerID="9edb276bcf73a5fa1026194be539d0c5e076acd9f63346341f451d1a0790e35c" Jan 31 07:55:44 crc kubenswrapper[4826]: I0131 07:55:44.515304 4826 scope.go:117] "RemoveContainer" containerID="2f1acc3bcff852d0ddf6dda90adabf339139977f3cdd14065e86b8e402c4270e" Jan 31 07:55:44 crc kubenswrapper[4826]: I0131 07:55:44.562433 4826 scope.go:117] "RemoveContainer" containerID="9a2a5e9e415f06e85eaaaafb5540e25f36feccfd015f9e51d464bda14c265105" Jan 31 07:55:44 crc kubenswrapper[4826]: I0131 07:55:44.584427 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:55:44 crc kubenswrapper[4826]: I0131 07:55:44.596413 4826 scope.go:117] "RemoveContainer" containerID="2e331ab7d836704e1ac8a6e489b6da6a1ed4aa2e85372659ce62d0f58ee99a87" Jan 31 07:55:44 crc kubenswrapper[4826]: I0131 07:55:44.600413 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:55:44 crc kubenswrapper[4826]: I0131 07:55:44.607962 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:55:44 crc kubenswrapper[4826]: E0131 07:55:44.608402 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="529222c0-d23d-4cb3-b410-ecbeada1454d" containerName="proxy-httpd" Jan 31 07:55:44 crc kubenswrapper[4826]: I0131 07:55:44.608421 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="529222c0-d23d-4cb3-b410-ecbeada1454d" containerName="proxy-httpd" Jan 31 07:55:44 crc kubenswrapper[4826]: E0131 07:55:44.608437 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="529222c0-d23d-4cb3-b410-ecbeada1454d" containerName="ceilometer-central-agent" Jan 31 07:55:44 crc kubenswrapper[4826]: I0131 07:55:44.608443 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="529222c0-d23d-4cb3-b410-ecbeada1454d" containerName="ceilometer-central-agent" Jan 31 07:55:44 crc kubenswrapper[4826]: E0131 07:55:44.608457 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="529222c0-d23d-4cb3-b410-ecbeada1454d" containerName="ceilometer-notification-agent" Jan 31 07:55:44 crc kubenswrapper[4826]: I0131 07:55:44.608465 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="529222c0-d23d-4cb3-b410-ecbeada1454d" containerName="ceilometer-notification-agent" Jan 31 07:55:44 crc kubenswrapper[4826]: E0131 07:55:44.608486 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="529222c0-d23d-4cb3-b410-ecbeada1454d" containerName="sg-core" Jan 31 07:55:44 crc kubenswrapper[4826]: I0131 07:55:44.608492 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="529222c0-d23d-4cb3-b410-ecbeada1454d" containerName="sg-core" Jan 31 07:55:44 crc kubenswrapper[4826]: I0131 07:55:44.608662 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="529222c0-d23d-4cb3-b410-ecbeada1454d" containerName="ceilometer-notification-agent" Jan 31 07:55:44 crc kubenswrapper[4826]: I0131 07:55:44.608680 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="529222c0-d23d-4cb3-b410-ecbeada1454d" containerName="ceilometer-central-agent" Jan 31 
07:55:44 crc kubenswrapper[4826]: I0131 07:55:44.608695 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="529222c0-d23d-4cb3-b410-ecbeada1454d" containerName="sg-core" Jan 31 07:55:44 crc kubenswrapper[4826]: I0131 07:55:44.608704 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="529222c0-d23d-4cb3-b410-ecbeada1454d" containerName="proxy-httpd" Jan 31 07:55:44 crc kubenswrapper[4826]: I0131 07:55:44.610567 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 07:55:44 crc kubenswrapper[4826]: I0131 07:55:44.613183 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 07:55:44 crc kubenswrapper[4826]: I0131 07:55:44.613607 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 31 07:55:44 crc kubenswrapper[4826]: I0131 07:55:44.614893 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 07:55:44 crc kubenswrapper[4826]: I0131 07:55:44.619179 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:55:44 crc kubenswrapper[4826]: I0131 07:55:44.755914 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5ad6fa82-7626-4ba2-a206-eb91edcfb0bb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5ad6fa82-7626-4ba2-a206-eb91edcfb0bb\") " pod="openstack/ceilometer-0" Jan 31 07:55:44 crc kubenswrapper[4826]: I0131 07:55:44.756082 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ad6fa82-7626-4ba2-a206-eb91edcfb0bb-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5ad6fa82-7626-4ba2-a206-eb91edcfb0bb\") " pod="openstack/ceilometer-0" Jan 31 07:55:44 crc kubenswrapper[4826]: I0131 07:55:44.756219 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ad6fa82-7626-4ba2-a206-eb91edcfb0bb-config-data\") pod \"ceilometer-0\" (UID: \"5ad6fa82-7626-4ba2-a206-eb91edcfb0bb\") " pod="openstack/ceilometer-0" Jan 31 07:55:44 crc kubenswrapper[4826]: I0131 07:55:44.756283 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ad6fa82-7626-4ba2-a206-eb91edcfb0bb-scripts\") pod \"ceilometer-0\" (UID: \"5ad6fa82-7626-4ba2-a206-eb91edcfb0bb\") " pod="openstack/ceilometer-0" Jan 31 07:55:44 crc kubenswrapper[4826]: I0131 07:55:44.756323 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ad6fa82-7626-4ba2-a206-eb91edcfb0bb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5ad6fa82-7626-4ba2-a206-eb91edcfb0bb\") " pod="openstack/ceilometer-0" Jan 31 07:55:44 crc kubenswrapper[4826]: I0131 07:55:44.756365 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8dpk\" (UniqueName: \"kubernetes.io/projected/5ad6fa82-7626-4ba2-a206-eb91edcfb0bb-kube-api-access-t8dpk\") pod \"ceilometer-0\" (UID: \"5ad6fa82-7626-4ba2-a206-eb91edcfb0bb\") " pod="openstack/ceilometer-0" Jan 31 07:55:44 crc kubenswrapper[4826]: I0131 07:55:44.756433 4826 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ad6fa82-7626-4ba2-a206-eb91edcfb0bb-log-httpd\") pod \"ceilometer-0\" (UID: \"5ad6fa82-7626-4ba2-a206-eb91edcfb0bb\") " pod="openstack/ceilometer-0" Jan 31 07:55:44 crc kubenswrapper[4826]: I0131 07:55:44.756583 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ad6fa82-7626-4ba2-a206-eb91edcfb0bb-run-httpd\") pod \"ceilometer-0\" (UID: \"5ad6fa82-7626-4ba2-a206-eb91edcfb0bb\") " pod="openstack/ceilometer-0" Jan 31 07:55:44 crc kubenswrapper[4826]: I0131 07:55:44.827489 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="529222c0-d23d-4cb3-b410-ecbeada1454d" path="/var/lib/kubelet/pods/529222c0-d23d-4cb3-b410-ecbeada1454d/volumes" Jan 31 07:55:44 crc kubenswrapper[4826]: I0131 07:55:44.858126 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5ad6fa82-7626-4ba2-a206-eb91edcfb0bb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5ad6fa82-7626-4ba2-a206-eb91edcfb0bb\") " pod="openstack/ceilometer-0" Jan 31 07:55:44 crc kubenswrapper[4826]: I0131 07:55:44.858168 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ad6fa82-7626-4ba2-a206-eb91edcfb0bb-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5ad6fa82-7626-4ba2-a206-eb91edcfb0bb\") " pod="openstack/ceilometer-0" Jan 31 07:55:44 crc kubenswrapper[4826]: I0131 07:55:44.858205 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ad6fa82-7626-4ba2-a206-eb91edcfb0bb-config-data\") pod \"ceilometer-0\" (UID: \"5ad6fa82-7626-4ba2-a206-eb91edcfb0bb\") " pod="openstack/ceilometer-0" Jan 31 07:55:44 crc kubenswrapper[4826]: I0131 07:55:44.858227 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ad6fa82-7626-4ba2-a206-eb91edcfb0bb-scripts\") pod \"ceilometer-0\" (UID: \"5ad6fa82-7626-4ba2-a206-eb91edcfb0bb\") " pod="openstack/ceilometer-0" Jan 31 07:55:44 crc kubenswrapper[4826]: I0131 07:55:44.858246 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ad6fa82-7626-4ba2-a206-eb91edcfb0bb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5ad6fa82-7626-4ba2-a206-eb91edcfb0bb\") " pod="openstack/ceilometer-0" Jan 31 07:55:44 crc kubenswrapper[4826]: I0131 07:55:44.858268 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8dpk\" (UniqueName: \"kubernetes.io/projected/5ad6fa82-7626-4ba2-a206-eb91edcfb0bb-kube-api-access-t8dpk\") pod \"ceilometer-0\" (UID: \"5ad6fa82-7626-4ba2-a206-eb91edcfb0bb\") " pod="openstack/ceilometer-0" Jan 31 07:55:44 crc kubenswrapper[4826]: I0131 07:55:44.858297 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ad6fa82-7626-4ba2-a206-eb91edcfb0bb-log-httpd\") pod \"ceilometer-0\" (UID: \"5ad6fa82-7626-4ba2-a206-eb91edcfb0bb\") " pod="openstack/ceilometer-0" Jan 31 07:55:44 crc kubenswrapper[4826]: I0131 07:55:44.858339 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ad6fa82-7626-4ba2-a206-eb91edcfb0bb-run-httpd\") pod \"ceilometer-0\" (UID: \"5ad6fa82-7626-4ba2-a206-eb91edcfb0bb\") " pod="openstack/ceilometer-0" Jan 31 07:55:44 crc kubenswrapper[4826]: I0131 07:55:44.859479 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ad6fa82-7626-4ba2-a206-eb91edcfb0bb-run-httpd\") pod \"ceilometer-0\" (UID: \"5ad6fa82-7626-4ba2-a206-eb91edcfb0bb\") " pod="openstack/ceilometer-0" Jan 31 07:55:44 crc kubenswrapper[4826]: I0131 07:55:44.860217 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ad6fa82-7626-4ba2-a206-eb91edcfb0bb-log-httpd\") pod \"ceilometer-0\" (UID: \"5ad6fa82-7626-4ba2-a206-eb91edcfb0bb\") " pod="openstack/ceilometer-0" Jan 31 07:55:44 crc kubenswrapper[4826]: I0131 07:55:44.862562 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ad6fa82-7626-4ba2-a206-eb91edcfb0bb-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5ad6fa82-7626-4ba2-a206-eb91edcfb0bb\") " pod="openstack/ceilometer-0" Jan 31 07:55:44 crc kubenswrapper[4826]: I0131 07:55:44.864057 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ad6fa82-7626-4ba2-a206-eb91edcfb0bb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5ad6fa82-7626-4ba2-a206-eb91edcfb0bb\") " pod="openstack/ceilometer-0" Jan 31 07:55:44 crc kubenswrapper[4826]: I0131 07:55:44.866453 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ad6fa82-7626-4ba2-a206-eb91edcfb0bb-config-data\") pod \"ceilometer-0\" (UID: \"5ad6fa82-7626-4ba2-a206-eb91edcfb0bb\") " pod="openstack/ceilometer-0" Jan 31 07:55:44 crc kubenswrapper[4826]: I0131 07:55:44.873899 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5ad6fa82-7626-4ba2-a206-eb91edcfb0bb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5ad6fa82-7626-4ba2-a206-eb91edcfb0bb\") " pod="openstack/ceilometer-0" Jan 31 07:55:44 crc kubenswrapper[4826]: I0131 07:55:44.879388 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ad6fa82-7626-4ba2-a206-eb91edcfb0bb-scripts\") pod \"ceilometer-0\" (UID: \"5ad6fa82-7626-4ba2-a206-eb91edcfb0bb\") " pod="openstack/ceilometer-0" Jan 31 07:55:44 crc kubenswrapper[4826]: I0131 07:55:44.880879 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8dpk\" (UniqueName: \"kubernetes.io/projected/5ad6fa82-7626-4ba2-a206-eb91edcfb0bb-kube-api-access-t8dpk\") pod \"ceilometer-0\" (UID: \"5ad6fa82-7626-4ba2-a206-eb91edcfb0bb\") " pod="openstack/ceilometer-0" Jan 31 07:55:44 crc kubenswrapper[4826]: I0131 07:55:44.938385 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 07:55:45 crc kubenswrapper[4826]: I0131 07:55:45.402865 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:55:45 crc kubenswrapper[4826]: I0131 07:55:45.496911 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ad6fa82-7626-4ba2-a206-eb91edcfb0bb","Type":"ContainerStarted","Data":"f11c5530cc9f7b3bbe9ff003124218178c2bd7497670dcad7c613497374551c1"} Jan 31 07:55:46 crc kubenswrapper[4826]: I0131 07:55:46.512960 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ad6fa82-7626-4ba2-a206-eb91edcfb0bb","Type":"ContainerStarted","Data":"e76ffd0ca2e7e7e41c7ba333b88186fe9594b53d3b21aadf52c9a4aa53c5425b"} Jan 31 07:55:46 crc kubenswrapper[4826]: I0131 07:55:46.932981 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 31 07:55:47 crc kubenswrapper[4826]: I0131 07:55:47.008489 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 31 07:55:47 crc kubenswrapper[4826]: I0131 07:55:47.008580 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 31 07:55:47 crc kubenswrapper[4826]: I0131 07:55:47.523532 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ad6fa82-7626-4ba2-a206-eb91edcfb0bb","Type":"ContainerStarted","Data":"a65d4dd109453a3ecf126a6ed60ca5a17df5e1b57493da86861e8d846d5b51d2"} Jan 31 07:55:48 crc kubenswrapper[4826]: I0131 07:55:48.023173 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c3049b66-c738-431f-9c22-ccb9bdd664cd" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.182:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 07:55:48 crc kubenswrapper[4826]: I0131 07:55:48.023187 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c3049b66-c738-431f-9c22-ccb9bdd664cd" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.182:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 07:55:49 crc kubenswrapper[4826]: I0131 07:55:49.553585 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ad6fa82-7626-4ba2-a206-eb91edcfb0bb","Type":"ContainerStarted","Data":"211bf6f0c18c338a9c29abc66838f05ece3ab3d331490cbdedf5f2c5afe6ac8b"} Jan 31 07:55:49 crc kubenswrapper[4826]: I0131 07:55:49.800944 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 31 07:55:51 crc kubenswrapper[4826]: I0131 07:55:51.575574 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ad6fa82-7626-4ba2-a206-eb91edcfb0bb","Type":"ContainerStarted","Data":"92464dad62f4dbcff081306ee47c38bb3b2c9500d2143b8d4e36ad72780fd699"} Jan 31 07:55:51 crc kubenswrapper[4826]: I0131 07:55:51.576232 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 07:55:51 crc kubenswrapper[4826]: I0131 07:55:51.604900 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.753212781 podStartE2EDuration="7.604878956s" podCreationTimestamp="2026-01-31 
07:55:44 +0000 UTC" firstStartedPulling="2026-01-31 07:55:45.408127046 +0000 UTC m=+1177.262013405" lastFinishedPulling="2026-01-31 07:55:51.259793221 +0000 UTC m=+1183.113679580" observedRunningTime="2026-01-31 07:55:51.59971844 +0000 UTC m=+1183.453604789" watchObservedRunningTime="2026-01-31 07:55:51.604878956 +0000 UTC m=+1183.458765315" Jan 31 07:55:51 crc kubenswrapper[4826]: I0131 07:55:51.879158 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 31 07:55:51 crc kubenswrapper[4826]: I0131 07:55:51.928111 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 31 07:55:51 crc kubenswrapper[4826]: I0131 07:55:51.928201 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 31 07:55:51 crc kubenswrapper[4826]: I0131 07:55:51.933250 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 31 07:55:51 crc kubenswrapper[4826]: I0131 07:55:51.961710 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 31 07:55:52 crc kubenswrapper[4826]: I0131 07:55:52.612271 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 31 07:55:53 crc kubenswrapper[4826]: I0131 07:55:53.011180 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9db2a83d-6259-43ed-aa63-d8d57f49c850" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.185:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 07:55:53 crc kubenswrapper[4826]: I0131 07:55:53.011184 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9db2a83d-6259-43ed-aa63-d8d57f49c850" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.185:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 07:55:57 crc kubenswrapper[4826]: I0131 07:55:57.013288 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 31 07:55:57 crc kubenswrapper[4826]: I0131 07:55:57.015364 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 31 07:55:57 crc kubenswrapper[4826]: I0131 07:55:57.020350 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 31 07:55:57 crc kubenswrapper[4826]: I0131 07:55:57.376683 4826 patch_prober.go:28] interesting pod/machine-config-daemon-8v6ng container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 07:55:57 crc kubenswrapper[4826]: I0131 07:55:57.376766 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 07:55:57 crc kubenswrapper[4826]: I0131 07:55:57.636043 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 31 07:55:59 crc kubenswrapper[4826]: 
I0131 07:55:59.647375 4826 generic.go:334] "Generic (PLEG): container finished" podID="2c0bb381-a294-472b-bd7c-db1a23b96118" containerID="ea8915165064a4c3c900c95cd1e0d492ae7bfdb310b0cab003cf2db2e9053331" exitCode=137 Jan 31 07:55:59 crc kubenswrapper[4826]: I0131 07:55:59.647552 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2c0bb381-a294-472b-bd7c-db1a23b96118","Type":"ContainerDied","Data":"ea8915165064a4c3c900c95cd1e0d492ae7bfdb310b0cab003cf2db2e9053331"} Jan 31 07:55:59 crc kubenswrapper[4826]: I0131 07:55:59.648072 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2c0bb381-a294-472b-bd7c-db1a23b96118","Type":"ContainerDied","Data":"99907ee1671ffa9de95b214233e266e04630de292617fc63a4565b5f122f055e"} Jan 31 07:55:59 crc kubenswrapper[4826]: I0131 07:55:59.648109 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99907ee1671ffa9de95b214233e266e04630de292617fc63a4565b5f122f055e" Jan 31 07:55:59 crc kubenswrapper[4826]: I0131 07:55:59.691767 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 31 07:55:59 crc kubenswrapper[4826]: I0131 07:55:59.785051 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c0bb381-a294-472b-bd7c-db1a23b96118-combined-ca-bundle\") pod \"2c0bb381-a294-472b-bd7c-db1a23b96118\" (UID: \"2c0bb381-a294-472b-bd7c-db1a23b96118\") " Jan 31 07:55:59 crc kubenswrapper[4826]: I0131 07:55:59.785087 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vm64f\" (UniqueName: \"kubernetes.io/projected/2c0bb381-a294-472b-bd7c-db1a23b96118-kube-api-access-vm64f\") pod \"2c0bb381-a294-472b-bd7c-db1a23b96118\" (UID: \"2c0bb381-a294-472b-bd7c-db1a23b96118\") " Jan 31 07:55:59 crc kubenswrapper[4826]: I0131 07:55:59.785135 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c0bb381-a294-472b-bd7c-db1a23b96118-config-data\") pod \"2c0bb381-a294-472b-bd7c-db1a23b96118\" (UID: \"2c0bb381-a294-472b-bd7c-db1a23b96118\") " Jan 31 07:55:59 crc kubenswrapper[4826]: I0131 07:55:59.791196 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c0bb381-a294-472b-bd7c-db1a23b96118-kube-api-access-vm64f" (OuterVolumeSpecName: "kube-api-access-vm64f") pod "2c0bb381-a294-472b-bd7c-db1a23b96118" (UID: "2c0bb381-a294-472b-bd7c-db1a23b96118"). InnerVolumeSpecName "kube-api-access-vm64f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:55:59 crc kubenswrapper[4826]: I0131 07:55:59.815335 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c0bb381-a294-472b-bd7c-db1a23b96118-config-data" (OuterVolumeSpecName: "config-data") pod "2c0bb381-a294-472b-bd7c-db1a23b96118" (UID: "2c0bb381-a294-472b-bd7c-db1a23b96118"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:55:59 crc kubenswrapper[4826]: I0131 07:55:59.818404 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c0bb381-a294-472b-bd7c-db1a23b96118-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c0bb381-a294-472b-bd7c-db1a23b96118" (UID: "2c0bb381-a294-472b-bd7c-db1a23b96118"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:55:59 crc kubenswrapper[4826]: I0131 07:55:59.890355 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c0bb381-a294-472b-bd7c-db1a23b96118-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:55:59 crc kubenswrapper[4826]: I0131 07:55:59.890405 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vm64f\" (UniqueName: \"kubernetes.io/projected/2c0bb381-a294-472b-bd7c-db1a23b96118-kube-api-access-vm64f\") on node \"crc\" DevicePath \"\"" Jan 31 07:55:59 crc kubenswrapper[4826]: I0131 07:55:59.890426 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c0bb381-a294-472b-bd7c-db1a23b96118-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:56:00 crc kubenswrapper[4826]: I0131 07:56:00.659010 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 31 07:56:00 crc kubenswrapper[4826]: I0131 07:56:00.716761 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 07:56:00 crc kubenswrapper[4826]: I0131 07:56:00.728442 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 07:56:00 crc kubenswrapper[4826]: I0131 07:56:00.745715 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 07:56:00 crc kubenswrapper[4826]: E0131 07:56:00.746246 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c0bb381-a294-472b-bd7c-db1a23b96118" containerName="nova-cell1-novncproxy-novncproxy" Jan 31 07:56:00 crc kubenswrapper[4826]: I0131 07:56:00.746262 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c0bb381-a294-472b-bd7c-db1a23b96118" containerName="nova-cell1-novncproxy-novncproxy" Jan 31 07:56:00 crc kubenswrapper[4826]: I0131 07:56:00.746453 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c0bb381-a294-472b-bd7c-db1a23b96118" containerName="nova-cell1-novncproxy-novncproxy" Jan 31 07:56:00 crc kubenswrapper[4826]: I0131 07:56:00.747062 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 31 07:56:00 crc kubenswrapper[4826]: I0131 07:56:00.749061 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 31 07:56:00 crc kubenswrapper[4826]: I0131 07:56:00.749236 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 31 07:56:00 crc kubenswrapper[4826]: I0131 07:56:00.749933 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 31 07:56:00 crc kubenswrapper[4826]: I0131 07:56:00.762343 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 07:56:00 crc kubenswrapper[4826]: I0131 07:56:00.819333 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c0bb381-a294-472b-bd7c-db1a23b96118" path="/var/lib/kubelet/pods/2c0bb381-a294-472b-bd7c-db1a23b96118/volumes" Jan 31 07:56:00 crc kubenswrapper[4826]: I0131 07:56:00.908243 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42z6r\" (UniqueName: \"kubernetes.io/projected/0e0433b9-901b-4383-8d8b-15e5c006da15-kube-api-access-42z6r\") pod \"nova-cell1-novncproxy-0\" (UID: \"0e0433b9-901b-4383-8d8b-15e5c006da15\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 07:56:00 crc kubenswrapper[4826]: I0131 07:56:00.908682 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e0433b9-901b-4383-8d8b-15e5c006da15-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0e0433b9-901b-4383-8d8b-15e5c006da15\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 07:56:00 crc kubenswrapper[4826]: I0131 07:56:00.908850 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e0433b9-901b-4383-8d8b-15e5c006da15-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0e0433b9-901b-4383-8d8b-15e5c006da15\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 07:56:00 crc kubenswrapper[4826]: I0131 07:56:00.908934 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e0433b9-901b-4383-8d8b-15e5c006da15-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0e0433b9-901b-4383-8d8b-15e5c006da15\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 07:56:00 crc kubenswrapper[4826]: I0131 07:56:00.909075 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e0433b9-901b-4383-8d8b-15e5c006da15-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0e0433b9-901b-4383-8d8b-15e5c006da15\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 07:56:01 crc kubenswrapper[4826]: I0131 07:56:01.010772 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e0433b9-901b-4383-8d8b-15e5c006da15-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0e0433b9-901b-4383-8d8b-15e5c006da15\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 07:56:01 crc kubenswrapper[4826]: I0131 07:56:01.011257 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e0433b9-901b-4383-8d8b-15e5c006da15-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0e0433b9-901b-4383-8d8b-15e5c006da15\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 07:56:01 crc kubenswrapper[4826]: I0131 07:56:01.011303 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e0433b9-901b-4383-8d8b-15e5c006da15-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0e0433b9-901b-4383-8d8b-15e5c006da15\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 07:56:01 crc kubenswrapper[4826]: I0131 07:56:01.011444 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42z6r\" (UniqueName: \"kubernetes.io/projected/0e0433b9-901b-4383-8d8b-15e5c006da15-kube-api-access-42z6r\") pod \"nova-cell1-novncproxy-0\" (UID: \"0e0433b9-901b-4383-8d8b-15e5c006da15\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 07:56:01 crc kubenswrapper[4826]: I0131 07:56:01.011502 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e0433b9-901b-4383-8d8b-15e5c006da15-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0e0433b9-901b-4383-8d8b-15e5c006da15\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 07:56:01 crc kubenswrapper[4826]: I0131 07:56:01.017657 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e0433b9-901b-4383-8d8b-15e5c006da15-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0e0433b9-901b-4383-8d8b-15e5c006da15\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 07:56:01 crc kubenswrapper[4826]: I0131 07:56:01.017709 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e0433b9-901b-4383-8d8b-15e5c006da15-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0e0433b9-901b-4383-8d8b-15e5c006da15\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 07:56:01 crc kubenswrapper[4826]: I0131 07:56:01.026345 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e0433b9-901b-4383-8d8b-15e5c006da15-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0e0433b9-901b-4383-8d8b-15e5c006da15\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 07:56:01 crc kubenswrapper[4826]: I0131 07:56:01.027834 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e0433b9-901b-4383-8d8b-15e5c006da15-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0e0433b9-901b-4383-8d8b-15e5c006da15\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 07:56:01 crc kubenswrapper[4826]: I0131 07:56:01.034544 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42z6r\" (UniqueName: \"kubernetes.io/projected/0e0433b9-901b-4383-8d8b-15e5c006da15-kube-api-access-42z6r\") pod \"nova-cell1-novncproxy-0\" (UID: \"0e0433b9-901b-4383-8d8b-15e5c006da15\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 07:56:01 crc kubenswrapper[4826]: I0131 07:56:01.072275 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 31 07:56:01 crc kubenswrapper[4826]: I0131 07:56:01.515710 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 07:56:01 crc kubenswrapper[4826]: I0131 07:56:01.670475 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0e0433b9-901b-4383-8d8b-15e5c006da15","Type":"ContainerStarted","Data":"5d88720c9c09718c19e7da48469991c498f959f7bcead152e436d70a0488015a"} Jan 31 07:56:01 crc kubenswrapper[4826]: I0131 07:56:01.934945 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 31 07:56:01 crc kubenswrapper[4826]: I0131 07:56:01.936218 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 31 07:56:01 crc kubenswrapper[4826]: I0131 07:56:01.937791 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 31 07:56:01 crc kubenswrapper[4826]: I0131 07:56:01.945632 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 31 07:56:02 crc kubenswrapper[4826]: I0131 07:56:02.683758 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0e0433b9-901b-4383-8d8b-15e5c006da15","Type":"ContainerStarted","Data":"247e48c9f4ed5d9592ccd8e536197dd2f7d1cf08f24ac611f03f12e72e40fa95"} Jan 31 07:56:02 crc kubenswrapper[4826]: I0131 07:56:02.684429 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 31 07:56:02 crc kubenswrapper[4826]: I0131 07:56:02.695486 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 31 07:56:02 crc kubenswrapper[4826]: I0131 07:56:02.714433 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.714407896 podStartE2EDuration="2.714407896s" podCreationTimestamp="2026-01-31 07:56:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:56:02.704677051 +0000 UTC m=+1194.558563430" watchObservedRunningTime="2026-01-31 07:56:02.714407896 +0000 UTC m=+1194.568294285" Jan 31 07:56:02 crc kubenswrapper[4826]: I0131 07:56:02.908041 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-72zkq"] Jan 31 07:56:02 crc kubenswrapper[4826]: I0131 07:56:02.910049 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-72zkq" Jan 31 07:56:02 crc kubenswrapper[4826]: I0131 07:56:02.935181 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-72zkq"] Jan 31 07:56:03 crc kubenswrapper[4826]: I0131 07:56:03.066857 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b7f36bf-3ed8-4a40-9306-541c026c91dd-config\") pod \"dnsmasq-dns-68d4b6d797-72zkq\" (UID: \"4b7f36bf-3ed8-4a40-9306-541c026c91dd\") " pod="openstack/dnsmasq-dns-68d4b6d797-72zkq" Jan 31 07:56:03 crc kubenswrapper[4826]: I0131 07:56:03.066929 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b7f36bf-3ed8-4a40-9306-541c026c91dd-ovsdbserver-nb\") pod \"dnsmasq-dns-68d4b6d797-72zkq\" (UID: \"4b7f36bf-3ed8-4a40-9306-541c026c91dd\") " pod="openstack/dnsmasq-dns-68d4b6d797-72zkq" Jan 31 07:56:03 crc kubenswrapper[4826]: I0131 07:56:03.066987 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b7f36bf-3ed8-4a40-9306-541c026c91dd-dns-svc\") pod \"dnsmasq-dns-68d4b6d797-72zkq\" (UID: \"4b7f36bf-3ed8-4a40-9306-541c026c91dd\") " pod="openstack/dnsmasq-dns-68d4b6d797-72zkq" Jan 31 07:56:03 crc kubenswrapper[4826]: I0131 07:56:03.067059 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b7f36bf-3ed8-4a40-9306-541c026c91dd-ovsdbserver-sb\") pod \"dnsmasq-dns-68d4b6d797-72zkq\" (UID: \"4b7f36bf-3ed8-4a40-9306-541c026c91dd\") " pod="openstack/dnsmasq-dns-68d4b6d797-72zkq" Jan 31 07:56:03 crc kubenswrapper[4826]: I0131 07:56:03.067177 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j695h\" (UniqueName: \"kubernetes.io/projected/4b7f36bf-3ed8-4a40-9306-541c026c91dd-kube-api-access-j695h\") pod \"dnsmasq-dns-68d4b6d797-72zkq\" (UID: \"4b7f36bf-3ed8-4a40-9306-541c026c91dd\") " pod="openstack/dnsmasq-dns-68d4b6d797-72zkq" Jan 31 07:56:03 crc kubenswrapper[4826]: I0131 07:56:03.168133 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j695h\" (UniqueName: \"kubernetes.io/projected/4b7f36bf-3ed8-4a40-9306-541c026c91dd-kube-api-access-j695h\") pod \"dnsmasq-dns-68d4b6d797-72zkq\" (UID: \"4b7f36bf-3ed8-4a40-9306-541c026c91dd\") " pod="openstack/dnsmasq-dns-68d4b6d797-72zkq" Jan 31 07:56:03 crc kubenswrapper[4826]: I0131 07:56:03.168206 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b7f36bf-3ed8-4a40-9306-541c026c91dd-config\") pod \"dnsmasq-dns-68d4b6d797-72zkq\" (UID: \"4b7f36bf-3ed8-4a40-9306-541c026c91dd\") " pod="openstack/dnsmasq-dns-68d4b6d797-72zkq" Jan 31 07:56:03 crc kubenswrapper[4826]: I0131 07:56:03.168233 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b7f36bf-3ed8-4a40-9306-541c026c91dd-ovsdbserver-nb\") pod \"dnsmasq-dns-68d4b6d797-72zkq\" (UID: \"4b7f36bf-3ed8-4a40-9306-541c026c91dd\") " pod="openstack/dnsmasq-dns-68d4b6d797-72zkq" Jan 31 07:56:03 crc kubenswrapper[4826]: I0131 07:56:03.168264 4826 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b7f36bf-3ed8-4a40-9306-541c026c91dd-dns-svc\") pod \"dnsmasq-dns-68d4b6d797-72zkq\" (UID: \"4b7f36bf-3ed8-4a40-9306-541c026c91dd\") " pod="openstack/dnsmasq-dns-68d4b6d797-72zkq" Jan 31 07:56:03 crc kubenswrapper[4826]: I0131 07:56:03.168312 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b7f36bf-3ed8-4a40-9306-541c026c91dd-ovsdbserver-sb\") pod \"dnsmasq-dns-68d4b6d797-72zkq\" (UID: \"4b7f36bf-3ed8-4a40-9306-541c026c91dd\") " pod="openstack/dnsmasq-dns-68d4b6d797-72zkq" Jan 31 07:56:03 crc kubenswrapper[4826]: I0131 07:56:03.169460 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b7f36bf-3ed8-4a40-9306-541c026c91dd-ovsdbserver-nb\") pod \"dnsmasq-dns-68d4b6d797-72zkq\" (UID: \"4b7f36bf-3ed8-4a40-9306-541c026c91dd\") " pod="openstack/dnsmasq-dns-68d4b6d797-72zkq" Jan 31 07:56:03 crc kubenswrapper[4826]: I0131 07:56:03.169499 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b7f36bf-3ed8-4a40-9306-541c026c91dd-ovsdbserver-sb\") pod \"dnsmasq-dns-68d4b6d797-72zkq\" (UID: \"4b7f36bf-3ed8-4a40-9306-541c026c91dd\") " pod="openstack/dnsmasq-dns-68d4b6d797-72zkq" Jan 31 07:56:03 crc kubenswrapper[4826]: I0131 07:56:03.169595 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b7f36bf-3ed8-4a40-9306-541c026c91dd-config\") pod \"dnsmasq-dns-68d4b6d797-72zkq\" (UID: \"4b7f36bf-3ed8-4a40-9306-541c026c91dd\") " pod="openstack/dnsmasq-dns-68d4b6d797-72zkq" Jan 31 07:56:03 crc kubenswrapper[4826]: I0131 07:56:03.169749 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b7f36bf-3ed8-4a40-9306-541c026c91dd-dns-svc\") pod \"dnsmasq-dns-68d4b6d797-72zkq\" (UID: \"4b7f36bf-3ed8-4a40-9306-541c026c91dd\") " pod="openstack/dnsmasq-dns-68d4b6d797-72zkq" Jan 31 07:56:03 crc kubenswrapper[4826]: I0131 07:56:03.198756 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j695h\" (UniqueName: \"kubernetes.io/projected/4b7f36bf-3ed8-4a40-9306-541c026c91dd-kube-api-access-j695h\") pod \"dnsmasq-dns-68d4b6d797-72zkq\" (UID: \"4b7f36bf-3ed8-4a40-9306-541c026c91dd\") " pod="openstack/dnsmasq-dns-68d4b6d797-72zkq" Jan 31 07:56:03 crc kubenswrapper[4826]: I0131 07:56:03.234415 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-72zkq" Jan 31 07:56:03 crc kubenswrapper[4826]: I0131 07:56:03.727923 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-72zkq"] Jan 31 07:56:03 crc kubenswrapper[4826]: W0131 07:56:03.728257 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b7f36bf_3ed8_4a40_9306_541c026c91dd.slice/crio-391f98e9e22a56dd18681b126d7b2c12807d1857aa3b229688cbeaf0ccdfd8f7 WatchSource:0}: Error finding container 391f98e9e22a56dd18681b126d7b2c12807d1857aa3b229688cbeaf0ccdfd8f7: Status 404 returned error can't find the container with id 391f98e9e22a56dd18681b126d7b2c12807d1857aa3b229688cbeaf0ccdfd8f7 Jan 31 07:56:04 crc kubenswrapper[4826]: I0131 07:56:04.699753 4826 generic.go:334] "Generic (PLEG): container finished" podID="4b7f36bf-3ed8-4a40-9306-541c026c91dd" containerID="25ab47c445e7ef0c5c3e43b754454dae5cbbca423f750be00801feae5dcc06c5" exitCode=0 Jan 31 07:56:04 crc kubenswrapper[4826]: I0131 07:56:04.699827 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-72zkq" event={"ID":"4b7f36bf-3ed8-4a40-9306-541c026c91dd","Type":"ContainerDied","Data":"25ab47c445e7ef0c5c3e43b754454dae5cbbca423f750be00801feae5dcc06c5"} Jan 31 07:56:04 crc kubenswrapper[4826]: I0131 07:56:04.700121 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-72zkq" event={"ID":"4b7f36bf-3ed8-4a40-9306-541c026c91dd","Type":"ContainerStarted","Data":"391f98e9e22a56dd18681b126d7b2c12807d1857aa3b229688cbeaf0ccdfd8f7"} Jan 31 07:56:05 crc kubenswrapper[4826]: I0131 07:56:05.053578 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:56:05 crc kubenswrapper[4826]: I0131 07:56:05.054475 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5ad6fa82-7626-4ba2-a206-eb91edcfb0bb" containerName="ceilometer-central-agent" containerID="cri-o://e76ffd0ca2e7e7e41c7ba333b88186fe9594b53d3b21aadf52c9a4aa53c5425b" gracePeriod=30 Jan 31 07:56:05 crc kubenswrapper[4826]: I0131 07:56:05.054589 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5ad6fa82-7626-4ba2-a206-eb91edcfb0bb" containerName="ceilometer-notification-agent" containerID="cri-o://a65d4dd109453a3ecf126a6ed60ca5a17df5e1b57493da86861e8d846d5b51d2" gracePeriod=30 Jan 31 07:56:05 crc kubenswrapper[4826]: I0131 07:56:05.054626 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5ad6fa82-7626-4ba2-a206-eb91edcfb0bb" containerName="proxy-httpd" containerID="cri-o://92464dad62f4dbcff081306ee47c38bb3b2c9500d2143b8d4e36ad72780fd699" gracePeriod=30 Jan 31 07:56:05 crc kubenswrapper[4826]: I0131 07:56:05.054599 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5ad6fa82-7626-4ba2-a206-eb91edcfb0bb" containerName="sg-core" containerID="cri-o://211bf6f0c18c338a9c29abc66838f05ece3ab3d331490cbdedf5f2c5afe6ac8b" gracePeriod=30 Jan 31 07:56:05 crc kubenswrapper[4826]: I0131 07:56:05.075128 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="5ad6fa82-7626-4ba2-a206-eb91edcfb0bb" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 31 07:56:05 crc kubenswrapper[4826]: I0131 
07:56:05.152367 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 31 07:56:05 crc kubenswrapper[4826]: I0131 07:56:05.719499 4826 generic.go:334] "Generic (PLEG): container finished" podID="5ad6fa82-7626-4ba2-a206-eb91edcfb0bb" containerID="92464dad62f4dbcff081306ee47c38bb3b2c9500d2143b8d4e36ad72780fd699" exitCode=0 Jan 31 07:56:05 crc kubenswrapper[4826]: I0131 07:56:05.719546 4826 generic.go:334] "Generic (PLEG): container finished" podID="5ad6fa82-7626-4ba2-a206-eb91edcfb0bb" containerID="211bf6f0c18c338a9c29abc66838f05ece3ab3d331490cbdedf5f2c5afe6ac8b" exitCode=2 Jan 31 07:56:05 crc kubenswrapper[4826]: I0131 07:56:05.719556 4826 generic.go:334] "Generic (PLEG): container finished" podID="5ad6fa82-7626-4ba2-a206-eb91edcfb0bb" containerID="e76ffd0ca2e7e7e41c7ba333b88186fe9594b53d3b21aadf52c9a4aa53c5425b" exitCode=0 Jan 31 07:56:05 crc kubenswrapper[4826]: I0131 07:56:05.719619 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ad6fa82-7626-4ba2-a206-eb91edcfb0bb","Type":"ContainerDied","Data":"92464dad62f4dbcff081306ee47c38bb3b2c9500d2143b8d4e36ad72780fd699"} Jan 31 07:56:05 crc kubenswrapper[4826]: I0131 07:56:05.719654 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ad6fa82-7626-4ba2-a206-eb91edcfb0bb","Type":"ContainerDied","Data":"211bf6f0c18c338a9c29abc66838f05ece3ab3d331490cbdedf5f2c5afe6ac8b"} Jan 31 07:56:05 crc kubenswrapper[4826]: I0131 07:56:05.719677 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ad6fa82-7626-4ba2-a206-eb91edcfb0bb","Type":"ContainerDied","Data":"e76ffd0ca2e7e7e41c7ba333b88186fe9594b53d3b21aadf52c9a4aa53c5425b"} Jan 31 07:56:05 crc kubenswrapper[4826]: I0131 07:56:05.724811 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-72zkq" event={"ID":"4b7f36bf-3ed8-4a40-9306-541c026c91dd","Type":"ContainerStarted","Data":"17c9f00d765bc58135579b7237ca9a68b3d3939a1fb6b4399253f982dbb440f3"} Jan 31 07:56:05 crc kubenswrapper[4826]: I0131 07:56:05.724865 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9db2a83d-6259-43ed-aa63-d8d57f49c850" containerName="nova-api-log" containerID="cri-o://96931690e7c7b317a0dbaf0a060fcea51c08099536605187f28d0b60e45b21de" gracePeriod=30 Jan 31 07:56:05 crc kubenswrapper[4826]: I0131 07:56:05.725022 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9db2a83d-6259-43ed-aa63-d8d57f49c850" containerName="nova-api-api" containerID="cri-o://966be199f40971c2bbee1de716771194bbea64d6fb37115c7dfc0034d13d1130" gracePeriod=30 Jan 31 07:56:05 crc kubenswrapper[4826]: I0131 07:56:05.757817 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-68d4b6d797-72zkq" podStartSLOduration=3.757796904 podStartE2EDuration="3.757796904s" podCreationTimestamp="2026-01-31 07:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:56:05.745240818 +0000 UTC m=+1197.599127167" watchObservedRunningTime="2026-01-31 07:56:05.757796904 +0000 UTC m=+1197.611683273" Jan 31 07:56:06 crc kubenswrapper[4826]: I0131 07:56:06.072994 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 31 07:56:06 crc kubenswrapper[4826]: I0131 
07:56:06.746770 4826 generic.go:334] "Generic (PLEG): container finished" podID="9db2a83d-6259-43ed-aa63-d8d57f49c850" containerID="96931690e7c7b317a0dbaf0a060fcea51c08099536605187f28d0b60e45b21de" exitCode=143 Jan 31 07:56:06 crc kubenswrapper[4826]: I0131 07:56:06.747339 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9db2a83d-6259-43ed-aa63-d8d57f49c850","Type":"ContainerDied","Data":"96931690e7c7b317a0dbaf0a060fcea51c08099536605187f28d0b60e45b21de"} Jan 31 07:56:06 crc kubenswrapper[4826]: I0131 07:56:06.747465 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-68d4b6d797-72zkq" Jan 31 07:56:08 crc kubenswrapper[4826]: I0131 07:56:08.770594 4826 generic.go:334] "Generic (PLEG): container finished" podID="5ad6fa82-7626-4ba2-a206-eb91edcfb0bb" containerID="a65d4dd109453a3ecf126a6ed60ca5a17df5e1b57493da86861e8d846d5b51d2" exitCode=0 Jan 31 07:56:08 crc kubenswrapper[4826]: I0131 07:56:08.771045 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ad6fa82-7626-4ba2-a206-eb91edcfb0bb","Type":"ContainerDied","Data":"a65d4dd109453a3ecf126a6ed60ca5a17df5e1b57493da86861e8d846d5b51d2"} Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.075819 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.191228 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ad6fa82-7626-4ba2-a206-eb91edcfb0bb-ceilometer-tls-certs\") pod \"5ad6fa82-7626-4ba2-a206-eb91edcfb0bb\" (UID: \"5ad6fa82-7626-4ba2-a206-eb91edcfb0bb\") " Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.191349 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5ad6fa82-7626-4ba2-a206-eb91edcfb0bb-sg-core-conf-yaml\") pod \"5ad6fa82-7626-4ba2-a206-eb91edcfb0bb\" (UID: \"5ad6fa82-7626-4ba2-a206-eb91edcfb0bb\") " Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.191409 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ad6fa82-7626-4ba2-a206-eb91edcfb0bb-run-httpd\") pod \"5ad6fa82-7626-4ba2-a206-eb91edcfb0bb\" (UID: \"5ad6fa82-7626-4ba2-a206-eb91edcfb0bb\") " Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.191436 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8dpk\" (UniqueName: \"kubernetes.io/projected/5ad6fa82-7626-4ba2-a206-eb91edcfb0bb-kube-api-access-t8dpk\") pod \"5ad6fa82-7626-4ba2-a206-eb91edcfb0bb\" (UID: \"5ad6fa82-7626-4ba2-a206-eb91edcfb0bb\") " Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.191468 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ad6fa82-7626-4ba2-a206-eb91edcfb0bb-scripts\") pod \"5ad6fa82-7626-4ba2-a206-eb91edcfb0bb\" (UID: \"5ad6fa82-7626-4ba2-a206-eb91edcfb0bb\") " Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.191495 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ad6fa82-7626-4ba2-a206-eb91edcfb0bb-config-data\") pod \"5ad6fa82-7626-4ba2-a206-eb91edcfb0bb\" (UID: \"5ad6fa82-7626-4ba2-a206-eb91edcfb0bb\") " Jan 31 07:56:09 crc 
kubenswrapper[4826]: I0131 07:56:09.191571 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ad6fa82-7626-4ba2-a206-eb91edcfb0bb-combined-ca-bundle\") pod \"5ad6fa82-7626-4ba2-a206-eb91edcfb0bb\" (UID: \"5ad6fa82-7626-4ba2-a206-eb91edcfb0bb\") " Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.191654 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ad6fa82-7626-4ba2-a206-eb91edcfb0bb-log-httpd\") pod \"5ad6fa82-7626-4ba2-a206-eb91edcfb0bb\" (UID: \"5ad6fa82-7626-4ba2-a206-eb91edcfb0bb\") " Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.192724 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ad6fa82-7626-4ba2-a206-eb91edcfb0bb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5ad6fa82-7626-4ba2-a206-eb91edcfb0bb" (UID: "5ad6fa82-7626-4ba2-a206-eb91edcfb0bb"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.197902 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ad6fa82-7626-4ba2-a206-eb91edcfb0bb-scripts" (OuterVolumeSpecName: "scripts") pod "5ad6fa82-7626-4ba2-a206-eb91edcfb0bb" (UID: "5ad6fa82-7626-4ba2-a206-eb91edcfb0bb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.204094 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ad6fa82-7626-4ba2-a206-eb91edcfb0bb-kube-api-access-t8dpk" (OuterVolumeSpecName: "kube-api-access-t8dpk") pod "5ad6fa82-7626-4ba2-a206-eb91edcfb0bb" (UID: "5ad6fa82-7626-4ba2-a206-eb91edcfb0bb"). InnerVolumeSpecName "kube-api-access-t8dpk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.204656 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ad6fa82-7626-4ba2-a206-eb91edcfb0bb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5ad6fa82-7626-4ba2-a206-eb91edcfb0bb" (UID: "5ad6fa82-7626-4ba2-a206-eb91edcfb0bb"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.223560 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ad6fa82-7626-4ba2-a206-eb91edcfb0bb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5ad6fa82-7626-4ba2-a206-eb91edcfb0bb" (UID: "5ad6fa82-7626-4ba2-a206-eb91edcfb0bb"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.242637 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ad6fa82-7626-4ba2-a206-eb91edcfb0bb-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "5ad6fa82-7626-4ba2-a206-eb91edcfb0bb" (UID: "5ad6fa82-7626-4ba2-a206-eb91edcfb0bb"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.268145 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ad6fa82-7626-4ba2-a206-eb91edcfb0bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5ad6fa82-7626-4ba2-a206-eb91edcfb0bb" (UID: "5ad6fa82-7626-4ba2-a206-eb91edcfb0bb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.295599 4826 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ad6fa82-7626-4ba2-a206-eb91edcfb0bb-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.295630 4826 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5ad6fa82-7626-4ba2-a206-eb91edcfb0bb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.295645 4826 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ad6fa82-7626-4ba2-a206-eb91edcfb0bb-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.295656 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8dpk\" (UniqueName: \"kubernetes.io/projected/5ad6fa82-7626-4ba2-a206-eb91edcfb0bb-kube-api-access-t8dpk\") on node \"crc\" DevicePath \"\"" Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.295666 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ad6fa82-7626-4ba2-a206-eb91edcfb0bb-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.295676 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ad6fa82-7626-4ba2-a206-eb91edcfb0bb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.295684 4826 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ad6fa82-7626-4ba2-a206-eb91edcfb0bb-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.312155 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ad6fa82-7626-4ba2-a206-eb91edcfb0bb-config-data" (OuterVolumeSpecName: "config-data") pod "5ad6fa82-7626-4ba2-a206-eb91edcfb0bb" (UID: "5ad6fa82-7626-4ba2-a206-eb91edcfb0bb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.396874 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ad6fa82-7626-4ba2-a206-eb91edcfb0bb-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.398812 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.497553 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9db2a83d-6259-43ed-aa63-d8d57f49c850-config-data\") pod \"9db2a83d-6259-43ed-aa63-d8d57f49c850\" (UID: \"9db2a83d-6259-43ed-aa63-d8d57f49c850\") " Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.497775 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bx2pb\" (UniqueName: \"kubernetes.io/projected/9db2a83d-6259-43ed-aa63-d8d57f49c850-kube-api-access-bx2pb\") pod \"9db2a83d-6259-43ed-aa63-d8d57f49c850\" (UID: \"9db2a83d-6259-43ed-aa63-d8d57f49c850\") " Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.498212 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9db2a83d-6259-43ed-aa63-d8d57f49c850-logs\") pod \"9db2a83d-6259-43ed-aa63-d8d57f49c850\" (UID: \"9db2a83d-6259-43ed-aa63-d8d57f49c850\") " Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.498260 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9db2a83d-6259-43ed-aa63-d8d57f49c850-combined-ca-bundle\") pod \"9db2a83d-6259-43ed-aa63-d8d57f49c850\" (UID: \"9db2a83d-6259-43ed-aa63-d8d57f49c850\") " Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.498784 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9db2a83d-6259-43ed-aa63-d8d57f49c850-logs" (OuterVolumeSpecName: "logs") pod "9db2a83d-6259-43ed-aa63-d8d57f49c850" (UID: "9db2a83d-6259-43ed-aa63-d8d57f49c850"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.502405 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9db2a83d-6259-43ed-aa63-d8d57f49c850-kube-api-access-bx2pb" (OuterVolumeSpecName: "kube-api-access-bx2pb") pod "9db2a83d-6259-43ed-aa63-d8d57f49c850" (UID: "9db2a83d-6259-43ed-aa63-d8d57f49c850"). InnerVolumeSpecName "kube-api-access-bx2pb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.524396 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9db2a83d-6259-43ed-aa63-d8d57f49c850-config-data" (OuterVolumeSpecName: "config-data") pod "9db2a83d-6259-43ed-aa63-d8d57f49c850" (UID: "9db2a83d-6259-43ed-aa63-d8d57f49c850"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.530178 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9db2a83d-6259-43ed-aa63-d8d57f49c850-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9db2a83d-6259-43ed-aa63-d8d57f49c850" (UID: "9db2a83d-6259-43ed-aa63-d8d57f49c850"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.600735 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9db2a83d-6259-43ed-aa63-d8d57f49c850-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.600775 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bx2pb\" (UniqueName: \"kubernetes.io/projected/9db2a83d-6259-43ed-aa63-d8d57f49c850-kube-api-access-bx2pb\") on node \"crc\" DevicePath \"\"" Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.600790 4826 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9db2a83d-6259-43ed-aa63-d8d57f49c850-logs\") on node \"crc\" DevicePath \"\"" Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.600802 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9db2a83d-6259-43ed-aa63-d8d57f49c850-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.786013 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ad6fa82-7626-4ba2-a206-eb91edcfb0bb","Type":"ContainerDied","Data":"f11c5530cc9f7b3bbe9ff003124218178c2bd7497670dcad7c613497374551c1"} Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.786076 4826 scope.go:117] "RemoveContainer" containerID="92464dad62f4dbcff081306ee47c38bb3b2c9500d2143b8d4e36ad72780fd699" Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.786134 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.790718 4826 generic.go:334] "Generic (PLEG): container finished" podID="9db2a83d-6259-43ed-aa63-d8d57f49c850" containerID="966be199f40971c2bbee1de716771194bbea64d6fb37115c7dfc0034d13d1130" exitCode=0 Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.790787 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9db2a83d-6259-43ed-aa63-d8d57f49c850","Type":"ContainerDied","Data":"966be199f40971c2bbee1de716771194bbea64d6fb37115c7dfc0034d13d1130"} Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.790827 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9db2a83d-6259-43ed-aa63-d8d57f49c850","Type":"ContainerDied","Data":"36fc6e8f4ecf79c44a5e9c93e60daa6625f89dcd604b0a36b101b3b4be2cba5f"} Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.790920 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.813924 4826 scope.go:117] "RemoveContainer" containerID="211bf6f0c18c338a9c29abc66838f05ece3ab3d331490cbdedf5f2c5afe6ac8b" Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.832397 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.842270 4826 scope.go:117] "RemoveContainer" containerID="a65d4dd109453a3ecf126a6ed60ca5a17df5e1b57493da86861e8d846d5b51d2" Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.844097 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.858635 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.884408 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.889698 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:56:09 crc kubenswrapper[4826]: E0131 07:56:09.890163 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9db2a83d-6259-43ed-aa63-d8d57f49c850" containerName="nova-api-log" Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.890184 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="9db2a83d-6259-43ed-aa63-d8d57f49c850" containerName="nova-api-log" Jan 31 07:56:09 crc kubenswrapper[4826]: E0131 07:56:09.890205 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ad6fa82-7626-4ba2-a206-eb91edcfb0bb" containerName="sg-core" Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.890213 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ad6fa82-7626-4ba2-a206-eb91edcfb0bb" containerName="sg-core" Jan 31 07:56:09 crc kubenswrapper[4826]: E0131 07:56:09.890226 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ad6fa82-7626-4ba2-a206-eb91edcfb0bb" containerName="proxy-httpd" Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.890235 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ad6fa82-7626-4ba2-a206-eb91edcfb0bb" containerName="proxy-httpd" Jan 31 07:56:09 crc kubenswrapper[4826]: E0131 07:56:09.890247 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ad6fa82-7626-4ba2-a206-eb91edcfb0bb" containerName="ceilometer-central-agent" Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.890254 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ad6fa82-7626-4ba2-a206-eb91edcfb0bb" containerName="ceilometer-central-agent" Jan 31 07:56:09 crc kubenswrapper[4826]: E0131 07:56:09.890268 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ad6fa82-7626-4ba2-a206-eb91edcfb0bb" containerName="ceilometer-notification-agent" Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.890275 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ad6fa82-7626-4ba2-a206-eb91edcfb0bb" containerName="ceilometer-notification-agent" Jan 31 07:56:09 crc kubenswrapper[4826]: E0131 07:56:09.890283 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9db2a83d-6259-43ed-aa63-d8d57f49c850" containerName="nova-api-api" Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.890291 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="9db2a83d-6259-43ed-aa63-d8d57f49c850" containerName="nova-api-api" Jan 31 07:56:09 crc 
kubenswrapper[4826]: I0131 07:56:09.890561 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ad6fa82-7626-4ba2-a206-eb91edcfb0bb" containerName="ceilometer-notification-agent" Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.890580 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ad6fa82-7626-4ba2-a206-eb91edcfb0bb" containerName="ceilometer-central-agent" Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.890596 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="9db2a83d-6259-43ed-aa63-d8d57f49c850" containerName="nova-api-log" Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.890617 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ad6fa82-7626-4ba2-a206-eb91edcfb0bb" containerName="proxy-httpd" Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.890626 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ad6fa82-7626-4ba2-a206-eb91edcfb0bb" containerName="sg-core" Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.890636 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="9db2a83d-6259-43ed-aa63-d8d57f49c850" containerName="nova-api-api" Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.892555 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.903928 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.906486 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.906732 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.906858 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.911797 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.918831 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.918873 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.919044 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.919089 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.941075 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.965723 4826 scope.go:117] "RemoveContainer" containerID="e76ffd0ca2e7e7e41c7ba333b88186fe9594b53d3b21aadf52c9a4aa53c5425b" Jan 31 07:56:09 crc kubenswrapper[4826]: I0131 07:56:09.984988 4826 scope.go:117] "RemoveContainer" containerID="966be199f40971c2bbee1de716771194bbea64d6fb37115c7dfc0034d13d1130" Jan 31 07:56:10 crc kubenswrapper[4826]: I0131 07:56:10.006016 4826 scope.go:117] "RemoveContainer" containerID="96931690e7c7b317a0dbaf0a060fcea51c08099536605187f28d0b60e45b21de" Jan 31 07:56:10 crc kubenswrapper[4826]: I0131 07:56:10.009447 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd43c512-3240-42fe-ab90-6ea4d8c8da60-config-data\") pod \"nova-api-0\" (UID: \"bd43c512-3240-42fe-ab90-6ea4d8c8da60\") " pod="openstack/nova-api-0" Jan 31 07:56:10 crc kubenswrapper[4826]: I0131 07:56:10.009496 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10c57bd7-46e8-437c-9e84-e77bcdf5561a-scripts\") pod \"ceilometer-0\" (UID: \"10c57bd7-46e8-437c-9e84-e77bcdf5561a\") " pod="openstack/ceilometer-0" Jan 31 07:56:10 crc kubenswrapper[4826]: I0131 07:56:10.009533 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/10c57bd7-46e8-437c-9e84-e77bcdf5561a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"10c57bd7-46e8-437c-9e84-e77bcdf5561a\") " pod="openstack/ceilometer-0" Jan 31 07:56:10 crc kubenswrapper[4826]: I0131 07:56:10.009575 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd43c512-3240-42fe-ab90-6ea4d8c8da60-logs\") pod \"nova-api-0\" (UID: \"bd43c512-3240-42fe-ab90-6ea4d8c8da60\") " pod="openstack/nova-api-0" Jan 31 07:56:10 crc kubenswrapper[4826]: I0131 07:56:10.009635 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/10c57bd7-46e8-437c-9e84-e77bcdf5561a-run-httpd\") pod \"ceilometer-0\" (UID: \"10c57bd7-46e8-437c-9e84-e77bcdf5561a\") " pod="openstack/ceilometer-0" Jan 31 07:56:10 crc kubenswrapper[4826]: I0131 07:56:10.009652 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd43c512-3240-42fe-ab90-6ea4d8c8da60-public-tls-certs\") pod \"nova-api-0\" (UID: \"bd43c512-3240-42fe-ab90-6ea4d8c8da60\") " 
pod="openstack/nova-api-0" Jan 31 07:56:10 crc kubenswrapper[4826]: I0131 07:56:10.009686 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/10c57bd7-46e8-437c-9e84-e77bcdf5561a-log-httpd\") pod \"ceilometer-0\" (UID: \"10c57bd7-46e8-437c-9e84-e77bcdf5561a\") " pod="openstack/ceilometer-0" Jan 31 07:56:10 crc kubenswrapper[4826]: I0131 07:56:10.009735 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10c57bd7-46e8-437c-9e84-e77bcdf5561a-config-data\") pod \"ceilometer-0\" (UID: \"10c57bd7-46e8-437c-9e84-e77bcdf5561a\") " pod="openstack/ceilometer-0" Jan 31 07:56:10 crc kubenswrapper[4826]: I0131 07:56:10.009766 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd43c512-3240-42fe-ab90-6ea4d8c8da60-internal-tls-certs\") pod \"nova-api-0\" (UID: \"bd43c512-3240-42fe-ab90-6ea4d8c8da60\") " pod="openstack/nova-api-0" Jan 31 07:56:10 crc kubenswrapper[4826]: I0131 07:56:10.009788 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf6ts\" (UniqueName: \"kubernetes.io/projected/10c57bd7-46e8-437c-9e84-e77bcdf5561a-kube-api-access-rf6ts\") pod \"ceilometer-0\" (UID: \"10c57bd7-46e8-437c-9e84-e77bcdf5561a\") " pod="openstack/ceilometer-0" Jan 31 07:56:10 crc kubenswrapper[4826]: I0131 07:56:10.009805 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/10c57bd7-46e8-437c-9e84-e77bcdf5561a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"10c57bd7-46e8-437c-9e84-e77bcdf5561a\") " pod="openstack/ceilometer-0" Jan 31 07:56:10 crc kubenswrapper[4826]: I0131 07:56:10.009913 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10c57bd7-46e8-437c-9e84-e77bcdf5561a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"10c57bd7-46e8-437c-9e84-e77bcdf5561a\") " pod="openstack/ceilometer-0" Jan 31 07:56:10 crc kubenswrapper[4826]: I0131 07:56:10.010095 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd43c512-3240-42fe-ab90-6ea4d8c8da60-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bd43c512-3240-42fe-ab90-6ea4d8c8da60\") " pod="openstack/nova-api-0" Jan 31 07:56:10 crc kubenswrapper[4826]: I0131 07:56:10.010119 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zxvf\" (UniqueName: \"kubernetes.io/projected/bd43c512-3240-42fe-ab90-6ea4d8c8da60-kube-api-access-6zxvf\") pod \"nova-api-0\" (UID: \"bd43c512-3240-42fe-ab90-6ea4d8c8da60\") " pod="openstack/nova-api-0" Jan 31 07:56:10 crc kubenswrapper[4826]: I0131 07:56:10.022527 4826 scope.go:117] "RemoveContainer" containerID="966be199f40971c2bbee1de716771194bbea64d6fb37115c7dfc0034d13d1130" Jan 31 07:56:10 crc kubenswrapper[4826]: E0131 07:56:10.024558 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"966be199f40971c2bbee1de716771194bbea64d6fb37115c7dfc0034d13d1130\": container with ID starting with 
966be199f40971c2bbee1de716771194bbea64d6fb37115c7dfc0034d13d1130 not found: ID does not exist" containerID="966be199f40971c2bbee1de716771194bbea64d6fb37115c7dfc0034d13d1130" Jan 31 07:56:10 crc kubenswrapper[4826]: I0131 07:56:10.024594 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"966be199f40971c2bbee1de716771194bbea64d6fb37115c7dfc0034d13d1130"} err="failed to get container status \"966be199f40971c2bbee1de716771194bbea64d6fb37115c7dfc0034d13d1130\": rpc error: code = NotFound desc = could not find container \"966be199f40971c2bbee1de716771194bbea64d6fb37115c7dfc0034d13d1130\": container with ID starting with 966be199f40971c2bbee1de716771194bbea64d6fb37115c7dfc0034d13d1130 not found: ID does not exist" Jan 31 07:56:10 crc kubenswrapper[4826]: I0131 07:56:10.024617 4826 scope.go:117] "RemoveContainer" containerID="96931690e7c7b317a0dbaf0a060fcea51c08099536605187f28d0b60e45b21de" Jan 31 07:56:10 crc kubenswrapper[4826]: E0131 07:56:10.026099 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96931690e7c7b317a0dbaf0a060fcea51c08099536605187f28d0b60e45b21de\": container with ID starting with 96931690e7c7b317a0dbaf0a060fcea51c08099536605187f28d0b60e45b21de not found: ID does not exist" containerID="96931690e7c7b317a0dbaf0a060fcea51c08099536605187f28d0b60e45b21de" Jan 31 07:56:10 crc kubenswrapper[4826]: I0131 07:56:10.026165 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96931690e7c7b317a0dbaf0a060fcea51c08099536605187f28d0b60e45b21de"} err="failed to get container status \"96931690e7c7b317a0dbaf0a060fcea51c08099536605187f28d0b60e45b21de\": rpc error: code = NotFound desc = could not find container \"96931690e7c7b317a0dbaf0a060fcea51c08099536605187f28d0b60e45b21de\": container with ID starting with 96931690e7c7b317a0dbaf0a060fcea51c08099536605187f28d0b60e45b21de not found: ID does not exist" Jan 31 07:56:10 crc kubenswrapper[4826]: I0131 07:56:10.111250 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd43c512-3240-42fe-ab90-6ea4d8c8da60-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bd43c512-3240-42fe-ab90-6ea4d8c8da60\") " pod="openstack/nova-api-0" Jan 31 07:56:10 crc kubenswrapper[4826]: I0131 07:56:10.111531 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zxvf\" (UniqueName: \"kubernetes.io/projected/bd43c512-3240-42fe-ab90-6ea4d8c8da60-kube-api-access-6zxvf\") pod \"nova-api-0\" (UID: \"bd43c512-3240-42fe-ab90-6ea4d8c8da60\") " pod="openstack/nova-api-0" Jan 31 07:56:10 crc kubenswrapper[4826]: I0131 07:56:10.111609 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd43c512-3240-42fe-ab90-6ea4d8c8da60-config-data\") pod \"nova-api-0\" (UID: \"bd43c512-3240-42fe-ab90-6ea4d8c8da60\") " pod="openstack/nova-api-0" Jan 31 07:56:10 crc kubenswrapper[4826]: I0131 07:56:10.111646 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10c57bd7-46e8-437c-9e84-e77bcdf5561a-scripts\") pod \"ceilometer-0\" (UID: \"10c57bd7-46e8-437c-9e84-e77bcdf5561a\") " pod="openstack/ceilometer-0" Jan 31 07:56:10 crc kubenswrapper[4826]: I0131 07:56:10.111671 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/10c57bd7-46e8-437c-9e84-e77bcdf5561a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"10c57bd7-46e8-437c-9e84-e77bcdf5561a\") " pod="openstack/ceilometer-0" Jan 31 07:56:10 crc kubenswrapper[4826]: I0131 07:56:10.111697 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd43c512-3240-42fe-ab90-6ea4d8c8da60-logs\") pod \"nova-api-0\" (UID: \"bd43c512-3240-42fe-ab90-6ea4d8c8da60\") " pod="openstack/nova-api-0" Jan 31 07:56:10 crc kubenswrapper[4826]: I0131 07:56:10.111760 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/10c57bd7-46e8-437c-9e84-e77bcdf5561a-run-httpd\") pod \"ceilometer-0\" (UID: \"10c57bd7-46e8-437c-9e84-e77bcdf5561a\") " pod="openstack/ceilometer-0" Jan 31 07:56:10 crc kubenswrapper[4826]: I0131 07:56:10.111778 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd43c512-3240-42fe-ab90-6ea4d8c8da60-public-tls-certs\") pod \"nova-api-0\" (UID: \"bd43c512-3240-42fe-ab90-6ea4d8c8da60\") " pod="openstack/nova-api-0" Jan 31 07:56:10 crc kubenswrapper[4826]: I0131 07:56:10.111808 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/10c57bd7-46e8-437c-9e84-e77bcdf5561a-log-httpd\") pod \"ceilometer-0\" (UID: \"10c57bd7-46e8-437c-9e84-e77bcdf5561a\") " pod="openstack/ceilometer-0" Jan 31 07:56:10 crc kubenswrapper[4826]: I0131 07:56:10.111829 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10c57bd7-46e8-437c-9e84-e77bcdf5561a-config-data\") pod \"ceilometer-0\" (UID: \"10c57bd7-46e8-437c-9e84-e77bcdf5561a\") " pod="openstack/ceilometer-0" Jan 31 07:56:10 crc kubenswrapper[4826]: I0131 07:56:10.111852 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd43c512-3240-42fe-ab90-6ea4d8c8da60-internal-tls-certs\") pod \"nova-api-0\" (UID: \"bd43c512-3240-42fe-ab90-6ea4d8c8da60\") " pod="openstack/nova-api-0" Jan 31 07:56:10 crc kubenswrapper[4826]: I0131 07:56:10.111870 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rf6ts\" (UniqueName: \"kubernetes.io/projected/10c57bd7-46e8-437c-9e84-e77bcdf5561a-kube-api-access-rf6ts\") pod \"ceilometer-0\" (UID: \"10c57bd7-46e8-437c-9e84-e77bcdf5561a\") " pod="openstack/ceilometer-0" Jan 31 07:56:10 crc kubenswrapper[4826]: I0131 07:56:10.111888 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/10c57bd7-46e8-437c-9e84-e77bcdf5561a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"10c57bd7-46e8-437c-9e84-e77bcdf5561a\") " pod="openstack/ceilometer-0" Jan 31 07:56:10 crc kubenswrapper[4826]: I0131 07:56:10.111917 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10c57bd7-46e8-437c-9e84-e77bcdf5561a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"10c57bd7-46e8-437c-9e84-e77bcdf5561a\") " pod="openstack/ceilometer-0" Jan 31 07:56:10 crc kubenswrapper[4826]: I0131 07:56:10.115029 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/10c57bd7-46e8-437c-9e84-e77bcdf5561a-log-httpd\") pod \"ceilometer-0\" (UID: \"10c57bd7-46e8-437c-9e84-e77bcdf5561a\") " pod="openstack/ceilometer-0" Jan 31 07:56:10 crc kubenswrapper[4826]: I0131 07:56:10.116095 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd43c512-3240-42fe-ab90-6ea4d8c8da60-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bd43c512-3240-42fe-ab90-6ea4d8c8da60\") " pod="openstack/nova-api-0" Jan 31 07:56:10 crc kubenswrapper[4826]: I0131 07:56:10.116876 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd43c512-3240-42fe-ab90-6ea4d8c8da60-logs\") pod \"nova-api-0\" (UID: \"bd43c512-3240-42fe-ab90-6ea4d8c8da60\") " pod="openstack/nova-api-0" Jan 31 07:56:10 crc kubenswrapper[4826]: I0131 07:56:10.117469 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd43c512-3240-42fe-ab90-6ea4d8c8da60-config-data\") pod \"nova-api-0\" (UID: \"bd43c512-3240-42fe-ab90-6ea4d8c8da60\") " pod="openstack/nova-api-0" Jan 31 07:56:10 crc kubenswrapper[4826]: I0131 07:56:10.117623 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10c57bd7-46e8-437c-9e84-e77bcdf5561a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"10c57bd7-46e8-437c-9e84-e77bcdf5561a\") " pod="openstack/ceilometer-0" Jan 31 07:56:10 crc kubenswrapper[4826]: I0131 07:56:10.117628 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd43c512-3240-42fe-ab90-6ea4d8c8da60-public-tls-certs\") pod \"nova-api-0\" (UID: \"bd43c512-3240-42fe-ab90-6ea4d8c8da60\") " pod="openstack/nova-api-0" Jan 31 07:56:10 crc kubenswrapper[4826]: I0131 07:56:10.117959 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/10c57bd7-46e8-437c-9e84-e77bcdf5561a-run-httpd\") pod \"ceilometer-0\" (UID: \"10c57bd7-46e8-437c-9e84-e77bcdf5561a\") " pod="openstack/ceilometer-0" Jan 31 07:56:10 crc kubenswrapper[4826]: I0131 07:56:10.118698 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/10c57bd7-46e8-437c-9e84-e77bcdf5561a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"10c57bd7-46e8-437c-9e84-e77bcdf5561a\") " pod="openstack/ceilometer-0" Jan 31 07:56:10 crc kubenswrapper[4826]: I0131 07:56:10.119464 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10c57bd7-46e8-437c-9e84-e77bcdf5561a-scripts\") pod \"ceilometer-0\" (UID: \"10c57bd7-46e8-437c-9e84-e77bcdf5561a\") " pod="openstack/ceilometer-0" Jan 31 07:56:10 crc kubenswrapper[4826]: I0131 07:56:10.121703 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/10c57bd7-46e8-437c-9e84-e77bcdf5561a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"10c57bd7-46e8-437c-9e84-e77bcdf5561a\") " pod="openstack/ceilometer-0" Jan 31 07:56:10 crc kubenswrapper[4826]: I0131 07:56:10.121805 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd43c512-3240-42fe-ab90-6ea4d8c8da60-internal-tls-certs\") pod 
\"nova-api-0\" (UID: \"bd43c512-3240-42fe-ab90-6ea4d8c8da60\") " pod="openstack/nova-api-0" Jan 31 07:56:10 crc kubenswrapper[4826]: I0131 07:56:10.121823 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10c57bd7-46e8-437c-9e84-e77bcdf5561a-config-data\") pod \"ceilometer-0\" (UID: \"10c57bd7-46e8-437c-9e84-e77bcdf5561a\") " pod="openstack/ceilometer-0" Jan 31 07:56:10 crc kubenswrapper[4826]: I0131 07:56:10.130523 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zxvf\" (UniqueName: \"kubernetes.io/projected/bd43c512-3240-42fe-ab90-6ea4d8c8da60-kube-api-access-6zxvf\") pod \"nova-api-0\" (UID: \"bd43c512-3240-42fe-ab90-6ea4d8c8da60\") " pod="openstack/nova-api-0" Jan 31 07:56:10 crc kubenswrapper[4826]: I0131 07:56:10.136011 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf6ts\" (UniqueName: \"kubernetes.io/projected/10c57bd7-46e8-437c-9e84-e77bcdf5561a-kube-api-access-rf6ts\") pod \"ceilometer-0\" (UID: \"10c57bd7-46e8-437c-9e84-e77bcdf5561a\") " pod="openstack/ceilometer-0" Jan 31 07:56:10 crc kubenswrapper[4826]: I0131 07:56:10.510959 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 07:56:10 crc kubenswrapper[4826]: I0131 07:56:10.514159 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 31 07:56:10 crc kubenswrapper[4826]: I0131 07:56:10.822361 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ad6fa82-7626-4ba2-a206-eb91edcfb0bb" path="/var/lib/kubelet/pods/5ad6fa82-7626-4ba2-a206-eb91edcfb0bb/volumes" Jan 31 07:56:10 crc kubenswrapper[4826]: I0131 07:56:10.823493 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9db2a83d-6259-43ed-aa63-d8d57f49c850" path="/var/lib/kubelet/pods/9db2a83d-6259-43ed-aa63-d8d57f49c850/volumes" Jan 31 07:56:11 crc kubenswrapper[4826]: I0131 07:56:11.041267 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 07:56:11 crc kubenswrapper[4826]: W0131 07:56:11.045157 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10c57bd7_46e8_437c_9e84_e77bcdf5561a.slice/crio-50948bf523f3d51e8d2e716dd6ce3e63f35026a8e8084e6b128c01f5627b17a5 WatchSource:0}: Error finding container 50948bf523f3d51e8d2e716dd6ce3e63f35026a8e8084e6b128c01f5627b17a5: Status 404 returned error can't find the container with id 50948bf523f3d51e8d2e716dd6ce3e63f35026a8e8084e6b128c01f5627b17a5 Jan 31 07:56:11 crc kubenswrapper[4826]: I0131 07:56:11.073006 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 31 07:56:11 crc kubenswrapper[4826]: I0131 07:56:11.109184 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 31 07:56:11 crc kubenswrapper[4826]: W0131 07:56:11.112271 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd43c512_3240_42fe_ab90_6ea4d8c8da60.slice/crio-c7cb4dcaddeb92c224d497cf1142a4c589ddc7b8643970060e99238150fbd5ff WatchSource:0}: Error finding container c7cb4dcaddeb92c224d497cf1142a4c589ddc7b8643970060e99238150fbd5ff: Status 404 returned error can't find the container with id 
c7cb4dcaddeb92c224d497cf1142a4c589ddc7b8643970060e99238150fbd5ff Jan 31 07:56:11 crc kubenswrapper[4826]: I0131 07:56:11.114917 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 07:56:11 crc kubenswrapper[4826]: I0131 07:56:11.809632 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bd43c512-3240-42fe-ab90-6ea4d8c8da60","Type":"ContainerStarted","Data":"91ed0842cca456b1df8d5365289f22ace51936bd5301af4976fe55d96def6a0c"} Jan 31 07:56:11 crc kubenswrapper[4826]: I0131 07:56:11.810027 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bd43c512-3240-42fe-ab90-6ea4d8c8da60","Type":"ContainerStarted","Data":"18b8bd5e1c631fcaafc41311e64600fa51376e728a96a4b138cb48f4f24a936b"} Jan 31 07:56:11 crc kubenswrapper[4826]: I0131 07:56:11.810044 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bd43c512-3240-42fe-ab90-6ea4d8c8da60","Type":"ContainerStarted","Data":"c7cb4dcaddeb92c224d497cf1142a4c589ddc7b8643970060e99238150fbd5ff"} Jan 31 07:56:11 crc kubenswrapper[4826]: I0131 07:56:11.813167 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"10c57bd7-46e8-437c-9e84-e77bcdf5561a","Type":"ContainerStarted","Data":"50948bf523f3d51e8d2e716dd6ce3e63f35026a8e8084e6b128c01f5627b17a5"} Jan 31 07:56:11 crc kubenswrapper[4826]: I0131 07:56:11.836159 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 31 07:56:11 crc kubenswrapper[4826]: I0131 07:56:11.838685 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.838663473 podStartE2EDuration="2.838663473s" podCreationTimestamp="2026-01-31 07:56:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:56:11.826610471 +0000 UTC m=+1203.680496850" watchObservedRunningTime="2026-01-31 07:56:11.838663473 +0000 UTC m=+1203.692549832" Jan 31 07:56:12 crc kubenswrapper[4826]: I0131 07:56:12.022746 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-lvhwh"] Jan 31 07:56:12 crc kubenswrapper[4826]: I0131 07:56:12.024328 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lvhwh" Jan 31 07:56:12 crc kubenswrapper[4826]: I0131 07:56:12.026983 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 31 07:56:12 crc kubenswrapper[4826]: I0131 07:56:12.028368 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 31 07:56:12 crc kubenswrapper[4826]: I0131 07:56:12.043011 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-lvhwh"] Jan 31 07:56:12 crc kubenswrapper[4826]: I0131 07:56:12.052354 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c688d561-57f0-42dd-9559-ca31e0086d13-scripts\") pod \"nova-cell1-cell-mapping-lvhwh\" (UID: \"c688d561-57f0-42dd-9559-ca31e0086d13\") " pod="openstack/nova-cell1-cell-mapping-lvhwh" Jan 31 07:56:12 crc kubenswrapper[4826]: I0131 07:56:12.052402 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmsc9\" (UniqueName: \"kubernetes.io/projected/c688d561-57f0-42dd-9559-ca31e0086d13-kube-api-access-bmsc9\") pod \"nova-cell1-cell-mapping-lvhwh\" (UID: \"c688d561-57f0-42dd-9559-ca31e0086d13\") " pod="openstack/nova-cell1-cell-mapping-lvhwh" Jan 31 07:56:12 crc kubenswrapper[4826]: I0131 07:56:12.052432 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c688d561-57f0-42dd-9559-ca31e0086d13-config-data\") pod \"nova-cell1-cell-mapping-lvhwh\" (UID: \"c688d561-57f0-42dd-9559-ca31e0086d13\") " pod="openstack/nova-cell1-cell-mapping-lvhwh" Jan 31 07:56:12 crc kubenswrapper[4826]: I0131 07:56:12.052447 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c688d561-57f0-42dd-9559-ca31e0086d13-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-lvhwh\" (UID: \"c688d561-57f0-42dd-9559-ca31e0086d13\") " pod="openstack/nova-cell1-cell-mapping-lvhwh" Jan 31 07:56:12 crc kubenswrapper[4826]: I0131 07:56:12.154124 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c688d561-57f0-42dd-9559-ca31e0086d13-scripts\") pod \"nova-cell1-cell-mapping-lvhwh\" (UID: \"c688d561-57f0-42dd-9559-ca31e0086d13\") " pod="openstack/nova-cell1-cell-mapping-lvhwh" Jan 31 07:56:12 crc kubenswrapper[4826]: I0131 07:56:12.154175 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmsc9\" (UniqueName: \"kubernetes.io/projected/c688d561-57f0-42dd-9559-ca31e0086d13-kube-api-access-bmsc9\") pod \"nova-cell1-cell-mapping-lvhwh\" (UID: \"c688d561-57f0-42dd-9559-ca31e0086d13\") " pod="openstack/nova-cell1-cell-mapping-lvhwh" Jan 31 07:56:12 crc kubenswrapper[4826]: I0131 07:56:12.154202 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c688d561-57f0-42dd-9559-ca31e0086d13-config-data\") pod \"nova-cell1-cell-mapping-lvhwh\" (UID: \"c688d561-57f0-42dd-9559-ca31e0086d13\") " pod="openstack/nova-cell1-cell-mapping-lvhwh" Jan 31 07:56:12 crc kubenswrapper[4826]: I0131 07:56:12.154216 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c688d561-57f0-42dd-9559-ca31e0086d13-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-lvhwh\" (UID: \"c688d561-57f0-42dd-9559-ca31e0086d13\") " pod="openstack/nova-cell1-cell-mapping-lvhwh" Jan 31 07:56:12 crc kubenswrapper[4826]: I0131 07:56:12.158920 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c688d561-57f0-42dd-9559-ca31e0086d13-scripts\") pod \"nova-cell1-cell-mapping-lvhwh\" (UID: \"c688d561-57f0-42dd-9559-ca31e0086d13\") " pod="openstack/nova-cell1-cell-mapping-lvhwh" Jan 31 07:56:12 crc kubenswrapper[4826]: I0131 07:56:12.161542 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c688d561-57f0-42dd-9559-ca31e0086d13-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-lvhwh\" (UID: \"c688d561-57f0-42dd-9559-ca31e0086d13\") " pod="openstack/nova-cell1-cell-mapping-lvhwh" Jan 31 07:56:12 crc kubenswrapper[4826]: I0131 07:56:12.161759 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c688d561-57f0-42dd-9559-ca31e0086d13-config-data\") pod \"nova-cell1-cell-mapping-lvhwh\" (UID: \"c688d561-57f0-42dd-9559-ca31e0086d13\") " pod="openstack/nova-cell1-cell-mapping-lvhwh" Jan 31 07:56:12 crc kubenswrapper[4826]: I0131 07:56:12.176355 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmsc9\" (UniqueName: \"kubernetes.io/projected/c688d561-57f0-42dd-9559-ca31e0086d13-kube-api-access-bmsc9\") pod \"nova-cell1-cell-mapping-lvhwh\" (UID: \"c688d561-57f0-42dd-9559-ca31e0086d13\") " pod="openstack/nova-cell1-cell-mapping-lvhwh" Jan 31 07:56:12 crc kubenswrapper[4826]: I0131 07:56:12.348651 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lvhwh" Jan 31 07:56:12 crc kubenswrapper[4826]: I0131 07:56:12.833598 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"10c57bd7-46e8-437c-9e84-e77bcdf5561a","Type":"ContainerStarted","Data":"04a68fc804cfe2f5ebc5ae7a687cc835d7be7fca1f21bdae3ef4e6295bb0c590"} Jan 31 07:56:12 crc kubenswrapper[4826]: I0131 07:56:12.834074 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-lvhwh"] Jan 31 07:56:13 crc kubenswrapper[4826]: I0131 07:56:13.237260 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-68d4b6d797-72zkq" Jan 31 07:56:13 crc kubenswrapper[4826]: I0131 07:56:13.332546 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-svkk9"] Jan 31 07:56:13 crc kubenswrapper[4826]: I0131 07:56:13.332789 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8b8cf6657-svkk9" podUID="281ee141-2543-4d23-a1d6-cb0d972a05e6" containerName="dnsmasq-dns" containerID="cri-o://14f0e168fdeb132d233c750e61674fe96ed151a5db2649dbf8391d760657d848" gracePeriod=10 Jan 31 07:56:13 crc kubenswrapper[4826]: I0131 07:56:13.824493 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-svkk9" Jan 31 07:56:13 crc kubenswrapper[4826]: I0131 07:56:13.842617 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lvhwh" event={"ID":"c688d561-57f0-42dd-9559-ca31e0086d13","Type":"ContainerStarted","Data":"17247f34afb957b4a609780bce8bb7b71620028e3bf0838bc8f96e5b0ae52c94"} Jan 31 07:56:13 crc kubenswrapper[4826]: I0131 07:56:13.842674 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lvhwh" event={"ID":"c688d561-57f0-42dd-9559-ca31e0086d13","Type":"ContainerStarted","Data":"6042572169e7d22a24fe74b8372a14a9ce99523e42a1128c70387e6b31ddacd7"} Jan 31 07:56:13 crc kubenswrapper[4826]: I0131 07:56:13.848996 4826 generic.go:334] "Generic (PLEG): container finished" podID="281ee141-2543-4d23-a1d6-cb0d972a05e6" containerID="14f0e168fdeb132d233c750e61674fe96ed151a5db2649dbf8391d760657d848" exitCode=0 Jan 31 07:56:13 crc kubenswrapper[4826]: I0131 07:56:13.849096 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-svkk9" event={"ID":"281ee141-2543-4d23-a1d6-cb0d972a05e6","Type":"ContainerDied","Data":"14f0e168fdeb132d233c750e61674fe96ed151a5db2649dbf8391d760657d848"} Jan 31 07:56:13 crc kubenswrapper[4826]: I0131 07:56:13.849129 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-svkk9" event={"ID":"281ee141-2543-4d23-a1d6-cb0d972a05e6","Type":"ContainerDied","Data":"d7c43781d09b2279b4480ec204bcae0a6e43192c077a52fb11d4fd0ca953b297"} Jan 31 07:56:13 crc kubenswrapper[4826]: I0131 07:56:13.849150 4826 scope.go:117] "RemoveContainer" containerID="14f0e168fdeb132d233c750e61674fe96ed151a5db2649dbf8391d760657d848" Jan 31 07:56:13 crc kubenswrapper[4826]: I0131 07:56:13.849919 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-svkk9" Jan 31 07:56:13 crc kubenswrapper[4826]: I0131 07:56:13.859769 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"10c57bd7-46e8-437c-9e84-e77bcdf5561a","Type":"ContainerStarted","Data":"ed0ca9100764a82c0b742628b7a28d36eccbc8ec9e8c60e380574ab46bd8b266"} Jan 31 07:56:13 crc kubenswrapper[4826]: I0131 07:56:13.876525 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-lvhwh" podStartSLOduration=2.876506787 podStartE2EDuration="2.876506787s" podCreationTimestamp="2026-01-31 07:56:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:56:13.873223654 +0000 UTC m=+1205.727110013" watchObservedRunningTime="2026-01-31 07:56:13.876506787 +0000 UTC m=+1205.730393146" Jan 31 07:56:13 crc kubenswrapper[4826]: I0131 07:56:13.890293 4826 scope.go:117] "RemoveContainer" containerID="5f43dcf5228f22d393c916b6e4eccf95640ab486a5358f2d81bb35267965e34c" Jan 31 07:56:13 crc kubenswrapper[4826]: I0131 07:56:13.927153 4826 scope.go:117] "RemoveContainer" containerID="14f0e168fdeb132d233c750e61674fe96ed151a5db2649dbf8391d760657d848" Jan 31 07:56:13 crc kubenswrapper[4826]: E0131 07:56:13.928106 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14f0e168fdeb132d233c750e61674fe96ed151a5db2649dbf8391d760657d848\": container with ID starting with 14f0e168fdeb132d233c750e61674fe96ed151a5db2649dbf8391d760657d848 not found: ID does not exist" containerID="14f0e168fdeb132d233c750e61674fe96ed151a5db2649dbf8391d760657d848" Jan 31 07:56:13 crc kubenswrapper[4826]: I0131 07:56:13.928153 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14f0e168fdeb132d233c750e61674fe96ed151a5db2649dbf8391d760657d848"} err="failed to get container status \"14f0e168fdeb132d233c750e61674fe96ed151a5db2649dbf8391d760657d848\": rpc error: code = NotFound desc = could not find container \"14f0e168fdeb132d233c750e61674fe96ed151a5db2649dbf8391d760657d848\": container with ID starting with 14f0e168fdeb132d233c750e61674fe96ed151a5db2649dbf8391d760657d848 not found: ID does not exist" Jan 31 07:56:13 crc kubenswrapper[4826]: I0131 07:56:13.928180 4826 scope.go:117] "RemoveContainer" containerID="5f43dcf5228f22d393c916b6e4eccf95640ab486a5358f2d81bb35267965e34c" Jan 31 07:56:13 crc kubenswrapper[4826]: E0131 07:56:13.929174 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f43dcf5228f22d393c916b6e4eccf95640ab486a5358f2d81bb35267965e34c\": container with ID starting with 5f43dcf5228f22d393c916b6e4eccf95640ab486a5358f2d81bb35267965e34c not found: ID does not exist" containerID="5f43dcf5228f22d393c916b6e4eccf95640ab486a5358f2d81bb35267965e34c" Jan 31 07:56:13 crc kubenswrapper[4826]: I0131 07:56:13.929226 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f43dcf5228f22d393c916b6e4eccf95640ab486a5358f2d81bb35267965e34c"} err="failed to get container status \"5f43dcf5228f22d393c916b6e4eccf95640ab486a5358f2d81bb35267965e34c\": rpc error: code = NotFound desc = could not find container \"5f43dcf5228f22d393c916b6e4eccf95640ab486a5358f2d81bb35267965e34c\": container with ID starting with 
5f43dcf5228f22d393c916b6e4eccf95640ab486a5358f2d81bb35267965e34c not found: ID does not exist" Jan 31 07:56:14 crc kubenswrapper[4826]: I0131 07:56:14.008580 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/281ee141-2543-4d23-a1d6-cb0d972a05e6-ovsdbserver-sb\") pod \"281ee141-2543-4d23-a1d6-cb0d972a05e6\" (UID: \"281ee141-2543-4d23-a1d6-cb0d972a05e6\") " Jan 31 07:56:14 crc kubenswrapper[4826]: I0131 07:56:14.008952 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4tpf\" (UniqueName: \"kubernetes.io/projected/281ee141-2543-4d23-a1d6-cb0d972a05e6-kube-api-access-q4tpf\") pod \"281ee141-2543-4d23-a1d6-cb0d972a05e6\" (UID: \"281ee141-2543-4d23-a1d6-cb0d972a05e6\") " Jan 31 07:56:14 crc kubenswrapper[4826]: I0131 07:56:14.009063 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/281ee141-2543-4d23-a1d6-cb0d972a05e6-dns-svc\") pod \"281ee141-2543-4d23-a1d6-cb0d972a05e6\" (UID: \"281ee141-2543-4d23-a1d6-cb0d972a05e6\") " Jan 31 07:56:14 crc kubenswrapper[4826]: I0131 07:56:14.009293 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/281ee141-2543-4d23-a1d6-cb0d972a05e6-config\") pod \"281ee141-2543-4d23-a1d6-cb0d972a05e6\" (UID: \"281ee141-2543-4d23-a1d6-cb0d972a05e6\") " Jan 31 07:56:14 crc kubenswrapper[4826]: I0131 07:56:14.009433 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/281ee141-2543-4d23-a1d6-cb0d972a05e6-ovsdbserver-nb\") pod \"281ee141-2543-4d23-a1d6-cb0d972a05e6\" (UID: \"281ee141-2543-4d23-a1d6-cb0d972a05e6\") " Jan 31 07:56:14 crc kubenswrapper[4826]: I0131 07:56:14.039453 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/281ee141-2543-4d23-a1d6-cb0d972a05e6-kube-api-access-q4tpf" (OuterVolumeSpecName: "kube-api-access-q4tpf") pod "281ee141-2543-4d23-a1d6-cb0d972a05e6" (UID: "281ee141-2543-4d23-a1d6-cb0d972a05e6"). InnerVolumeSpecName "kube-api-access-q4tpf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:56:14 crc kubenswrapper[4826]: I0131 07:56:14.072206 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/281ee141-2543-4d23-a1d6-cb0d972a05e6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "281ee141-2543-4d23-a1d6-cb0d972a05e6" (UID: "281ee141-2543-4d23-a1d6-cb0d972a05e6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:56:14 crc kubenswrapper[4826]: I0131 07:56:14.081655 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/281ee141-2543-4d23-a1d6-cb0d972a05e6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "281ee141-2543-4d23-a1d6-cb0d972a05e6" (UID: "281ee141-2543-4d23-a1d6-cb0d972a05e6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:56:14 crc kubenswrapper[4826]: I0131 07:56:14.090131 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/281ee141-2543-4d23-a1d6-cb0d972a05e6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "281ee141-2543-4d23-a1d6-cb0d972a05e6" (UID: "281ee141-2543-4d23-a1d6-cb0d972a05e6"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:56:14 crc kubenswrapper[4826]: I0131 07:56:14.095812 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/281ee141-2543-4d23-a1d6-cb0d972a05e6-config" (OuterVolumeSpecName: "config") pod "281ee141-2543-4d23-a1d6-cb0d972a05e6" (UID: "281ee141-2543-4d23-a1d6-cb0d972a05e6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:56:14 crc kubenswrapper[4826]: I0131 07:56:14.113670 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/281ee141-2543-4d23-a1d6-cb0d972a05e6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 07:56:14 crc kubenswrapper[4826]: I0131 07:56:14.113715 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4tpf\" (UniqueName: \"kubernetes.io/projected/281ee141-2543-4d23-a1d6-cb0d972a05e6-kube-api-access-q4tpf\") on node \"crc\" DevicePath \"\"" Jan 31 07:56:14 crc kubenswrapper[4826]: I0131 07:56:14.113733 4826 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/281ee141-2543-4d23-a1d6-cb0d972a05e6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 07:56:14 crc kubenswrapper[4826]: I0131 07:56:14.113746 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/281ee141-2543-4d23-a1d6-cb0d972a05e6-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:56:14 crc kubenswrapper[4826]: I0131 07:56:14.113757 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/281ee141-2543-4d23-a1d6-cb0d972a05e6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 07:56:14 crc kubenswrapper[4826]: I0131 07:56:14.206881 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-svkk9"] Jan 31 07:56:14 crc kubenswrapper[4826]: I0131 07:56:14.216317 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-svkk9"] Jan 31 07:56:14 crc kubenswrapper[4826]: I0131 07:56:14.822191 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="281ee141-2543-4d23-a1d6-cb0d972a05e6" path="/var/lib/kubelet/pods/281ee141-2543-4d23-a1d6-cb0d972a05e6/volumes" Jan 31 07:56:14 crc kubenswrapper[4826]: I0131 07:56:14.870163 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"10c57bd7-46e8-437c-9e84-e77bcdf5561a","Type":"ContainerStarted","Data":"732a5f640290317f2bfe781c47445870be3dd7bd088e084530d5376769bfcd5d"} Jan 31 07:56:17 crc kubenswrapper[4826]: I0131 07:56:17.905856 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"10c57bd7-46e8-437c-9e84-e77bcdf5561a","Type":"ContainerStarted","Data":"06cc6e78be576b99e8422a70db7c7aecebe23257abf3be3d2b3a03b0dad0cfaa"} Jan 31 07:56:17 crc kubenswrapper[4826]: I0131 07:56:17.906590 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 07:56:17 crc kubenswrapper[4826]: I0131 07:56:17.928331 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.410569487 podStartE2EDuration="8.92831269s" podCreationTimestamp="2026-01-31 07:56:09 +0000 UTC" firstStartedPulling="2026-01-31 07:56:11.047755909 +0000 UTC m=+1202.901642268" 
lastFinishedPulling="2026-01-31 07:56:17.565499072 +0000 UTC m=+1209.419385471" observedRunningTime="2026-01-31 07:56:17.927455565 +0000 UTC m=+1209.781341924" watchObservedRunningTime="2026-01-31 07:56:17.92831269 +0000 UTC m=+1209.782199059" Jan 31 07:56:18 crc kubenswrapper[4826]: I0131 07:56:18.916918 4826 generic.go:334] "Generic (PLEG): container finished" podID="c688d561-57f0-42dd-9559-ca31e0086d13" containerID="17247f34afb957b4a609780bce8bb7b71620028e3bf0838bc8f96e5b0ae52c94" exitCode=0 Jan 31 07:56:18 crc kubenswrapper[4826]: I0131 07:56:18.916950 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lvhwh" event={"ID":"c688d561-57f0-42dd-9559-ca31e0086d13","Type":"ContainerDied","Data":"17247f34afb957b4a609780bce8bb7b71620028e3bf0838bc8f96e5b0ae52c94"} Jan 31 07:56:20 crc kubenswrapper[4826]: I0131 07:56:20.315646 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lvhwh" Jan 31 07:56:20 crc kubenswrapper[4826]: I0131 07:56:20.440820 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmsc9\" (UniqueName: \"kubernetes.io/projected/c688d561-57f0-42dd-9559-ca31e0086d13-kube-api-access-bmsc9\") pod \"c688d561-57f0-42dd-9559-ca31e0086d13\" (UID: \"c688d561-57f0-42dd-9559-ca31e0086d13\") " Jan 31 07:56:20 crc kubenswrapper[4826]: I0131 07:56:20.441005 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c688d561-57f0-42dd-9559-ca31e0086d13-scripts\") pod \"c688d561-57f0-42dd-9559-ca31e0086d13\" (UID: \"c688d561-57f0-42dd-9559-ca31e0086d13\") " Jan 31 07:56:20 crc kubenswrapper[4826]: I0131 07:56:20.441098 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c688d561-57f0-42dd-9559-ca31e0086d13-config-data\") pod \"c688d561-57f0-42dd-9559-ca31e0086d13\" (UID: \"c688d561-57f0-42dd-9559-ca31e0086d13\") " Jan 31 07:56:20 crc kubenswrapper[4826]: I0131 07:56:20.441141 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c688d561-57f0-42dd-9559-ca31e0086d13-combined-ca-bundle\") pod \"c688d561-57f0-42dd-9559-ca31e0086d13\" (UID: \"c688d561-57f0-42dd-9559-ca31e0086d13\") " Jan 31 07:56:20 crc kubenswrapper[4826]: I0131 07:56:20.446658 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c688d561-57f0-42dd-9559-ca31e0086d13-scripts" (OuterVolumeSpecName: "scripts") pod "c688d561-57f0-42dd-9559-ca31e0086d13" (UID: "c688d561-57f0-42dd-9559-ca31e0086d13"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:56:20 crc kubenswrapper[4826]: I0131 07:56:20.446822 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c688d561-57f0-42dd-9559-ca31e0086d13-kube-api-access-bmsc9" (OuterVolumeSpecName: "kube-api-access-bmsc9") pod "c688d561-57f0-42dd-9559-ca31e0086d13" (UID: "c688d561-57f0-42dd-9559-ca31e0086d13"). InnerVolumeSpecName "kube-api-access-bmsc9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:56:20 crc kubenswrapper[4826]: I0131 07:56:20.469605 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c688d561-57f0-42dd-9559-ca31e0086d13-config-data" (OuterVolumeSpecName: "config-data") pod "c688d561-57f0-42dd-9559-ca31e0086d13" (UID: "c688d561-57f0-42dd-9559-ca31e0086d13"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:56:20 crc kubenswrapper[4826]: I0131 07:56:20.487816 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c688d561-57f0-42dd-9559-ca31e0086d13-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c688d561-57f0-42dd-9559-ca31e0086d13" (UID: "c688d561-57f0-42dd-9559-ca31e0086d13"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:56:20 crc kubenswrapper[4826]: I0131 07:56:20.514671 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 31 07:56:20 crc kubenswrapper[4826]: I0131 07:56:20.514767 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 31 07:56:20 crc kubenswrapper[4826]: I0131 07:56:20.550277 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c688d561-57f0-42dd-9559-ca31e0086d13-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 07:56:20 crc kubenswrapper[4826]: I0131 07:56:20.550328 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c688d561-57f0-42dd-9559-ca31e0086d13-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:56:20 crc kubenswrapper[4826]: I0131 07:56:20.550342 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c688d561-57f0-42dd-9559-ca31e0086d13-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:56:20 crc kubenswrapper[4826]: I0131 07:56:20.550357 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmsc9\" (UniqueName: \"kubernetes.io/projected/c688d561-57f0-42dd-9559-ca31e0086d13-kube-api-access-bmsc9\") on node \"crc\" DevicePath \"\"" Jan 31 07:56:20 crc kubenswrapper[4826]: I0131 07:56:20.935566 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lvhwh" event={"ID":"c688d561-57f0-42dd-9559-ca31e0086d13","Type":"ContainerDied","Data":"6042572169e7d22a24fe74b8372a14a9ce99523e42a1128c70387e6b31ddacd7"} Jan 31 07:56:20 crc kubenswrapper[4826]: I0131 07:56:20.935906 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6042572169e7d22a24fe74b8372a14a9ce99523e42a1128c70387e6b31ddacd7" Jan 31 07:56:20 crc kubenswrapper[4826]: I0131 07:56:20.935611 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lvhwh" Jan 31 07:56:21 crc kubenswrapper[4826]: I0131 07:56:21.115160 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 31 07:56:21 crc kubenswrapper[4826]: I0131 07:56:21.115515 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="bd43c512-3240-42fe-ab90-6ea4d8c8da60" containerName="nova-api-log" containerID="cri-o://18b8bd5e1c631fcaafc41311e64600fa51376e728a96a4b138cb48f4f24a936b" gracePeriod=30 Jan 31 07:56:21 crc kubenswrapper[4826]: I0131 07:56:21.115696 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="bd43c512-3240-42fe-ab90-6ea4d8c8da60" containerName="nova-api-api" containerID="cri-o://91ed0842cca456b1df8d5365289f22ace51936bd5301af4976fe55d96def6a0c" gracePeriod=30 Jan 31 07:56:21 crc kubenswrapper[4826]: I0131 07:56:21.121643 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="bd43c512-3240-42fe-ab90-6ea4d8c8da60" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.191:8774/\": EOF" Jan 31 07:56:21 crc kubenswrapper[4826]: I0131 07:56:21.121648 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="bd43c512-3240-42fe-ab90-6ea4d8c8da60" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.191:8774/\": EOF" Jan 31 07:56:21 crc kubenswrapper[4826]: I0131 07:56:21.124074 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 07:56:21 crc kubenswrapper[4826]: I0131 07:56:21.124275 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="94b63664-e8d7-4f5d-aac0-f00bfadfbfe0" containerName="nova-scheduler-scheduler" containerID="cri-o://b20cd556801543f18675cc8e6c59d8022dc443ed5ac735cd6cb0cd37af7c5119" gracePeriod=30 Jan 31 07:56:21 crc kubenswrapper[4826]: I0131 07:56:21.186327 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 07:56:21 crc kubenswrapper[4826]: I0131 07:56:21.186558 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c3049b66-c738-431f-9c22-ccb9bdd664cd" containerName="nova-metadata-log" containerID="cri-o://0d68bb4c61dc6df19628fa343d96a027d66c350245dc76935402cdda361cf5ee" gracePeriod=30 Jan 31 07:56:21 crc kubenswrapper[4826]: I0131 07:56:21.186634 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c3049b66-c738-431f-9c22-ccb9bdd664cd" containerName="nova-metadata-metadata" containerID="cri-o://f8af4b33c05c58d4aebe8ca495b71c4c1e659fe9d2e0b9159a95897811a18e7b" gracePeriod=30 Jan 31 07:56:21 crc kubenswrapper[4826]: E0131 07:56:21.935510 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b20cd556801543f18675cc8e6c59d8022dc443ed5ac735cd6cb0cd37af7c5119" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 31 07:56:21 crc kubenswrapper[4826]: E0131 07:56:21.938845 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="b20cd556801543f18675cc8e6c59d8022dc443ed5ac735cd6cb0cd37af7c5119" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 31 07:56:21 crc kubenswrapper[4826]: E0131 07:56:21.940351 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b20cd556801543f18675cc8e6c59d8022dc443ed5ac735cd6cb0cd37af7c5119" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 31 07:56:21 crc kubenswrapper[4826]: E0131 07:56:21.940434 4826 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="94b63664-e8d7-4f5d-aac0-f00bfadfbfe0" containerName="nova-scheduler-scheduler" Jan 31 07:56:21 crc kubenswrapper[4826]: I0131 07:56:21.951252 4826 generic.go:334] "Generic (PLEG): container finished" podID="bd43c512-3240-42fe-ab90-6ea4d8c8da60" containerID="18b8bd5e1c631fcaafc41311e64600fa51376e728a96a4b138cb48f4f24a936b" exitCode=143 Jan 31 07:56:21 crc kubenswrapper[4826]: I0131 07:56:21.951340 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bd43c512-3240-42fe-ab90-6ea4d8c8da60","Type":"ContainerDied","Data":"18b8bd5e1c631fcaafc41311e64600fa51376e728a96a4b138cb48f4f24a936b"} Jan 31 07:56:21 crc kubenswrapper[4826]: I0131 07:56:21.953631 4826 generic.go:334] "Generic (PLEG): container finished" podID="c3049b66-c738-431f-9c22-ccb9bdd664cd" containerID="0d68bb4c61dc6df19628fa343d96a027d66c350245dc76935402cdda361cf5ee" exitCode=143 Jan 31 07:56:21 crc kubenswrapper[4826]: I0131 07:56:21.953659 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c3049b66-c738-431f-9c22-ccb9bdd664cd","Type":"ContainerDied","Data":"0d68bb4c61dc6df19628fa343d96a027d66c350245dc76935402cdda361cf5ee"} Jan 31 07:56:24 crc kubenswrapper[4826]: I0131 07:56:24.324050 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="c3049b66-c738-431f-9c22-ccb9bdd664cd" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.182:8775/\": read tcp 10.217.0.2:58110->10.217.0.182:8775: read: connection reset by peer" Jan 31 07:56:24 crc kubenswrapper[4826]: I0131 07:56:24.324049 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="c3049b66-c738-431f-9c22-ccb9bdd664cd" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.182:8775/\": read tcp 10.217.0.2:58122->10.217.0.182:8775: read: connection reset by peer" Jan 31 07:56:24 crc kubenswrapper[4826]: I0131 07:56:24.981707 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 07:56:24 crc kubenswrapper[4826]: I0131 07:56:24.984193 4826 generic.go:334] "Generic (PLEG): container finished" podID="c3049b66-c738-431f-9c22-ccb9bdd664cd" containerID="f8af4b33c05c58d4aebe8ca495b71c4c1e659fe9d2e0b9159a95897811a18e7b" exitCode=0 Jan 31 07:56:24 crc kubenswrapper[4826]: I0131 07:56:24.984236 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c3049b66-c738-431f-9c22-ccb9bdd664cd","Type":"ContainerDied","Data":"f8af4b33c05c58d4aebe8ca495b71c4c1e659fe9d2e0b9159a95897811a18e7b"} Jan 31 07:56:24 crc kubenswrapper[4826]: I0131 07:56:24.984269 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c3049b66-c738-431f-9c22-ccb9bdd664cd","Type":"ContainerDied","Data":"e6e0e06301daf60b87e8b52955571169bd7f2684e19682aa07b21a8649330c89"} Jan 31 07:56:24 crc kubenswrapper[4826]: I0131 07:56:24.984295 4826 scope.go:117] "RemoveContainer" containerID="f8af4b33c05c58d4aebe8ca495b71c4c1e659fe9d2e0b9159a95897811a18e7b" Jan 31 07:56:25 crc kubenswrapper[4826]: I0131 07:56:25.009669 4826 scope.go:117] "RemoveContainer" containerID="0d68bb4c61dc6df19628fa343d96a027d66c350245dc76935402cdda361cf5ee" Jan 31 07:56:25 crc kubenswrapper[4826]: I0131 07:56:25.033415 4826 scope.go:117] "RemoveContainer" containerID="f8af4b33c05c58d4aebe8ca495b71c4c1e659fe9d2e0b9159a95897811a18e7b" Jan 31 07:56:25 crc kubenswrapper[4826]: E0131 07:56:25.035523 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8af4b33c05c58d4aebe8ca495b71c4c1e659fe9d2e0b9159a95897811a18e7b\": container with ID starting with f8af4b33c05c58d4aebe8ca495b71c4c1e659fe9d2e0b9159a95897811a18e7b not found: ID does not exist" containerID="f8af4b33c05c58d4aebe8ca495b71c4c1e659fe9d2e0b9159a95897811a18e7b" Jan 31 07:56:25 crc kubenswrapper[4826]: I0131 07:56:25.035578 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8af4b33c05c58d4aebe8ca495b71c4c1e659fe9d2e0b9159a95897811a18e7b"} err="failed to get container status \"f8af4b33c05c58d4aebe8ca495b71c4c1e659fe9d2e0b9159a95897811a18e7b\": rpc error: code = NotFound desc = could not find container \"f8af4b33c05c58d4aebe8ca495b71c4c1e659fe9d2e0b9159a95897811a18e7b\": container with ID starting with f8af4b33c05c58d4aebe8ca495b71c4c1e659fe9d2e0b9159a95897811a18e7b not found: ID does not exist" Jan 31 07:56:25 crc kubenswrapper[4826]: I0131 07:56:25.035600 4826 scope.go:117] "RemoveContainer" containerID="0d68bb4c61dc6df19628fa343d96a027d66c350245dc76935402cdda361cf5ee" Jan 31 07:56:25 crc kubenswrapper[4826]: E0131 07:56:25.036033 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d68bb4c61dc6df19628fa343d96a027d66c350245dc76935402cdda361cf5ee\": container with ID starting with 0d68bb4c61dc6df19628fa343d96a027d66c350245dc76935402cdda361cf5ee not found: ID does not exist" containerID="0d68bb4c61dc6df19628fa343d96a027d66c350245dc76935402cdda361cf5ee" Jan 31 07:56:25 crc kubenswrapper[4826]: I0131 07:56:25.036064 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d68bb4c61dc6df19628fa343d96a027d66c350245dc76935402cdda361cf5ee"} err="failed to get container status \"0d68bb4c61dc6df19628fa343d96a027d66c350245dc76935402cdda361cf5ee\": rpc error: code = NotFound desc = could not find container 
\"0d68bb4c61dc6df19628fa343d96a027d66c350245dc76935402cdda361cf5ee\": container with ID starting with 0d68bb4c61dc6df19628fa343d96a027d66c350245dc76935402cdda361cf5ee not found: ID does not exist" Jan 31 07:56:25 crc kubenswrapper[4826]: I0131 07:56:25.136938 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3049b66-c738-431f-9c22-ccb9bdd664cd-config-data\") pod \"c3049b66-c738-431f-9c22-ccb9bdd664cd\" (UID: \"c3049b66-c738-431f-9c22-ccb9bdd664cd\") " Jan 31 07:56:25 crc kubenswrapper[4826]: I0131 07:56:25.137092 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3049b66-c738-431f-9c22-ccb9bdd664cd-nova-metadata-tls-certs\") pod \"c3049b66-c738-431f-9c22-ccb9bdd664cd\" (UID: \"c3049b66-c738-431f-9c22-ccb9bdd664cd\") " Jan 31 07:56:25 crc kubenswrapper[4826]: I0131 07:56:25.137171 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9bt6\" (UniqueName: \"kubernetes.io/projected/c3049b66-c738-431f-9c22-ccb9bdd664cd-kube-api-access-l9bt6\") pod \"c3049b66-c738-431f-9c22-ccb9bdd664cd\" (UID: \"c3049b66-c738-431f-9c22-ccb9bdd664cd\") " Jan 31 07:56:25 crc kubenswrapper[4826]: I0131 07:56:25.137230 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3049b66-c738-431f-9c22-ccb9bdd664cd-logs\") pod \"c3049b66-c738-431f-9c22-ccb9bdd664cd\" (UID: \"c3049b66-c738-431f-9c22-ccb9bdd664cd\") " Jan 31 07:56:25 crc kubenswrapper[4826]: I0131 07:56:25.137276 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3049b66-c738-431f-9c22-ccb9bdd664cd-combined-ca-bundle\") pod \"c3049b66-c738-431f-9c22-ccb9bdd664cd\" (UID: \"c3049b66-c738-431f-9c22-ccb9bdd664cd\") " Jan 31 07:56:25 crc kubenswrapper[4826]: I0131 07:56:25.137913 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3049b66-c738-431f-9c22-ccb9bdd664cd-logs" (OuterVolumeSpecName: "logs") pod "c3049b66-c738-431f-9c22-ccb9bdd664cd" (UID: "c3049b66-c738-431f-9c22-ccb9bdd664cd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:56:25 crc kubenswrapper[4826]: I0131 07:56:25.141956 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3049b66-c738-431f-9c22-ccb9bdd664cd-kube-api-access-l9bt6" (OuterVolumeSpecName: "kube-api-access-l9bt6") pod "c3049b66-c738-431f-9c22-ccb9bdd664cd" (UID: "c3049b66-c738-431f-9c22-ccb9bdd664cd"). InnerVolumeSpecName "kube-api-access-l9bt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:56:25 crc kubenswrapper[4826]: I0131 07:56:25.164567 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3049b66-c738-431f-9c22-ccb9bdd664cd-config-data" (OuterVolumeSpecName: "config-data") pod "c3049b66-c738-431f-9c22-ccb9bdd664cd" (UID: "c3049b66-c738-431f-9c22-ccb9bdd664cd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:56:25 crc kubenswrapper[4826]: I0131 07:56:25.166105 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3049b66-c738-431f-9c22-ccb9bdd664cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c3049b66-c738-431f-9c22-ccb9bdd664cd" (UID: "c3049b66-c738-431f-9c22-ccb9bdd664cd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:56:25 crc kubenswrapper[4826]: I0131 07:56:25.189135 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3049b66-c738-431f-9c22-ccb9bdd664cd-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "c3049b66-c738-431f-9c22-ccb9bdd664cd" (UID: "c3049b66-c738-431f-9c22-ccb9bdd664cd"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:56:25 crc kubenswrapper[4826]: I0131 07:56:25.239880 4826 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3049b66-c738-431f-9c22-ccb9bdd664cd-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 07:56:25 crc kubenswrapper[4826]: I0131 07:56:25.239928 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9bt6\" (UniqueName: \"kubernetes.io/projected/c3049b66-c738-431f-9c22-ccb9bdd664cd-kube-api-access-l9bt6\") on node \"crc\" DevicePath \"\"" Jan 31 07:56:25 crc kubenswrapper[4826]: I0131 07:56:25.239940 4826 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3049b66-c738-431f-9c22-ccb9bdd664cd-logs\") on node \"crc\" DevicePath \"\"" Jan 31 07:56:25 crc kubenswrapper[4826]: I0131 07:56:25.239952 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3049b66-c738-431f-9c22-ccb9bdd664cd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:56:25 crc kubenswrapper[4826]: I0131 07:56:25.239980 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3049b66-c738-431f-9c22-ccb9bdd664cd-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:56:26 crc kubenswrapper[4826]: I0131 07:56:26.001201 4826 generic.go:334] "Generic (PLEG): container finished" podID="94b63664-e8d7-4f5d-aac0-f00bfadfbfe0" containerID="b20cd556801543f18675cc8e6c59d8022dc443ed5ac735cd6cb0cd37af7c5119" exitCode=0 Jan 31 07:56:26 crc kubenswrapper[4826]: I0131 07:56:26.001299 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"94b63664-e8d7-4f5d-aac0-f00bfadfbfe0","Type":"ContainerDied","Data":"b20cd556801543f18675cc8e6c59d8022dc443ed5ac735cd6cb0cd37af7c5119"} Jan 31 07:56:26 crc kubenswrapper[4826]: I0131 07:56:26.005190 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 07:56:26 crc kubenswrapper[4826]: I0131 07:56:26.041073 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 07:56:26 crc kubenswrapper[4826]: I0131 07:56:26.056011 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 07:56:26 crc kubenswrapper[4826]: I0131 07:56:26.069360 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 31 07:56:26 crc kubenswrapper[4826]: E0131 07:56:26.069898 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3049b66-c738-431f-9c22-ccb9bdd664cd" containerName="nova-metadata-log" Jan 31 07:56:26 crc kubenswrapper[4826]: I0131 07:56:26.069926 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3049b66-c738-431f-9c22-ccb9bdd664cd" containerName="nova-metadata-log" Jan 31 07:56:26 crc kubenswrapper[4826]: E0131 07:56:26.069990 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="281ee141-2543-4d23-a1d6-cb0d972a05e6" containerName="dnsmasq-dns" Jan 31 07:56:26 crc kubenswrapper[4826]: I0131 07:56:26.070003 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="281ee141-2543-4d23-a1d6-cb0d972a05e6" containerName="dnsmasq-dns" Jan 31 07:56:26 crc kubenswrapper[4826]: E0131 07:56:26.070020 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c688d561-57f0-42dd-9559-ca31e0086d13" containerName="nova-manage" Jan 31 07:56:26 crc kubenswrapper[4826]: I0131 07:56:26.070031 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="c688d561-57f0-42dd-9559-ca31e0086d13" containerName="nova-manage" Jan 31 07:56:26 crc kubenswrapper[4826]: E0131 07:56:26.070062 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="281ee141-2543-4d23-a1d6-cb0d972a05e6" containerName="init" Jan 31 07:56:26 crc kubenswrapper[4826]: I0131 07:56:26.070074 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="281ee141-2543-4d23-a1d6-cb0d972a05e6" containerName="init" Jan 31 07:56:26 crc kubenswrapper[4826]: E0131 07:56:26.070093 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3049b66-c738-431f-9c22-ccb9bdd664cd" containerName="nova-metadata-metadata" Jan 31 07:56:26 crc kubenswrapper[4826]: I0131 07:56:26.070104 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3049b66-c738-431f-9c22-ccb9bdd664cd" containerName="nova-metadata-metadata" Jan 31 07:56:26 crc kubenswrapper[4826]: I0131 07:56:26.070360 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3049b66-c738-431f-9c22-ccb9bdd664cd" containerName="nova-metadata-log" Jan 31 07:56:26 crc kubenswrapper[4826]: I0131 07:56:26.070384 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="281ee141-2543-4d23-a1d6-cb0d972a05e6" containerName="dnsmasq-dns" Jan 31 07:56:26 crc kubenswrapper[4826]: I0131 07:56:26.070405 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="c688d561-57f0-42dd-9559-ca31e0086d13" containerName="nova-manage" Jan 31 07:56:26 crc kubenswrapper[4826]: I0131 07:56:26.070418 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3049b66-c738-431f-9c22-ccb9bdd664cd" containerName="nova-metadata-metadata" Jan 31 07:56:26 crc kubenswrapper[4826]: I0131 07:56:26.071709 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 07:56:26 crc kubenswrapper[4826]: I0131 07:56:26.075902 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 31 07:56:26 crc kubenswrapper[4826]: I0131 07:56:26.076206 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 31 07:56:26 crc kubenswrapper[4826]: I0131 07:56:26.076658 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 07:56:26 crc kubenswrapper[4826]: I0131 07:56:26.158726 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aebb981f-3b13-4115-a5b0-1d4942789f7e-logs\") pod \"nova-metadata-0\" (UID: \"aebb981f-3b13-4115-a5b0-1d4942789f7e\") " pod="openstack/nova-metadata-0" Jan 31 07:56:26 crc kubenswrapper[4826]: I0131 07:56:26.158791 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c957\" (UniqueName: \"kubernetes.io/projected/aebb981f-3b13-4115-a5b0-1d4942789f7e-kube-api-access-2c957\") pod \"nova-metadata-0\" (UID: \"aebb981f-3b13-4115-a5b0-1d4942789f7e\") " pod="openstack/nova-metadata-0" Jan 31 07:56:26 crc kubenswrapper[4826]: I0131 07:56:26.158876 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aebb981f-3b13-4115-a5b0-1d4942789f7e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"aebb981f-3b13-4115-a5b0-1d4942789f7e\") " pod="openstack/nova-metadata-0" Jan 31 07:56:26 crc kubenswrapper[4826]: I0131 07:56:26.158919 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/aebb981f-3b13-4115-a5b0-1d4942789f7e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"aebb981f-3b13-4115-a5b0-1d4942789f7e\") " pod="openstack/nova-metadata-0" Jan 31 07:56:26 crc kubenswrapper[4826]: I0131 07:56:26.159002 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aebb981f-3b13-4115-a5b0-1d4942789f7e-config-data\") pod \"nova-metadata-0\" (UID: \"aebb981f-3b13-4115-a5b0-1d4942789f7e\") " pod="openstack/nova-metadata-0" Jan 31 07:56:26 crc kubenswrapper[4826]: I0131 07:56:26.260535 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2c957\" (UniqueName: \"kubernetes.io/projected/aebb981f-3b13-4115-a5b0-1d4942789f7e-kube-api-access-2c957\") pod \"nova-metadata-0\" (UID: \"aebb981f-3b13-4115-a5b0-1d4942789f7e\") " pod="openstack/nova-metadata-0" Jan 31 07:56:26 crc kubenswrapper[4826]: I0131 07:56:26.260670 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aebb981f-3b13-4115-a5b0-1d4942789f7e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"aebb981f-3b13-4115-a5b0-1d4942789f7e\") " pod="openstack/nova-metadata-0" Jan 31 07:56:26 crc kubenswrapper[4826]: I0131 07:56:26.260733 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/aebb981f-3b13-4115-a5b0-1d4942789f7e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: 
\"aebb981f-3b13-4115-a5b0-1d4942789f7e\") " pod="openstack/nova-metadata-0" Jan 31 07:56:26 crc kubenswrapper[4826]: I0131 07:56:26.260811 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aebb981f-3b13-4115-a5b0-1d4942789f7e-config-data\") pod \"nova-metadata-0\" (UID: \"aebb981f-3b13-4115-a5b0-1d4942789f7e\") " pod="openstack/nova-metadata-0" Jan 31 07:56:26 crc kubenswrapper[4826]: I0131 07:56:26.260840 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aebb981f-3b13-4115-a5b0-1d4942789f7e-logs\") pod \"nova-metadata-0\" (UID: \"aebb981f-3b13-4115-a5b0-1d4942789f7e\") " pod="openstack/nova-metadata-0" Jan 31 07:56:26 crc kubenswrapper[4826]: I0131 07:56:26.261392 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aebb981f-3b13-4115-a5b0-1d4942789f7e-logs\") pod \"nova-metadata-0\" (UID: \"aebb981f-3b13-4115-a5b0-1d4942789f7e\") " pod="openstack/nova-metadata-0" Jan 31 07:56:26 crc kubenswrapper[4826]: I0131 07:56:26.266395 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/aebb981f-3b13-4115-a5b0-1d4942789f7e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"aebb981f-3b13-4115-a5b0-1d4942789f7e\") " pod="openstack/nova-metadata-0" Jan 31 07:56:26 crc kubenswrapper[4826]: I0131 07:56:26.270830 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aebb981f-3b13-4115-a5b0-1d4942789f7e-config-data\") pod \"nova-metadata-0\" (UID: \"aebb981f-3b13-4115-a5b0-1d4942789f7e\") " pod="openstack/nova-metadata-0" Jan 31 07:56:26 crc kubenswrapper[4826]: I0131 07:56:26.271449 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aebb981f-3b13-4115-a5b0-1d4942789f7e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"aebb981f-3b13-4115-a5b0-1d4942789f7e\") " pod="openstack/nova-metadata-0" Jan 31 07:56:26 crc kubenswrapper[4826]: I0131 07:56:26.278101 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2c957\" (UniqueName: \"kubernetes.io/projected/aebb981f-3b13-4115-a5b0-1d4942789f7e-kube-api-access-2c957\") pod \"nova-metadata-0\" (UID: \"aebb981f-3b13-4115-a5b0-1d4942789f7e\") " pod="openstack/nova-metadata-0" Jan 31 07:56:26 crc kubenswrapper[4826]: I0131 07:56:26.353635 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 07:56:26 crc kubenswrapper[4826]: I0131 07:56:26.404795 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 07:56:26 crc kubenswrapper[4826]: I0131 07:56:26.465111 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94b63664-e8d7-4f5d-aac0-f00bfadfbfe0-combined-ca-bundle\") pod \"94b63664-e8d7-4f5d-aac0-f00bfadfbfe0\" (UID: \"94b63664-e8d7-4f5d-aac0-f00bfadfbfe0\") " Jan 31 07:56:26 crc kubenswrapper[4826]: I0131 07:56:26.465181 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94b63664-e8d7-4f5d-aac0-f00bfadfbfe0-config-data\") pod \"94b63664-e8d7-4f5d-aac0-f00bfadfbfe0\" (UID: \"94b63664-e8d7-4f5d-aac0-f00bfadfbfe0\") " Jan 31 07:56:26 crc kubenswrapper[4826]: I0131 07:56:26.465263 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lv9n9\" (UniqueName: \"kubernetes.io/projected/94b63664-e8d7-4f5d-aac0-f00bfadfbfe0-kube-api-access-lv9n9\") pod \"94b63664-e8d7-4f5d-aac0-f00bfadfbfe0\" (UID: \"94b63664-e8d7-4f5d-aac0-f00bfadfbfe0\") " Jan 31 07:56:26 crc kubenswrapper[4826]: I0131 07:56:26.468640 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94b63664-e8d7-4f5d-aac0-f00bfadfbfe0-kube-api-access-lv9n9" (OuterVolumeSpecName: "kube-api-access-lv9n9") pod "94b63664-e8d7-4f5d-aac0-f00bfadfbfe0" (UID: "94b63664-e8d7-4f5d-aac0-f00bfadfbfe0"). InnerVolumeSpecName "kube-api-access-lv9n9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:56:26 crc kubenswrapper[4826]: I0131 07:56:26.511127 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94b63664-e8d7-4f5d-aac0-f00bfadfbfe0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "94b63664-e8d7-4f5d-aac0-f00bfadfbfe0" (UID: "94b63664-e8d7-4f5d-aac0-f00bfadfbfe0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:56:26 crc kubenswrapper[4826]: I0131 07:56:26.521891 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94b63664-e8d7-4f5d-aac0-f00bfadfbfe0-config-data" (OuterVolumeSpecName: "config-data") pod "94b63664-e8d7-4f5d-aac0-f00bfadfbfe0" (UID: "94b63664-e8d7-4f5d-aac0-f00bfadfbfe0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:56:26 crc kubenswrapper[4826]: I0131 07:56:26.568017 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94b63664-e8d7-4f5d-aac0-f00bfadfbfe0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:56:26 crc kubenswrapper[4826]: I0131 07:56:26.568046 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94b63664-e8d7-4f5d-aac0-f00bfadfbfe0-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:56:26 crc kubenswrapper[4826]: I0131 07:56:26.568058 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lv9n9\" (UniqueName: \"kubernetes.io/projected/94b63664-e8d7-4f5d-aac0-f00bfadfbfe0-kube-api-access-lv9n9\") on node \"crc\" DevicePath \"\"" Jan 31 07:56:26 crc kubenswrapper[4826]: I0131 07:56:26.825660 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3049b66-c738-431f-9c22-ccb9bdd664cd" path="/var/lib/kubelet/pods/c3049b66-c738-431f-9c22-ccb9bdd664cd/volumes" Jan 31 07:56:26 crc kubenswrapper[4826]: I0131 07:56:26.865188 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 07:56:26 crc kubenswrapper[4826]: W0131 07:56:26.881890 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaebb981f_3b13_4115_a5b0_1d4942789f7e.slice/crio-9078cd5d7d807f5359f5d055404af5eb8b23c3ec626edd4c5f24a0e5a952daf8 WatchSource:0}: Error finding container 9078cd5d7d807f5359f5d055404af5eb8b23c3ec626edd4c5f24a0e5a952daf8: Status 404 returned error can't find the container with id 9078cd5d7d807f5359f5d055404af5eb8b23c3ec626edd4c5f24a0e5a952daf8 Jan 31 07:56:27 crc kubenswrapper[4826]: I0131 07:56:27.014990 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"94b63664-e8d7-4f5d-aac0-f00bfadfbfe0","Type":"ContainerDied","Data":"4bcb0c00a7e010ba9f8275d7bdb37b8072b270652a2c7dff5065f35f33e3b005"} Jan 31 07:56:27 crc kubenswrapper[4826]: I0131 07:56:27.015029 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 07:56:27 crc kubenswrapper[4826]: I0131 07:56:27.015038 4826 scope.go:117] "RemoveContainer" containerID="b20cd556801543f18675cc8e6c59d8022dc443ed5ac735cd6cb0cd37af7c5119" Jan 31 07:56:27 crc kubenswrapper[4826]: I0131 07:56:27.019098 4826 generic.go:334] "Generic (PLEG): container finished" podID="bd43c512-3240-42fe-ab90-6ea4d8c8da60" containerID="91ed0842cca456b1df8d5365289f22ace51936bd5301af4976fe55d96def6a0c" exitCode=0 Jan 31 07:56:27 crc kubenswrapper[4826]: I0131 07:56:27.019193 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bd43c512-3240-42fe-ab90-6ea4d8c8da60","Type":"ContainerDied","Data":"91ed0842cca456b1df8d5365289f22ace51936bd5301af4976fe55d96def6a0c"} Jan 31 07:56:27 crc kubenswrapper[4826]: I0131 07:56:27.019248 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bd43c512-3240-42fe-ab90-6ea4d8c8da60","Type":"ContainerDied","Data":"c7cb4dcaddeb92c224d497cf1142a4c589ddc7b8643970060e99238150fbd5ff"} Jan 31 07:56:27 crc kubenswrapper[4826]: I0131 07:56:27.019265 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7cb4dcaddeb92c224d497cf1142a4c589ddc7b8643970060e99238150fbd5ff" Jan 31 07:56:27 crc kubenswrapper[4826]: I0131 07:56:27.020659 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aebb981f-3b13-4115-a5b0-1d4942789f7e","Type":"ContainerStarted","Data":"9078cd5d7d807f5359f5d055404af5eb8b23c3ec626edd4c5f24a0e5a952daf8"} Jan 31 07:56:27 crc kubenswrapper[4826]: I0131 07:56:27.034879 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 31 07:56:27 crc kubenswrapper[4826]: I0131 07:56:27.064946 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 07:56:27 crc kubenswrapper[4826]: I0131 07:56:27.082433 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 07:56:27 crc kubenswrapper[4826]: I0131 07:56:27.092999 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 07:56:27 crc kubenswrapper[4826]: E0131 07:56:27.093423 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd43c512-3240-42fe-ab90-6ea4d8c8da60" containerName="nova-api-api" Jan 31 07:56:27 crc kubenswrapper[4826]: I0131 07:56:27.093442 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd43c512-3240-42fe-ab90-6ea4d8c8da60" containerName="nova-api-api" Jan 31 07:56:27 crc kubenswrapper[4826]: E0131 07:56:27.093454 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd43c512-3240-42fe-ab90-6ea4d8c8da60" containerName="nova-api-log" Jan 31 07:56:27 crc kubenswrapper[4826]: I0131 07:56:27.093465 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd43c512-3240-42fe-ab90-6ea4d8c8da60" containerName="nova-api-log" Jan 31 07:56:27 crc kubenswrapper[4826]: E0131 07:56:27.093476 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94b63664-e8d7-4f5d-aac0-f00bfadfbfe0" containerName="nova-scheduler-scheduler" Jan 31 07:56:27 crc kubenswrapper[4826]: I0131 07:56:27.093483 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="94b63664-e8d7-4f5d-aac0-f00bfadfbfe0" containerName="nova-scheduler-scheduler" Jan 31 07:56:27 crc kubenswrapper[4826]: I0131 07:56:27.093651 4826 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="bd43c512-3240-42fe-ab90-6ea4d8c8da60" containerName="nova-api-api" Jan 31 07:56:27 crc kubenswrapper[4826]: I0131 07:56:27.093665 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="94b63664-e8d7-4f5d-aac0-f00bfadfbfe0" containerName="nova-scheduler-scheduler" Jan 31 07:56:27 crc kubenswrapper[4826]: I0131 07:56:27.093677 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd43c512-3240-42fe-ab90-6ea4d8c8da60" containerName="nova-api-log" Jan 31 07:56:27 crc kubenswrapper[4826]: I0131 07:56:27.094351 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 07:56:27 crc kubenswrapper[4826]: I0131 07:56:27.096296 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 31 07:56:27 crc kubenswrapper[4826]: I0131 07:56:27.118108 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 07:56:27 crc kubenswrapper[4826]: I0131 07:56:27.177654 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd43c512-3240-42fe-ab90-6ea4d8c8da60-public-tls-certs\") pod \"bd43c512-3240-42fe-ab90-6ea4d8c8da60\" (UID: \"bd43c512-3240-42fe-ab90-6ea4d8c8da60\") " Jan 31 07:56:27 crc kubenswrapper[4826]: I0131 07:56:27.177747 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd43c512-3240-42fe-ab90-6ea4d8c8da60-config-data\") pod \"bd43c512-3240-42fe-ab90-6ea4d8c8da60\" (UID: \"bd43c512-3240-42fe-ab90-6ea4d8c8da60\") " Jan 31 07:56:27 crc kubenswrapper[4826]: I0131 07:56:27.177767 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd43c512-3240-42fe-ab90-6ea4d8c8da60-internal-tls-certs\") pod \"bd43c512-3240-42fe-ab90-6ea4d8c8da60\" (UID: \"bd43c512-3240-42fe-ab90-6ea4d8c8da60\") " Jan 31 07:56:27 crc kubenswrapper[4826]: I0131 07:56:27.177814 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zxvf\" (UniqueName: \"kubernetes.io/projected/bd43c512-3240-42fe-ab90-6ea4d8c8da60-kube-api-access-6zxvf\") pod \"bd43c512-3240-42fe-ab90-6ea4d8c8da60\" (UID: \"bd43c512-3240-42fe-ab90-6ea4d8c8da60\") " Jan 31 07:56:27 crc kubenswrapper[4826]: I0131 07:56:27.177848 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd43c512-3240-42fe-ab90-6ea4d8c8da60-combined-ca-bundle\") pod \"bd43c512-3240-42fe-ab90-6ea4d8c8da60\" (UID: \"bd43c512-3240-42fe-ab90-6ea4d8c8da60\") " Jan 31 07:56:27 crc kubenswrapper[4826]: I0131 07:56:27.177913 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd43c512-3240-42fe-ab90-6ea4d8c8da60-logs\") pod \"bd43c512-3240-42fe-ab90-6ea4d8c8da60\" (UID: \"bd43c512-3240-42fe-ab90-6ea4d8c8da60\") " Jan 31 07:56:27 crc kubenswrapper[4826]: I0131 07:56:27.178313 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv2fp\" (UniqueName: \"kubernetes.io/projected/38bbbb8c-80f6-4950-acea-0d800baa1857-kube-api-access-dv2fp\") pod \"nova-scheduler-0\" (UID: \"38bbbb8c-80f6-4950-acea-0d800baa1857\") " pod="openstack/nova-scheduler-0" Jan 31 07:56:27 crc kubenswrapper[4826]: I0131 
07:56:27.178396 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38bbbb8c-80f6-4950-acea-0d800baa1857-config-data\") pod \"nova-scheduler-0\" (UID: \"38bbbb8c-80f6-4950-acea-0d800baa1857\") " pod="openstack/nova-scheduler-0" Jan 31 07:56:27 crc kubenswrapper[4826]: I0131 07:56:27.178441 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38bbbb8c-80f6-4950-acea-0d800baa1857-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"38bbbb8c-80f6-4950-acea-0d800baa1857\") " pod="openstack/nova-scheduler-0" Jan 31 07:56:27 crc kubenswrapper[4826]: I0131 07:56:27.178914 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd43c512-3240-42fe-ab90-6ea4d8c8da60-logs" (OuterVolumeSpecName: "logs") pod "bd43c512-3240-42fe-ab90-6ea4d8c8da60" (UID: "bd43c512-3240-42fe-ab90-6ea4d8c8da60"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:56:27 crc kubenswrapper[4826]: I0131 07:56:27.183269 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd43c512-3240-42fe-ab90-6ea4d8c8da60-kube-api-access-6zxvf" (OuterVolumeSpecName: "kube-api-access-6zxvf") pod "bd43c512-3240-42fe-ab90-6ea4d8c8da60" (UID: "bd43c512-3240-42fe-ab90-6ea4d8c8da60"). InnerVolumeSpecName "kube-api-access-6zxvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:56:27 crc kubenswrapper[4826]: I0131 07:56:27.203183 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd43c512-3240-42fe-ab90-6ea4d8c8da60-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bd43c512-3240-42fe-ab90-6ea4d8c8da60" (UID: "bd43c512-3240-42fe-ab90-6ea4d8c8da60"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:56:27 crc kubenswrapper[4826]: I0131 07:56:27.207777 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd43c512-3240-42fe-ab90-6ea4d8c8da60-config-data" (OuterVolumeSpecName: "config-data") pod "bd43c512-3240-42fe-ab90-6ea4d8c8da60" (UID: "bd43c512-3240-42fe-ab90-6ea4d8c8da60"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:56:27 crc kubenswrapper[4826]: I0131 07:56:27.233088 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd43c512-3240-42fe-ab90-6ea4d8c8da60-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "bd43c512-3240-42fe-ab90-6ea4d8c8da60" (UID: "bd43c512-3240-42fe-ab90-6ea4d8c8da60"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:56:27 crc kubenswrapper[4826]: I0131 07:56:27.233792 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd43c512-3240-42fe-ab90-6ea4d8c8da60-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "bd43c512-3240-42fe-ab90-6ea4d8c8da60" (UID: "bd43c512-3240-42fe-ab90-6ea4d8c8da60"). InnerVolumeSpecName "public-tls-certs". 
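
The reconciler entries around 07:56:27 interleave two pods: the old nova-api-0 pod (UID bd43c512-3240-42fe-ab90-6ea4d8c8da60) having its secret, projected, and empty-dir volumes torn down, and the replacement nova-scheduler-0 pod (UID 38bbbb8c-80f6-4950-acea-0d800baa1857) having its volumes attached and mounted. When a dump like this gets long it can help to pull out only the volume-lifecycle messages for one UID. Below is a minimal, hypothetical Go sketch for doing that against a saved copy of this journal; it is not part of kubelet, and the file handling and CLI shape are assumptions, while the matched phrases are taken verbatim from the reconciler/operation_generator messages above.

```go
// volgrep.go - a minimal sketch (not part of kubelet) that filters a saved
// journal dump for the volume-lifecycle messages of a single pod UID.
// Assumption: the journal was exported to a text file and is piped on stdin:
//   go run volgrep.go bd43c512-3240-42fe-ab90-6ea4d8c8da60 < kubelet.log
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: volgrep <pod-uid> < kubelet.log")
		os.Exit(1)
	}
	uid := os.Args[1]
	// These phrases appear verbatim in the reconciler/operation_generator
	// entries shown in this log excerpt.
	markers := []string{
		"UnmountVolume started",
		"UnmountVolume.TearDown succeeded",
		"Volume detached",
		"VerifyControllerAttachedVolume started",
		"MountVolume started",
		"MountVolume.SetUp succeeded",
	}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		line := sc.Text()
		if !strings.Contains(line, uid) {
			continue
		}
		for _, m := range markers {
			if strings.Contains(line, m) {
				fmt.Println(line)
				break
			}
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "read error:", err)
		os.Exit(1)
	}
}
```
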
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:56:27 crc kubenswrapper[4826]: I0131 07:56:27.280132 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dv2fp\" (UniqueName: \"kubernetes.io/projected/38bbbb8c-80f6-4950-acea-0d800baa1857-kube-api-access-dv2fp\") pod \"nova-scheduler-0\" (UID: \"38bbbb8c-80f6-4950-acea-0d800baa1857\") " pod="openstack/nova-scheduler-0" Jan 31 07:56:27 crc kubenswrapper[4826]: I0131 07:56:27.280238 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38bbbb8c-80f6-4950-acea-0d800baa1857-config-data\") pod \"nova-scheduler-0\" (UID: \"38bbbb8c-80f6-4950-acea-0d800baa1857\") " pod="openstack/nova-scheduler-0" Jan 31 07:56:27 crc kubenswrapper[4826]: I0131 07:56:27.280318 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38bbbb8c-80f6-4950-acea-0d800baa1857-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"38bbbb8c-80f6-4950-acea-0d800baa1857\") " pod="openstack/nova-scheduler-0" Jan 31 07:56:27 crc kubenswrapper[4826]: I0131 07:56:27.280418 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd43c512-3240-42fe-ab90-6ea4d8c8da60-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:56:27 crc kubenswrapper[4826]: I0131 07:56:27.280435 4826 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd43c512-3240-42fe-ab90-6ea4d8c8da60-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 07:56:27 crc kubenswrapper[4826]: I0131 07:56:27.280449 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zxvf\" (UniqueName: \"kubernetes.io/projected/bd43c512-3240-42fe-ab90-6ea4d8c8da60-kube-api-access-6zxvf\") on node \"crc\" DevicePath \"\"" Jan 31 07:56:27 crc kubenswrapper[4826]: I0131 07:56:27.280462 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd43c512-3240-42fe-ab90-6ea4d8c8da60-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:56:27 crc kubenswrapper[4826]: I0131 07:56:27.280473 4826 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd43c512-3240-42fe-ab90-6ea4d8c8da60-logs\") on node \"crc\" DevicePath \"\"" Jan 31 07:56:27 crc kubenswrapper[4826]: I0131 07:56:27.280484 4826 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd43c512-3240-42fe-ab90-6ea4d8c8da60-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 07:56:27 crc kubenswrapper[4826]: I0131 07:56:27.284212 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38bbbb8c-80f6-4950-acea-0d800baa1857-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"38bbbb8c-80f6-4950-acea-0d800baa1857\") " pod="openstack/nova-scheduler-0" Jan 31 07:56:27 crc kubenswrapper[4826]: I0131 07:56:27.285226 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38bbbb8c-80f6-4950-acea-0d800baa1857-config-data\") pod \"nova-scheduler-0\" (UID: \"38bbbb8c-80f6-4950-acea-0d800baa1857\") " pod="openstack/nova-scheduler-0" Jan 31 07:56:27 crc kubenswrapper[4826]: I0131 07:56:27.295860 4826 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dv2fp\" (UniqueName: \"kubernetes.io/projected/38bbbb8c-80f6-4950-acea-0d800baa1857-kube-api-access-dv2fp\") pod \"nova-scheduler-0\" (UID: \"38bbbb8c-80f6-4950-acea-0d800baa1857\") " pod="openstack/nova-scheduler-0" Jan 31 07:56:27 crc kubenswrapper[4826]: I0131 07:56:27.377619 4826 patch_prober.go:28] interesting pod/machine-config-daemon-8v6ng container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 07:56:27 crc kubenswrapper[4826]: I0131 07:56:27.377682 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 07:56:27 crc kubenswrapper[4826]: I0131 07:56:27.417837 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 07:56:27 crc kubenswrapper[4826]: I0131 07:56:27.907756 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 07:56:28 crc kubenswrapper[4826]: I0131 07:56:28.035743 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"38bbbb8c-80f6-4950-acea-0d800baa1857","Type":"ContainerStarted","Data":"667e89220126913a59e35d4d1ef459e7bc48dffc239ebcbc7981bb5d5cc5074c"} Jan 31 07:56:28 crc kubenswrapper[4826]: I0131 07:56:28.037631 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aebb981f-3b13-4115-a5b0-1d4942789f7e","Type":"ContainerStarted","Data":"e5955212621efe2d7142968f1fa4faaef440f7683d8c440b15995e7f90877b34"} Jan 31 07:56:28 crc kubenswrapper[4826]: I0131 07:56:28.037657 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aebb981f-3b13-4115-a5b0-1d4942789f7e","Type":"ContainerStarted","Data":"430b059239648ba4a4aaeb3a5b0de31d764f9dbbd9646d49ba61024c0fc004c1"} Jan 31 07:56:28 crc kubenswrapper[4826]: I0131 07:56:28.037670 4826 util.go:48] "No ready sandbox for pod can be found. 
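
The machine-config-daemon liveness probe above fails with "connection refused" against http://127.0.0.1:8798/health, and the nova startup probes later in this log fail with Client.Timeout errors. Both failure strings are what Go's HTTP client produces when the target refuses the connection or does not answer headers within the probe timeout. The sketch below is a small, self-contained HTTP check in that style; it is an illustration only, not kubelet's prober, and the 1-second timeout and 2xx/3xx success range are assumptions.

```go
// probe_sketch.go - a standalone sketch of an HTTP liveness check, loosely
// modeled on the probe output in this log; it is NOT kubelet's prober.
// Assumptions: endpoint, 1s timeout, and success range are hardcoded here.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func probe(url string) (ok bool, output string) {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		// A refused connection or a client timeout ends up here, yielding
		// messages like "connect: connection refused" and
		// "Client.Timeout exceeded while awaiting headers" seen above.
		return false, fmt.Sprintf("Get %q: %v", url, err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(io.LimitReader(resp.Body, 1024))
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		return true, string(body)
	}
	return false, fmt.Sprintf("HTTP probe failed with statuscode: %d", resp.StatusCode)
}

func main() {
	ok, out := probe("http://127.0.0.1:8798/health")
	fmt.Printf("healthy=%v output=%q\n", ok, out)
}
```
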
Need to start a new one" pod="openstack/nova-api-0" Jan 31 07:56:28 crc kubenswrapper[4826]: I0131 07:56:28.063688 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.063666466 podStartE2EDuration="2.063666466s" podCreationTimestamp="2026-01-31 07:56:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:56:28.057079909 +0000 UTC m=+1219.910966268" watchObservedRunningTime="2026-01-31 07:56:28.063666466 +0000 UTC m=+1219.917552825" Jan 31 07:56:28 crc kubenswrapper[4826]: I0131 07:56:28.092372 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 31 07:56:28 crc kubenswrapper[4826]: I0131 07:56:28.100281 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 31 07:56:28 crc kubenswrapper[4826]: I0131 07:56:28.108298 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 31 07:56:28 crc kubenswrapper[4826]: I0131 07:56:28.113729 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 31 07:56:28 crc kubenswrapper[4826]: I0131 07:56:28.122426 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 31 07:56:28 crc kubenswrapper[4826]: I0131 07:56:28.122688 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 31 07:56:28 crc kubenswrapper[4826]: I0131 07:56:28.123689 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 31 07:56:28 crc kubenswrapper[4826]: I0131 07:56:28.156835 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 07:56:28 crc kubenswrapper[4826]: I0131 07:56:28.201824 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c8b1d46-e795-45e4-a7cb-b09687e17027-config-data\") pod \"nova-api-0\" (UID: \"2c8b1d46-e795-45e4-a7cb-b09687e17027\") " pod="openstack/nova-api-0" Jan 31 07:56:28 crc kubenswrapper[4826]: I0131 07:56:28.201895 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c8b1d46-e795-45e4-a7cb-b09687e17027-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2c8b1d46-e795-45e4-a7cb-b09687e17027\") " pod="openstack/nova-api-0" Jan 31 07:56:28 crc kubenswrapper[4826]: I0131 07:56:28.201931 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c8b1d46-e795-45e4-a7cb-b09687e17027-logs\") pod \"nova-api-0\" (UID: \"2c8b1d46-e795-45e4-a7cb-b09687e17027\") " pod="openstack/nova-api-0" Jan 31 07:56:28 crc kubenswrapper[4826]: I0131 07:56:28.202268 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbp7z\" (UniqueName: \"kubernetes.io/projected/2c8b1d46-e795-45e4-a7cb-b09687e17027-kube-api-access-tbp7z\") pod \"nova-api-0\" (UID: \"2c8b1d46-e795-45e4-a7cb-b09687e17027\") " pod="openstack/nova-api-0" Jan 31 07:56:28 crc kubenswrapper[4826]: I0131 07:56:28.202342 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2c8b1d46-e795-45e4-a7cb-b09687e17027-public-tls-certs\") pod \"nova-api-0\" (UID: \"2c8b1d46-e795-45e4-a7cb-b09687e17027\") " pod="openstack/nova-api-0" Jan 31 07:56:28 crc kubenswrapper[4826]: I0131 07:56:28.202432 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c8b1d46-e795-45e4-a7cb-b09687e17027-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2c8b1d46-e795-45e4-a7cb-b09687e17027\") " pod="openstack/nova-api-0" Jan 31 07:56:28 crc kubenswrapper[4826]: I0131 07:56:28.305531 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbp7z\" (UniqueName: \"kubernetes.io/projected/2c8b1d46-e795-45e4-a7cb-b09687e17027-kube-api-access-tbp7z\") pod \"nova-api-0\" (UID: \"2c8b1d46-e795-45e4-a7cb-b09687e17027\") " pod="openstack/nova-api-0" Jan 31 07:56:28 crc kubenswrapper[4826]: I0131 07:56:28.305861 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c8b1d46-e795-45e4-a7cb-b09687e17027-public-tls-certs\") pod \"nova-api-0\" (UID: \"2c8b1d46-e795-45e4-a7cb-b09687e17027\") " pod="openstack/nova-api-0" Jan 31 07:56:28 crc kubenswrapper[4826]: I0131 07:56:28.305897 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c8b1d46-e795-45e4-a7cb-b09687e17027-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2c8b1d46-e795-45e4-a7cb-b09687e17027\") " pod="openstack/nova-api-0" Jan 31 07:56:28 crc kubenswrapper[4826]: I0131 07:56:28.305960 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c8b1d46-e795-45e4-a7cb-b09687e17027-config-data\") pod \"nova-api-0\" (UID: \"2c8b1d46-e795-45e4-a7cb-b09687e17027\") " pod="openstack/nova-api-0" Jan 31 07:56:28 crc kubenswrapper[4826]: I0131 07:56:28.306001 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c8b1d46-e795-45e4-a7cb-b09687e17027-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2c8b1d46-e795-45e4-a7cb-b09687e17027\") " pod="openstack/nova-api-0" Jan 31 07:56:28 crc kubenswrapper[4826]: I0131 07:56:28.306026 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c8b1d46-e795-45e4-a7cb-b09687e17027-logs\") pod \"nova-api-0\" (UID: \"2c8b1d46-e795-45e4-a7cb-b09687e17027\") " pod="openstack/nova-api-0" Jan 31 07:56:28 crc kubenswrapper[4826]: I0131 07:56:28.306462 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c8b1d46-e795-45e4-a7cb-b09687e17027-logs\") pod \"nova-api-0\" (UID: \"2c8b1d46-e795-45e4-a7cb-b09687e17027\") " pod="openstack/nova-api-0" Jan 31 07:56:28 crc kubenswrapper[4826]: I0131 07:56:28.310819 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c8b1d46-e795-45e4-a7cb-b09687e17027-public-tls-certs\") pod \"nova-api-0\" (UID: \"2c8b1d46-e795-45e4-a7cb-b09687e17027\") " pod="openstack/nova-api-0" Jan 31 07:56:28 crc kubenswrapper[4826]: I0131 07:56:28.311355 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2c8b1d46-e795-45e4-a7cb-b09687e17027-config-data\") pod \"nova-api-0\" (UID: \"2c8b1d46-e795-45e4-a7cb-b09687e17027\") " pod="openstack/nova-api-0" Jan 31 07:56:28 crc kubenswrapper[4826]: I0131 07:56:28.314399 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c8b1d46-e795-45e4-a7cb-b09687e17027-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2c8b1d46-e795-45e4-a7cb-b09687e17027\") " pod="openstack/nova-api-0" Jan 31 07:56:28 crc kubenswrapper[4826]: I0131 07:56:28.314455 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c8b1d46-e795-45e4-a7cb-b09687e17027-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2c8b1d46-e795-45e4-a7cb-b09687e17027\") " pod="openstack/nova-api-0" Jan 31 07:56:28 crc kubenswrapper[4826]: I0131 07:56:28.322928 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbp7z\" (UniqueName: \"kubernetes.io/projected/2c8b1d46-e795-45e4-a7cb-b09687e17027-kube-api-access-tbp7z\") pod \"nova-api-0\" (UID: \"2c8b1d46-e795-45e4-a7cb-b09687e17027\") " pod="openstack/nova-api-0" Jan 31 07:56:28 crc kubenswrapper[4826]: I0131 07:56:28.436098 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 31 07:56:28 crc kubenswrapper[4826]: I0131 07:56:28.834040 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94b63664-e8d7-4f5d-aac0-f00bfadfbfe0" path="/var/lib/kubelet/pods/94b63664-e8d7-4f5d-aac0-f00bfadfbfe0/volumes" Jan 31 07:56:28 crc kubenswrapper[4826]: I0131 07:56:28.835502 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd43c512-3240-42fe-ab90-6ea4d8c8da60" path="/var/lib/kubelet/pods/bd43c512-3240-42fe-ab90-6ea4d8c8da60/volumes" Jan 31 07:56:28 crc kubenswrapper[4826]: I0131 07:56:28.926488 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 07:56:28 crc kubenswrapper[4826]: W0131 07:56:28.931193 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c8b1d46_e795_45e4_a7cb_b09687e17027.slice/crio-b53fc975ac6fd7475da2638b3299d87bc3e62aa8c50de28ebf2edc694eaf10de WatchSource:0}: Error finding container b53fc975ac6fd7475da2638b3299d87bc3e62aa8c50de28ebf2edc694eaf10de: Status 404 returned error can't find the container with id b53fc975ac6fd7475da2638b3299d87bc3e62aa8c50de28ebf2edc694eaf10de Jan 31 07:56:29 crc kubenswrapper[4826]: I0131 07:56:29.050280 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2c8b1d46-e795-45e4-a7cb-b09687e17027","Type":"ContainerStarted","Data":"b53fc975ac6fd7475da2638b3299d87bc3e62aa8c50de28ebf2edc694eaf10de"} Jan 31 07:56:29 crc kubenswrapper[4826]: I0131 07:56:29.053566 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"38bbbb8c-80f6-4950-acea-0d800baa1857","Type":"ContainerStarted","Data":"a4c8a766ddcf41f16397cdee020f7677a8c282cc58aab6fc46971c5576fbc114"} Jan 31 07:56:30 crc kubenswrapper[4826]: I0131 07:56:30.064324 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2c8b1d46-e795-45e4-a7cb-b09687e17027","Type":"ContainerStarted","Data":"e8a0ff1b28022f918c62ba5478bf9dd7e8f21586f79dc38fe54be881b763a749"} Jan 31 07:56:30 crc kubenswrapper[4826]: I0131 07:56:30.065019 4826 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2c8b1d46-e795-45e4-a7cb-b09687e17027","Type":"ContainerStarted","Data":"cfce5cc1e884401352fbf85e4a8ba66fb5b995f1de1fbb1e90d2d9084330a430"} Jan 31 07:56:30 crc kubenswrapper[4826]: I0131 07:56:30.089740 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.089714216 podStartE2EDuration="3.089714216s" podCreationTimestamp="2026-01-31 07:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:56:29.069103397 +0000 UTC m=+1220.922989756" watchObservedRunningTime="2026-01-31 07:56:30.089714216 +0000 UTC m=+1221.943600585" Jan 31 07:56:30 crc kubenswrapper[4826]: I0131 07:56:30.092961 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.0929345169999998 podStartE2EDuration="2.092934517s" podCreationTimestamp="2026-01-31 07:56:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:56:30.083050087 +0000 UTC m=+1221.936936446" watchObservedRunningTime="2026-01-31 07:56:30.092934517 +0000 UTC m=+1221.946820896" Jan 31 07:56:31 crc kubenswrapper[4826]: I0131 07:56:31.407719 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 31 07:56:31 crc kubenswrapper[4826]: I0131 07:56:31.408201 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 31 07:56:32 crc kubenswrapper[4826]: I0131 07:56:32.418840 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 31 07:56:36 crc kubenswrapper[4826]: I0131 07:56:36.405651 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 31 07:56:36 crc kubenswrapper[4826]: I0131 07:56:36.406347 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 31 07:56:37 crc kubenswrapper[4826]: I0131 07:56:37.418714 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 31 07:56:37 crc kubenswrapper[4826]: I0131 07:56:37.425174 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="aebb981f-3b13-4115-a5b0-1d4942789f7e" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 07:56:37 crc kubenswrapper[4826]: I0131 07:56:37.425255 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="aebb981f-3b13-4115-a5b0-1d4942789f7e" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 07:56:37 crc kubenswrapper[4826]: I0131 07:56:37.463243 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 31 07:56:38 crc kubenswrapper[4826]: I0131 07:56:38.181793 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 31 07:56:38 crc kubenswrapper[4826]: I0131 07:56:38.436599 4826 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 31 07:56:38 crc kubenswrapper[4826]: I0131 07:56:38.438014 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 31 07:56:39 crc kubenswrapper[4826]: I0131 07:56:39.450236 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2c8b1d46-e795-45e4-a7cb-b09687e17027" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.195:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 07:56:39 crc kubenswrapper[4826]: I0131 07:56:39.450346 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2c8b1d46-e795-45e4-a7cb-b09687e17027" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.195:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 07:56:40 crc kubenswrapper[4826]: I0131 07:56:40.520741 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 31 07:56:46 crc kubenswrapper[4826]: I0131 07:56:46.409892 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 31 07:56:46 crc kubenswrapper[4826]: I0131 07:56:46.412260 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 31 07:56:46 crc kubenswrapper[4826]: I0131 07:56:46.416936 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 31 07:56:47 crc kubenswrapper[4826]: I0131 07:56:47.244396 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 31 07:56:48 crc kubenswrapper[4826]: I0131 07:56:48.442607 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 31 07:56:48 crc kubenswrapper[4826]: I0131 07:56:48.443473 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 31 07:56:48 crc kubenswrapper[4826]: I0131 07:56:48.443900 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 31 07:56:48 crc kubenswrapper[4826]: I0131 07:56:48.444160 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 31 07:56:48 crc kubenswrapper[4826]: I0131 07:56:48.452272 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 31 07:56:48 crc kubenswrapper[4826]: I0131 07:56:48.452888 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 31 07:56:56 crc kubenswrapper[4826]: I0131 07:56:56.178897 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 07:56:57 crc kubenswrapper[4826]: I0131 07:56:57.110735 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 07:56:57 crc kubenswrapper[4826]: I0131 07:56:57.377423 4826 patch_prober.go:28] interesting pod/machine-config-daemon-8v6ng container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 07:56:57 crc kubenswrapper[4826]: I0131 07:56:57.377476 4826 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 07:56:57 crc kubenswrapper[4826]: I0131 07:56:57.377526 4826 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" Jan 31 07:56:57 crc kubenswrapper[4826]: I0131 07:56:57.378657 4826 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"316b3d553b5671d98b8431682183e3a4f3aaa9a7a42b0254a3052bbf98543c03"} pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 07:56:57 crc kubenswrapper[4826]: I0131 07:56:57.378735 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" containerID="cri-o://316b3d553b5671d98b8431682183e3a4f3aaa9a7a42b0254a3052bbf98543c03" gracePeriod=600 Jan 31 07:56:58 crc kubenswrapper[4826]: I0131 07:56:58.331798 4826 generic.go:334] "Generic (PLEG): container finished" podID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerID="316b3d553b5671d98b8431682183e3a4f3aaa9a7a42b0254a3052bbf98543c03" exitCode=0 Jan 31 07:56:58 crc kubenswrapper[4826]: I0131 07:56:58.331881 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" event={"ID":"ed10f53b-565a-4d14-a1d8-feabc15f08ea","Type":"ContainerDied","Data":"316b3d553b5671d98b8431682183e3a4f3aaa9a7a42b0254a3052bbf98543c03"} Jan 31 07:56:58 crc kubenswrapper[4826]: I0131 07:56:58.332342 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" event={"ID":"ed10f53b-565a-4d14-a1d8-feabc15f08ea","Type":"ContainerStarted","Data":"2996f1f03d04736e2fde3b29cb9be0c885dd95bf9e821597e845b3e75e0d6911"} Jan 31 07:56:58 crc kubenswrapper[4826]: I0131 07:56:58.332388 4826 scope.go:117] "RemoveContainer" containerID="c93a1dd8075ab41380245ff46508ce96eef07210ffe502888ff235cf0e5e7fc4" Jan 31 07:57:00 crc kubenswrapper[4826]: I0131 07:57:00.423919 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="548eef53-f0eb-46fd-a66d-12825c7c8f67" containerName="rabbitmq" containerID="cri-o://b685e612d5808cfe22f14c671ab7b2f5a916a0d11191bcc8b8bbd815fe4b5abb" gracePeriod=604796 Jan 31 07:57:00 crc kubenswrapper[4826]: I0131 07:57:00.904693 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="548eef53-f0eb-46fd-a66d-12825c7c8f67" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Jan 31 07:57:01 crc kubenswrapper[4826]: I0131 07:57:01.708924 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="bf30cab9-089e-40db-ab76-5416de684a26" containerName="rabbitmq" containerID="cri-o://37896a40d5c6fc30227a53b13920d80f84d208437446aa2396b2ca0879cf2c7c" gracePeriod=604796 Jan 31 07:57:06 crc kubenswrapper[4826]: I0131 07:57:06.992819 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.099149 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/548eef53-f0eb-46fd-a66d-12825c7c8f67-rabbitmq-confd\") pod \"548eef53-f0eb-46fd-a66d-12825c7c8f67\" (UID: \"548eef53-f0eb-46fd-a66d-12825c7c8f67\") " Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.099210 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgr64\" (UniqueName: \"kubernetes.io/projected/548eef53-f0eb-46fd-a66d-12825c7c8f67-kube-api-access-tgr64\") pod \"548eef53-f0eb-46fd-a66d-12825c7c8f67\" (UID: \"548eef53-f0eb-46fd-a66d-12825c7c8f67\") " Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.099315 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/548eef53-f0eb-46fd-a66d-12825c7c8f67-server-conf\") pod \"548eef53-f0eb-46fd-a66d-12825c7c8f67\" (UID: \"548eef53-f0eb-46fd-a66d-12825c7c8f67\") " Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.099358 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"548eef53-f0eb-46fd-a66d-12825c7c8f67\" (UID: \"548eef53-f0eb-46fd-a66d-12825c7c8f67\") " Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.099376 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/548eef53-f0eb-46fd-a66d-12825c7c8f67-rabbitmq-tls\") pod \"548eef53-f0eb-46fd-a66d-12825c7c8f67\" (UID: \"548eef53-f0eb-46fd-a66d-12825c7c8f67\") " Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.099403 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/548eef53-f0eb-46fd-a66d-12825c7c8f67-plugins-conf\") pod \"548eef53-f0eb-46fd-a66d-12825c7c8f67\" (UID: \"548eef53-f0eb-46fd-a66d-12825c7c8f67\") " Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.099473 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/548eef53-f0eb-46fd-a66d-12825c7c8f67-pod-info\") pod \"548eef53-f0eb-46fd-a66d-12825c7c8f67\" (UID: \"548eef53-f0eb-46fd-a66d-12825c7c8f67\") " Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.099508 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/548eef53-f0eb-46fd-a66d-12825c7c8f67-rabbitmq-plugins\") pod \"548eef53-f0eb-46fd-a66d-12825c7c8f67\" (UID: \"548eef53-f0eb-46fd-a66d-12825c7c8f67\") " Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.099540 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/548eef53-f0eb-46fd-a66d-12825c7c8f67-rabbitmq-erlang-cookie\") pod \"548eef53-f0eb-46fd-a66d-12825c7c8f67\" (UID: \"548eef53-f0eb-46fd-a66d-12825c7c8f67\") " Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.099565 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/548eef53-f0eb-46fd-a66d-12825c7c8f67-erlang-cookie-secret\") pod \"548eef53-f0eb-46fd-a66d-12825c7c8f67\" (UID: 
\"548eef53-f0eb-46fd-a66d-12825c7c8f67\") " Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.099595 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/548eef53-f0eb-46fd-a66d-12825c7c8f67-config-data\") pod \"548eef53-f0eb-46fd-a66d-12825c7c8f67\" (UID: \"548eef53-f0eb-46fd-a66d-12825c7c8f67\") " Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.100487 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/548eef53-f0eb-46fd-a66d-12825c7c8f67-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "548eef53-f0eb-46fd-a66d-12825c7c8f67" (UID: "548eef53-f0eb-46fd-a66d-12825c7c8f67"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.100683 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/548eef53-f0eb-46fd-a66d-12825c7c8f67-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "548eef53-f0eb-46fd-a66d-12825c7c8f67" (UID: "548eef53-f0eb-46fd-a66d-12825c7c8f67"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.101060 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/548eef53-f0eb-46fd-a66d-12825c7c8f67-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "548eef53-f0eb-46fd-a66d-12825c7c8f67" (UID: "548eef53-f0eb-46fd-a66d-12825c7c8f67"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.109394 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "548eef53-f0eb-46fd-a66d-12825c7c8f67" (UID: "548eef53-f0eb-46fd-a66d-12825c7c8f67"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.109388 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/548eef53-f0eb-46fd-a66d-12825c7c8f67-pod-info" (OuterVolumeSpecName: "pod-info") pod "548eef53-f0eb-46fd-a66d-12825c7c8f67" (UID: "548eef53-f0eb-46fd-a66d-12825c7c8f67"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.109444 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/548eef53-f0eb-46fd-a66d-12825c7c8f67-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "548eef53-f0eb-46fd-a66d-12825c7c8f67" (UID: "548eef53-f0eb-46fd-a66d-12825c7c8f67"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.123220 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/548eef53-f0eb-46fd-a66d-12825c7c8f67-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "548eef53-f0eb-46fd-a66d-12825c7c8f67" (UID: "548eef53-f0eb-46fd-a66d-12825c7c8f67"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.123236 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/548eef53-f0eb-46fd-a66d-12825c7c8f67-kube-api-access-tgr64" (OuterVolumeSpecName: "kube-api-access-tgr64") pod "548eef53-f0eb-46fd-a66d-12825c7c8f67" (UID: "548eef53-f0eb-46fd-a66d-12825c7c8f67"). InnerVolumeSpecName "kube-api-access-tgr64". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.144852 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/548eef53-f0eb-46fd-a66d-12825c7c8f67-config-data" (OuterVolumeSpecName: "config-data") pod "548eef53-f0eb-46fd-a66d-12825c7c8f67" (UID: "548eef53-f0eb-46fd-a66d-12825c7c8f67"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.177133 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/548eef53-f0eb-46fd-a66d-12825c7c8f67-server-conf" (OuterVolumeSpecName: "server-conf") pod "548eef53-f0eb-46fd-a66d-12825c7c8f67" (UID: "548eef53-f0eb-46fd-a66d-12825c7c8f67"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.205043 4826 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/548eef53-f0eb-46fd-a66d-12825c7c8f67-server-conf\") on node \"crc\" DevicePath \"\"" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.208163 4826 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.208195 4826 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/548eef53-f0eb-46fd-a66d-12825c7c8f67-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.208210 4826 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/548eef53-f0eb-46fd-a66d-12825c7c8f67-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.208220 4826 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/548eef53-f0eb-46fd-a66d-12825c7c8f67-pod-info\") on node \"crc\" DevicePath \"\"" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.208232 4826 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/548eef53-f0eb-46fd-a66d-12825c7c8f67-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.208243 4826 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/548eef53-f0eb-46fd-a66d-12825c7c8f67-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.208252 4826 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/548eef53-f0eb-46fd-a66d-12825c7c8f67-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 31 07:57:07 crc 
kubenswrapper[4826]: I0131 07:57:07.208261 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/548eef53-f0eb-46fd-a66d-12825c7c8f67-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.208270 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgr64\" (UniqueName: \"kubernetes.io/projected/548eef53-f0eb-46fd-a66d-12825c7c8f67-kube-api-access-tgr64\") on node \"crc\" DevicePath \"\"" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.225810 4826 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.263345 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/548eef53-f0eb-46fd-a66d-12825c7c8f67-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "548eef53-f0eb-46fd-a66d-12825c7c8f67" (UID: "548eef53-f0eb-46fd-a66d-12825c7c8f67"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.309877 4826 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.309922 4826 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/548eef53-f0eb-46fd-a66d-12825c7c8f67-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.429800 4826 generic.go:334] "Generic (PLEG): container finished" podID="548eef53-f0eb-46fd-a66d-12825c7c8f67" containerID="b685e612d5808cfe22f14c671ab7b2f5a916a0d11191bcc8b8bbd815fe4b5abb" exitCode=0 Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.429851 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"548eef53-f0eb-46fd-a66d-12825c7c8f67","Type":"ContainerDied","Data":"b685e612d5808cfe22f14c671ab7b2f5a916a0d11191bcc8b8bbd815fe4b5abb"} Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.429881 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"548eef53-f0eb-46fd-a66d-12825c7c8f67","Type":"ContainerDied","Data":"0df0bda87563d49d99eb4a2ffc3d81007a6134833d30307c57f65c64357f1dbc"} Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.429901 4826 scope.go:117] "RemoveContainer" containerID="b685e612d5808cfe22f14c671ab7b2f5a916a0d11191bcc8b8bbd815fe4b5abb" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.430067 4826 util.go:48] "No ready sandbox for pod can be found. 
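
The rabbitmq container killed at 07:57:00 exits with code 0 here at 07:57:07 and kubelet issues RemoveContainer for it; in the entries that follow, the ContainerStatus lookups for the already-removed container IDs come back as "rpc error: code = NotFound", which kubelet only logs, since a container that is already gone is the desired end state of deletion. The sketch below is a minimal illustration of treating gRPC NotFound as "already removed"; it is not kubelet code, and the removeContainer helper is a hypothetical stand-in that fails the way the runtime did above.

```go
// notfound_sketch.go - an illustration (not kubelet code) of treating a gRPC
// NotFound from a CRI-style RemoveContainer call as "already gone".
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer is a hypothetical stand-in that returns the NotFound error
// seen in the log for a container that was already deleted.
func removeContainer(id string) error {
	return status.Errorf(codes.NotFound, "could not find container %q", id)
}

func main() {
	id := "b685e612d5808cfe22f14c671ab7b2f5a916a0d11191bcc8b8bbd815fe4b5abb"
	if err := removeContainer(id); err != nil {
		if status.Code(err) == codes.NotFound {
			// Deletion is idempotent: a missing container is the desired
			// state, so log and continue rather than retrying.
			fmt.Printf("container %s already gone: %v\n", id[:12], err)
			return
		}
		fmt.Println("remove failed:", err)
	}
}
```
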
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.465117 4826 scope.go:117] "RemoveContainer" containerID="1ca2f763127399ba4ebb27973c09d9963e06163396cf1fc9d28336443058642e" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.468348 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.483145 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.491878 4826 scope.go:117] "RemoveContainer" containerID="b685e612d5808cfe22f14c671ab7b2f5a916a0d11191bcc8b8bbd815fe4b5abb" Jan 31 07:57:07 crc kubenswrapper[4826]: E0131 07:57:07.492503 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b685e612d5808cfe22f14c671ab7b2f5a916a0d11191bcc8b8bbd815fe4b5abb\": container with ID starting with b685e612d5808cfe22f14c671ab7b2f5a916a0d11191bcc8b8bbd815fe4b5abb not found: ID does not exist" containerID="b685e612d5808cfe22f14c671ab7b2f5a916a0d11191bcc8b8bbd815fe4b5abb" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.492578 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b685e612d5808cfe22f14c671ab7b2f5a916a0d11191bcc8b8bbd815fe4b5abb"} err="failed to get container status \"b685e612d5808cfe22f14c671ab7b2f5a916a0d11191bcc8b8bbd815fe4b5abb\": rpc error: code = NotFound desc = could not find container \"b685e612d5808cfe22f14c671ab7b2f5a916a0d11191bcc8b8bbd815fe4b5abb\": container with ID starting with b685e612d5808cfe22f14c671ab7b2f5a916a0d11191bcc8b8bbd815fe4b5abb not found: ID does not exist" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.492643 4826 scope.go:117] "RemoveContainer" containerID="1ca2f763127399ba4ebb27973c09d9963e06163396cf1fc9d28336443058642e" Jan 31 07:57:07 crc kubenswrapper[4826]: E0131 07:57:07.493051 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ca2f763127399ba4ebb27973c09d9963e06163396cf1fc9d28336443058642e\": container with ID starting with 1ca2f763127399ba4ebb27973c09d9963e06163396cf1fc9d28336443058642e not found: ID does not exist" containerID="1ca2f763127399ba4ebb27973c09d9963e06163396cf1fc9d28336443058642e" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.493091 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ca2f763127399ba4ebb27973c09d9963e06163396cf1fc9d28336443058642e"} err="failed to get container status \"1ca2f763127399ba4ebb27973c09d9963e06163396cf1fc9d28336443058642e\": rpc error: code = NotFound desc = could not find container \"1ca2f763127399ba4ebb27973c09d9963e06163396cf1fc9d28336443058642e\": container with ID starting with 1ca2f763127399ba4ebb27973c09d9963e06163396cf1fc9d28336443058642e not found: ID does not exist" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.505955 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 07:57:07 crc kubenswrapper[4826]: E0131 07:57:07.506466 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="548eef53-f0eb-46fd-a66d-12825c7c8f67" containerName="setup-container" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.506492 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="548eef53-f0eb-46fd-a66d-12825c7c8f67" 
containerName="setup-container" Jan 31 07:57:07 crc kubenswrapper[4826]: E0131 07:57:07.506568 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="548eef53-f0eb-46fd-a66d-12825c7c8f67" containerName="rabbitmq" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.506583 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="548eef53-f0eb-46fd-a66d-12825c7c8f67" containerName="rabbitmq" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.506954 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="548eef53-f0eb-46fd-a66d-12825c7c8f67" containerName="rabbitmq" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.514902 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.517537 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.517880 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.517894 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.518127 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-x5w4g" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.518174 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.518131 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.518401 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.532906 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.614300 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"db4f48b4-02f0-4f23-a5f8-f024caabed8d\") " pod="openstack/rabbitmq-server-0" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.614653 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/db4f48b4-02f0-4f23-a5f8-f024caabed8d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"db4f48b4-02f0-4f23-a5f8-f024caabed8d\") " pod="openstack/rabbitmq-server-0" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.614689 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/db4f48b4-02f0-4f23-a5f8-f024caabed8d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"db4f48b4-02f0-4f23-a5f8-f024caabed8d\") " pod="openstack/rabbitmq-server-0" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.614733 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/db4f48b4-02f0-4f23-a5f8-f024caabed8d-pod-info\") pod 
\"rabbitmq-server-0\" (UID: \"db4f48b4-02f0-4f23-a5f8-f024caabed8d\") " pod="openstack/rabbitmq-server-0" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.614771 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/db4f48b4-02f0-4f23-a5f8-f024caabed8d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"db4f48b4-02f0-4f23-a5f8-f024caabed8d\") " pod="openstack/rabbitmq-server-0" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.614789 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/db4f48b4-02f0-4f23-a5f8-f024caabed8d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"db4f48b4-02f0-4f23-a5f8-f024caabed8d\") " pod="openstack/rabbitmq-server-0" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.614829 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/db4f48b4-02f0-4f23-a5f8-f024caabed8d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"db4f48b4-02f0-4f23-a5f8-f024caabed8d\") " pod="openstack/rabbitmq-server-0" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.614846 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s5pg\" (UniqueName: \"kubernetes.io/projected/db4f48b4-02f0-4f23-a5f8-f024caabed8d-kube-api-access-2s5pg\") pod \"rabbitmq-server-0\" (UID: \"db4f48b4-02f0-4f23-a5f8-f024caabed8d\") " pod="openstack/rabbitmq-server-0" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.614866 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/db4f48b4-02f0-4f23-a5f8-f024caabed8d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"db4f48b4-02f0-4f23-a5f8-f024caabed8d\") " pod="openstack/rabbitmq-server-0" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.614907 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/db4f48b4-02f0-4f23-a5f8-f024caabed8d-config-data\") pod \"rabbitmq-server-0\" (UID: \"db4f48b4-02f0-4f23-a5f8-f024caabed8d\") " pod="openstack/rabbitmq-server-0" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.614939 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/db4f48b4-02f0-4f23-a5f8-f024caabed8d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"db4f48b4-02f0-4f23-a5f8-f024caabed8d\") " pod="openstack/rabbitmq-server-0" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.716829 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"db4f48b4-02f0-4f23-a5f8-f024caabed8d\") " pod="openstack/rabbitmq-server-0" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.717160 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/db4f48b4-02f0-4f23-a5f8-f024caabed8d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"db4f48b4-02f0-4f23-a5f8-f024caabed8d\") " pod="openstack/rabbitmq-server-0" Jan 31 07:57:07 crc 
kubenswrapper[4826]: I0131 07:57:07.717296 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/db4f48b4-02f0-4f23-a5f8-f024caabed8d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"db4f48b4-02f0-4f23-a5f8-f024caabed8d\") " pod="openstack/rabbitmq-server-0" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.717442 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/db4f48b4-02f0-4f23-a5f8-f024caabed8d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"db4f48b4-02f0-4f23-a5f8-f024caabed8d\") " pod="openstack/rabbitmq-server-0" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.717601 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/db4f48b4-02f0-4f23-a5f8-f024caabed8d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"db4f48b4-02f0-4f23-a5f8-f024caabed8d\") " pod="openstack/rabbitmq-server-0" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.717709 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/db4f48b4-02f0-4f23-a5f8-f024caabed8d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"db4f48b4-02f0-4f23-a5f8-f024caabed8d\") " pod="openstack/rabbitmq-server-0" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.717851 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/db4f48b4-02f0-4f23-a5f8-f024caabed8d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"db4f48b4-02f0-4f23-a5f8-f024caabed8d\") " pod="openstack/rabbitmq-server-0" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.717960 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2s5pg\" (UniqueName: \"kubernetes.io/projected/db4f48b4-02f0-4f23-a5f8-f024caabed8d-kube-api-access-2s5pg\") pod \"rabbitmq-server-0\" (UID: \"db4f48b4-02f0-4f23-a5f8-f024caabed8d\") " pod="openstack/rabbitmq-server-0" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.718103 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/db4f48b4-02f0-4f23-a5f8-f024caabed8d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"db4f48b4-02f0-4f23-a5f8-f024caabed8d\") " pod="openstack/rabbitmq-server-0" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.718234 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/db4f48b4-02f0-4f23-a5f8-f024caabed8d-config-data\") pod \"rabbitmq-server-0\" (UID: \"db4f48b4-02f0-4f23-a5f8-f024caabed8d\") " pod="openstack/rabbitmq-server-0" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.718344 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/db4f48b4-02f0-4f23-a5f8-f024caabed8d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"db4f48b4-02f0-4f23-a5f8-f024caabed8d\") " pod="openstack/rabbitmq-server-0" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.718534 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/db4f48b4-02f0-4f23-a5f8-f024caabed8d-rabbitmq-plugins\") pod 
\"rabbitmq-server-0\" (UID: \"db4f48b4-02f0-4f23-a5f8-f024caabed8d\") " pod="openstack/rabbitmq-server-0" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.717084 4826 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"db4f48b4-02f0-4f23-a5f8-f024caabed8d\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.718832 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/db4f48b4-02f0-4f23-a5f8-f024caabed8d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"db4f48b4-02f0-4f23-a5f8-f024caabed8d\") " pod="openstack/rabbitmq-server-0" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.718985 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/db4f48b4-02f0-4f23-a5f8-f024caabed8d-config-data\") pod \"rabbitmq-server-0\" (UID: \"db4f48b4-02f0-4f23-a5f8-f024caabed8d\") " pod="openstack/rabbitmq-server-0" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.719890 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/db4f48b4-02f0-4f23-a5f8-f024caabed8d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"db4f48b4-02f0-4f23-a5f8-f024caabed8d\") " pod="openstack/rabbitmq-server-0" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.722537 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/db4f48b4-02f0-4f23-a5f8-f024caabed8d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"db4f48b4-02f0-4f23-a5f8-f024caabed8d\") " pod="openstack/rabbitmq-server-0" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.723334 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/db4f48b4-02f0-4f23-a5f8-f024caabed8d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"db4f48b4-02f0-4f23-a5f8-f024caabed8d\") " pod="openstack/rabbitmq-server-0" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.726773 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/db4f48b4-02f0-4f23-a5f8-f024caabed8d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"db4f48b4-02f0-4f23-a5f8-f024caabed8d\") " pod="openstack/rabbitmq-server-0" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.738929 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/db4f48b4-02f0-4f23-a5f8-f024caabed8d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"db4f48b4-02f0-4f23-a5f8-f024caabed8d\") " pod="openstack/rabbitmq-server-0" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.740990 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/db4f48b4-02f0-4f23-a5f8-f024caabed8d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"db4f48b4-02f0-4f23-a5f8-f024caabed8d\") " pod="openstack/rabbitmq-server-0" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.741130 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2s5pg\" (UniqueName: 
\"kubernetes.io/projected/db4f48b4-02f0-4f23-a5f8-f024caabed8d-kube-api-access-2s5pg\") pod \"rabbitmq-server-0\" (UID: \"db4f48b4-02f0-4f23-a5f8-f024caabed8d\") " pod="openstack/rabbitmq-server-0" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.768457 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"db4f48b4-02f0-4f23-a5f8-f024caabed8d\") " pod="openstack/rabbitmq-server-0" Jan 31 07:57:07 crc kubenswrapper[4826]: I0131 07:57:07.835726 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 31 07:57:08 crc kubenswrapper[4826]: I0131 07:57:08.312930 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 07:57:08 crc kubenswrapper[4826]: I0131 07:57:08.440757 4826 generic.go:334] "Generic (PLEG): container finished" podID="bf30cab9-089e-40db-ab76-5416de684a26" containerID="37896a40d5c6fc30227a53b13920d80f84d208437446aa2396b2ca0879cf2c7c" exitCode=0 Jan 31 07:57:08 crc kubenswrapper[4826]: I0131 07:57:08.440836 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bf30cab9-089e-40db-ab76-5416de684a26","Type":"ContainerDied","Data":"37896a40d5c6fc30227a53b13920d80f84d208437446aa2396b2ca0879cf2c7c"} Jan 31 07:57:08 crc kubenswrapper[4826]: I0131 07:57:08.447754 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"db4f48b4-02f0-4f23-a5f8-f024caabed8d","Type":"ContainerStarted","Data":"0108df542682a97557fbfb730e3cd3418dc165553fae9fa5b99a6bd7f7e88604"} Jan 31 07:57:08 crc kubenswrapper[4826]: I0131 07:57:08.720259 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:57:08 crc kubenswrapper[4826]: I0131 07:57:08.842900 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prdqk\" (UniqueName: \"kubernetes.io/projected/bf30cab9-089e-40db-ab76-5416de684a26-kube-api-access-prdqk\") pod \"bf30cab9-089e-40db-ab76-5416de684a26\" (UID: \"bf30cab9-089e-40db-ab76-5416de684a26\") " Jan 31 07:57:08 crc kubenswrapper[4826]: I0131 07:57:08.843110 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bf30cab9-089e-40db-ab76-5416de684a26-server-conf\") pod \"bf30cab9-089e-40db-ab76-5416de684a26\" (UID: \"bf30cab9-089e-40db-ab76-5416de684a26\") " Jan 31 07:57:08 crc kubenswrapper[4826]: I0131 07:57:08.843153 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bf30cab9-089e-40db-ab76-5416de684a26-erlang-cookie-secret\") pod \"bf30cab9-089e-40db-ab76-5416de684a26\" (UID: \"bf30cab9-089e-40db-ab76-5416de684a26\") " Jan 31 07:57:08 crc kubenswrapper[4826]: I0131 07:57:08.843205 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bf30cab9-089e-40db-ab76-5416de684a26-pod-info\") pod \"bf30cab9-089e-40db-ab76-5416de684a26\" (UID: \"bf30cab9-089e-40db-ab76-5416de684a26\") " Jan 31 07:57:08 crc kubenswrapper[4826]: I0131 07:57:08.843284 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bf30cab9-089e-40db-ab76-5416de684a26-plugins-conf\") pod \"bf30cab9-089e-40db-ab76-5416de684a26\" (UID: \"bf30cab9-089e-40db-ab76-5416de684a26\") " Jan 31 07:57:08 crc kubenswrapper[4826]: I0131 07:57:08.843361 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"bf30cab9-089e-40db-ab76-5416de684a26\" (UID: \"bf30cab9-089e-40db-ab76-5416de684a26\") " Jan 31 07:57:08 crc kubenswrapper[4826]: I0131 07:57:08.843392 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bf30cab9-089e-40db-ab76-5416de684a26-rabbitmq-tls\") pod \"bf30cab9-089e-40db-ab76-5416de684a26\" (UID: \"bf30cab9-089e-40db-ab76-5416de684a26\") " Jan 31 07:57:08 crc kubenswrapper[4826]: I0131 07:57:08.843438 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bf30cab9-089e-40db-ab76-5416de684a26-rabbitmq-erlang-cookie\") pod \"bf30cab9-089e-40db-ab76-5416de684a26\" (UID: \"bf30cab9-089e-40db-ab76-5416de684a26\") " Jan 31 07:57:08 crc kubenswrapper[4826]: I0131 07:57:08.843496 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bf30cab9-089e-40db-ab76-5416de684a26-config-data\") pod \"bf30cab9-089e-40db-ab76-5416de684a26\" (UID: \"bf30cab9-089e-40db-ab76-5416de684a26\") " Jan 31 07:57:08 crc kubenswrapper[4826]: I0131 07:57:08.843537 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bf30cab9-089e-40db-ab76-5416de684a26-rabbitmq-plugins\") pod \"bf30cab9-089e-40db-ab76-5416de684a26\" (UID: 
\"bf30cab9-089e-40db-ab76-5416de684a26\") " Jan 31 07:57:08 crc kubenswrapper[4826]: I0131 07:57:08.843582 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bf30cab9-089e-40db-ab76-5416de684a26-rabbitmq-confd\") pod \"bf30cab9-089e-40db-ab76-5416de684a26\" (UID: \"bf30cab9-089e-40db-ab76-5416de684a26\") " Jan 31 07:57:08 crc kubenswrapper[4826]: I0131 07:57:08.882764 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf30cab9-089e-40db-ab76-5416de684a26-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "bf30cab9-089e-40db-ab76-5416de684a26" (UID: "bf30cab9-089e-40db-ab76-5416de684a26"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:57:08 crc kubenswrapper[4826]: I0131 07:57:08.883304 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf30cab9-089e-40db-ab76-5416de684a26-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "bf30cab9-089e-40db-ab76-5416de684a26" (UID: "bf30cab9-089e-40db-ab76-5416de684a26"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 07:57:08 crc kubenswrapper[4826]: I0131 07:57:08.885884 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf30cab9-089e-40db-ab76-5416de684a26-kube-api-access-prdqk" (OuterVolumeSpecName: "kube-api-access-prdqk") pod "bf30cab9-089e-40db-ab76-5416de684a26" (UID: "bf30cab9-089e-40db-ab76-5416de684a26"). InnerVolumeSpecName "kube-api-access-prdqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:57:08 crc kubenswrapper[4826]: I0131 07:57:08.887622 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf30cab9-089e-40db-ab76-5416de684a26-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "bf30cab9-089e-40db-ab76-5416de684a26" (UID: "bf30cab9-089e-40db-ab76-5416de684a26"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:57:08 crc kubenswrapper[4826]: I0131 07:57:08.888073 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "bf30cab9-089e-40db-ab76-5416de684a26" (UID: "bf30cab9-089e-40db-ab76-5416de684a26"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 07:57:08 crc kubenswrapper[4826]: I0131 07:57:08.898505 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf30cab9-089e-40db-ab76-5416de684a26-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "bf30cab9-089e-40db-ab76-5416de684a26" (UID: "bf30cab9-089e-40db-ab76-5416de684a26"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:57:08 crc kubenswrapper[4826]: I0131 07:57:08.899161 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="548eef53-f0eb-46fd-a66d-12825c7c8f67" path="/var/lib/kubelet/pods/548eef53-f0eb-46fd-a66d-12825c7c8f67/volumes" Jan 31 07:57:08 crc kubenswrapper[4826]: I0131 07:57:08.902941 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf30cab9-089e-40db-ab76-5416de684a26-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "bf30cab9-089e-40db-ab76-5416de684a26" (UID: "bf30cab9-089e-40db-ab76-5416de684a26"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:57:08 crc kubenswrapper[4826]: I0131 07:57:08.903790 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/bf30cab9-089e-40db-ab76-5416de684a26-pod-info" (OuterVolumeSpecName: "pod-info") pod "bf30cab9-089e-40db-ab76-5416de684a26" (UID: "bf30cab9-089e-40db-ab76-5416de684a26"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 31 07:57:08 crc kubenswrapper[4826]: I0131 07:57:08.923730 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf30cab9-089e-40db-ab76-5416de684a26-config-data" (OuterVolumeSpecName: "config-data") pod "bf30cab9-089e-40db-ab76-5416de684a26" (UID: "bf30cab9-089e-40db-ab76-5416de684a26"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:57:08 crc kubenswrapper[4826]: I0131 07:57:08.990152 4826 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bf30cab9-089e-40db-ab76-5416de684a26-pod-info\") on node \"crc\" DevicePath \"\"" Jan 31 07:57:08 crc kubenswrapper[4826]: I0131 07:57:08.990188 4826 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bf30cab9-089e-40db-ab76-5416de684a26-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 31 07:57:08 crc kubenswrapper[4826]: I0131 07:57:08.990208 4826 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 31 07:57:08 crc kubenswrapper[4826]: I0131 07:57:08.990219 4826 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bf30cab9-089e-40db-ab76-5416de684a26-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 31 07:57:08 crc kubenswrapper[4826]: I0131 07:57:08.990230 4826 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bf30cab9-089e-40db-ab76-5416de684a26-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 31 07:57:08 crc kubenswrapper[4826]: I0131 07:57:08.990241 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bf30cab9-089e-40db-ab76-5416de684a26-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 07:57:08 crc kubenswrapper[4826]: I0131 07:57:08.990250 4826 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bf30cab9-089e-40db-ab76-5416de684a26-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 31 07:57:08 crc kubenswrapper[4826]: I0131 07:57:08.990260 4826 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prdqk\" (UniqueName: \"kubernetes.io/projected/bf30cab9-089e-40db-ab76-5416de684a26-kube-api-access-prdqk\") on node \"crc\" DevicePath \"\"" Jan 31 07:57:08 crc kubenswrapper[4826]: I0131 07:57:08.990270 4826 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bf30cab9-089e-40db-ab76-5416de684a26-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.026419 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf30cab9-089e-40db-ab76-5416de684a26-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "bf30cab9-089e-40db-ab76-5416de684a26" (UID: "bf30cab9-089e-40db-ab76-5416de684a26"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.069834 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf30cab9-089e-40db-ab76-5416de684a26-server-conf" (OuterVolumeSpecName: "server-conf") pod "bf30cab9-089e-40db-ab76-5416de684a26" (UID: "bf30cab9-089e-40db-ab76-5416de684a26"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.091402 4826 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bf30cab9-089e-40db-ab76-5416de684a26-server-conf\") on node \"crc\" DevicePath \"\"" Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.091439 4826 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bf30cab9-089e-40db-ab76-5416de684a26-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.165434 4826 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.192796 4826 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.464659 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bf30cab9-089e-40db-ab76-5416de684a26","Type":"ContainerDied","Data":"98f6b371aa9e7501c91c7f0e6dba0609da167ea5e8b0a94a31a27a1393571482"} Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.464738 4826 scope.go:117] "RemoveContainer" containerID="37896a40d5c6fc30227a53b13920d80f84d208437446aa2396b2ca0879cf2c7c" Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.464751 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.494019 4826 scope.go:117] "RemoveContainer" containerID="dcfb3d648c3567e21a9f889b5b49f845d02222028e044f8df7cc9166df845e27" Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.508402 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.519221 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.527486 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 07:57:09 crc kubenswrapper[4826]: E0131 07:57:09.527832 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf30cab9-089e-40db-ab76-5416de684a26" containerName="setup-container" Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.527848 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf30cab9-089e-40db-ab76-5416de684a26" containerName="setup-container" Jan 31 07:57:09 crc kubenswrapper[4826]: E0131 07:57:09.527866 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf30cab9-089e-40db-ab76-5416de684a26" containerName="rabbitmq" Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.527872 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf30cab9-089e-40db-ab76-5416de684a26" containerName="rabbitmq" Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.528044 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf30cab9-089e-40db-ab76-5416de684a26" containerName="rabbitmq" Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.529082 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.537709 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.538022 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.538161 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.538318 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.539064 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-4xrbw" Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.539160 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.542319 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.546607 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.600887 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0ce80f95-b8c4-499e-84c4-aceea6e628fd-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ce80f95-b8c4-499e-84c4-aceea6e628fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.600929 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjbxr\" (UniqueName: \"kubernetes.io/projected/0ce80f95-b8c4-499e-84c4-aceea6e628fd-kube-api-access-rjbxr\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ce80f95-b8c4-499e-84c4-aceea6e628fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.601064 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0ce80f95-b8c4-499e-84c4-aceea6e628fd-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ce80f95-b8c4-499e-84c4-aceea6e628fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.601085 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0ce80f95-b8c4-499e-84c4-aceea6e628fd-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ce80f95-b8c4-499e-84c4-aceea6e628fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.601126 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0ce80f95-b8c4-499e-84c4-aceea6e628fd-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ce80f95-b8c4-499e-84c4-aceea6e628fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.601154 4826 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0ce80f95-b8c4-499e-84c4-aceea6e628fd-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ce80f95-b8c4-499e-84c4-aceea6e628fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.601334 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0ce80f95-b8c4-499e-84c4-aceea6e628fd-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ce80f95-b8c4-499e-84c4-aceea6e628fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.601384 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ce80f95-b8c4-499e-84c4-aceea6e628fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.601527 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0ce80f95-b8c4-499e-84c4-aceea6e628fd-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ce80f95-b8c4-499e-84c4-aceea6e628fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.601558 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0ce80f95-b8c4-499e-84c4-aceea6e628fd-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ce80f95-b8c4-499e-84c4-aceea6e628fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.601583 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0ce80f95-b8c4-499e-84c4-aceea6e628fd-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ce80f95-b8c4-499e-84c4-aceea6e628fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.703020 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0ce80f95-b8c4-499e-84c4-aceea6e628fd-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ce80f95-b8c4-499e-84c4-aceea6e628fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.703082 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0ce80f95-b8c4-499e-84c4-aceea6e628fd-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ce80f95-b8c4-499e-84c4-aceea6e628fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.703109 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0ce80f95-b8c4-499e-84c4-aceea6e628fd-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ce80f95-b8c4-499e-84c4-aceea6e628fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.703165 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" 
(UniqueName: \"kubernetes.io/empty-dir/0ce80f95-b8c4-499e-84c4-aceea6e628fd-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ce80f95-b8c4-499e-84c4-aceea6e628fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.703188 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjbxr\" (UniqueName: \"kubernetes.io/projected/0ce80f95-b8c4-499e-84c4-aceea6e628fd-kube-api-access-rjbxr\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ce80f95-b8c4-499e-84c4-aceea6e628fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.703243 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0ce80f95-b8c4-499e-84c4-aceea6e628fd-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ce80f95-b8c4-499e-84c4-aceea6e628fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.703260 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0ce80f95-b8c4-499e-84c4-aceea6e628fd-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ce80f95-b8c4-499e-84c4-aceea6e628fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.703286 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0ce80f95-b8c4-499e-84c4-aceea6e628fd-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ce80f95-b8c4-499e-84c4-aceea6e628fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.703318 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0ce80f95-b8c4-499e-84c4-aceea6e628fd-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ce80f95-b8c4-499e-84c4-aceea6e628fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.703371 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0ce80f95-b8c4-499e-84c4-aceea6e628fd-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ce80f95-b8c4-499e-84c4-aceea6e628fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.703397 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ce80f95-b8c4-499e-84c4-aceea6e628fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.704556 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0ce80f95-b8c4-499e-84c4-aceea6e628fd-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ce80f95-b8c4-499e-84c4-aceea6e628fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.705537 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0ce80f95-b8c4-499e-84c4-aceea6e628fd-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"0ce80f95-b8c4-499e-84c4-aceea6e628fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.706211 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0ce80f95-b8c4-499e-84c4-aceea6e628fd-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ce80f95-b8c4-499e-84c4-aceea6e628fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.706733 4826 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ce80f95-b8c4-499e-84c4-aceea6e628fd\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.706949 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0ce80f95-b8c4-499e-84c4-aceea6e628fd-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ce80f95-b8c4-499e-84c4-aceea6e628fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.707150 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0ce80f95-b8c4-499e-84c4-aceea6e628fd-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ce80f95-b8c4-499e-84c4-aceea6e628fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.712109 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0ce80f95-b8c4-499e-84c4-aceea6e628fd-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ce80f95-b8c4-499e-84c4-aceea6e628fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.712369 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0ce80f95-b8c4-499e-84c4-aceea6e628fd-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ce80f95-b8c4-499e-84c4-aceea6e628fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.714050 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0ce80f95-b8c4-499e-84c4-aceea6e628fd-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ce80f95-b8c4-499e-84c4-aceea6e628fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.714512 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0ce80f95-b8c4-499e-84c4-aceea6e628fd-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ce80f95-b8c4-499e-84c4-aceea6e628fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.725428 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjbxr\" (UniqueName: \"kubernetes.io/projected/0ce80f95-b8c4-499e-84c4-aceea6e628fd-kube-api-access-rjbxr\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ce80f95-b8c4-499e-84c4-aceea6e628fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.743177 4826 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ce80f95-b8c4-499e-84c4-aceea6e628fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:57:09 crc kubenswrapper[4826]: I0131 07:57:09.854469 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:57:10 crc kubenswrapper[4826]: I0131 07:57:10.334231 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 07:57:10 crc kubenswrapper[4826]: W0131 07:57:10.337860 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ce80f95_b8c4_499e_84c4_aceea6e628fd.slice/crio-060a8060b9230c39ae80c82730296e9ca46a0ff4b13964c9b2ce0fb20de3d5db WatchSource:0}: Error finding container 060a8060b9230c39ae80c82730296e9ca46a0ff4b13964c9b2ce0fb20de3d5db: Status 404 returned error can't find the container with id 060a8060b9230c39ae80c82730296e9ca46a0ff4b13964c9b2ce0fb20de3d5db Jan 31 07:57:10 crc kubenswrapper[4826]: I0131 07:57:10.475306 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"db4f48b4-02f0-4f23-a5f8-f024caabed8d","Type":"ContainerStarted","Data":"b07725a8c7289891fd6842b693a65771efef8148013bf5a59c185d1dc00d0194"} Jan 31 07:57:10 crc kubenswrapper[4826]: I0131 07:57:10.476640 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0ce80f95-b8c4-499e-84c4-aceea6e628fd","Type":"ContainerStarted","Data":"060a8060b9230c39ae80c82730296e9ca46a0ff4b13964c9b2ce0fb20de3d5db"} Jan 31 07:57:10 crc kubenswrapper[4826]: I0131 07:57:10.519576 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-f8dsx"] Jan 31 07:57:10 crc kubenswrapper[4826]: I0131 07:57:10.522316 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-f8dsx" Jan 31 07:57:10 crc kubenswrapper[4826]: I0131 07:57:10.531498 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 31 07:57:10 crc kubenswrapper[4826]: I0131 07:57:10.569246 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-f8dsx"] Jan 31 07:57:10 crc kubenswrapper[4826]: I0131 07:57:10.620469 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a265bb54-30b9-4be6-9d3e-f9d234de785e-dns-svc\") pod \"dnsmasq-dns-578b8d767c-f8dsx\" (UID: \"a265bb54-30b9-4be6-9d3e-f9d234de785e\") " pod="openstack/dnsmasq-dns-578b8d767c-f8dsx" Jan 31 07:57:10 crc kubenswrapper[4826]: I0131 07:57:10.620597 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a265bb54-30b9-4be6-9d3e-f9d234de785e-ovsdbserver-sb\") pod \"dnsmasq-dns-578b8d767c-f8dsx\" (UID: \"a265bb54-30b9-4be6-9d3e-f9d234de785e\") " pod="openstack/dnsmasq-dns-578b8d767c-f8dsx" Jan 31 07:57:10 crc kubenswrapper[4826]: I0131 07:57:10.620688 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xq2w\" (UniqueName: \"kubernetes.io/projected/a265bb54-30b9-4be6-9d3e-f9d234de785e-kube-api-access-9xq2w\") pod \"dnsmasq-dns-578b8d767c-f8dsx\" (UID: \"a265bb54-30b9-4be6-9d3e-f9d234de785e\") " pod="openstack/dnsmasq-dns-578b8d767c-f8dsx" Jan 31 07:57:10 crc kubenswrapper[4826]: I0131 07:57:10.620755 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a265bb54-30b9-4be6-9d3e-f9d234de785e-ovsdbserver-nb\") pod \"dnsmasq-dns-578b8d767c-f8dsx\" (UID: \"a265bb54-30b9-4be6-9d3e-f9d234de785e\") " pod="openstack/dnsmasq-dns-578b8d767c-f8dsx" Jan 31 07:57:10 crc kubenswrapper[4826]: I0131 07:57:10.620793 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a265bb54-30b9-4be6-9d3e-f9d234de785e-config\") pod \"dnsmasq-dns-578b8d767c-f8dsx\" (UID: \"a265bb54-30b9-4be6-9d3e-f9d234de785e\") " pod="openstack/dnsmasq-dns-578b8d767c-f8dsx" Jan 31 07:57:10 crc kubenswrapper[4826]: I0131 07:57:10.620816 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a265bb54-30b9-4be6-9d3e-f9d234de785e-openstack-edpm-ipam\") pod \"dnsmasq-dns-578b8d767c-f8dsx\" (UID: \"a265bb54-30b9-4be6-9d3e-f9d234de785e\") " pod="openstack/dnsmasq-dns-578b8d767c-f8dsx" Jan 31 07:57:10 crc kubenswrapper[4826]: I0131 07:57:10.722201 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a265bb54-30b9-4be6-9d3e-f9d234de785e-ovsdbserver-sb\") pod \"dnsmasq-dns-578b8d767c-f8dsx\" (UID: \"a265bb54-30b9-4be6-9d3e-f9d234de785e\") " pod="openstack/dnsmasq-dns-578b8d767c-f8dsx" Jan 31 07:57:10 crc kubenswrapper[4826]: I0131 07:57:10.722288 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xq2w\" (UniqueName: \"kubernetes.io/projected/a265bb54-30b9-4be6-9d3e-f9d234de785e-kube-api-access-9xq2w\") pod \"dnsmasq-dns-578b8d767c-f8dsx\" (UID: 
\"a265bb54-30b9-4be6-9d3e-f9d234de785e\") " pod="openstack/dnsmasq-dns-578b8d767c-f8dsx" Jan 31 07:57:10 crc kubenswrapper[4826]: I0131 07:57:10.722335 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a265bb54-30b9-4be6-9d3e-f9d234de785e-ovsdbserver-nb\") pod \"dnsmasq-dns-578b8d767c-f8dsx\" (UID: \"a265bb54-30b9-4be6-9d3e-f9d234de785e\") " pod="openstack/dnsmasq-dns-578b8d767c-f8dsx" Jan 31 07:57:10 crc kubenswrapper[4826]: I0131 07:57:10.722362 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a265bb54-30b9-4be6-9d3e-f9d234de785e-config\") pod \"dnsmasq-dns-578b8d767c-f8dsx\" (UID: \"a265bb54-30b9-4be6-9d3e-f9d234de785e\") " pod="openstack/dnsmasq-dns-578b8d767c-f8dsx" Jan 31 07:57:10 crc kubenswrapper[4826]: I0131 07:57:10.722378 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a265bb54-30b9-4be6-9d3e-f9d234de785e-openstack-edpm-ipam\") pod \"dnsmasq-dns-578b8d767c-f8dsx\" (UID: \"a265bb54-30b9-4be6-9d3e-f9d234de785e\") " pod="openstack/dnsmasq-dns-578b8d767c-f8dsx" Jan 31 07:57:10 crc kubenswrapper[4826]: I0131 07:57:10.722425 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a265bb54-30b9-4be6-9d3e-f9d234de785e-dns-svc\") pod \"dnsmasq-dns-578b8d767c-f8dsx\" (UID: \"a265bb54-30b9-4be6-9d3e-f9d234de785e\") " pod="openstack/dnsmasq-dns-578b8d767c-f8dsx" Jan 31 07:57:10 crc kubenswrapper[4826]: I0131 07:57:10.723300 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a265bb54-30b9-4be6-9d3e-f9d234de785e-dns-svc\") pod \"dnsmasq-dns-578b8d767c-f8dsx\" (UID: \"a265bb54-30b9-4be6-9d3e-f9d234de785e\") " pod="openstack/dnsmasq-dns-578b8d767c-f8dsx" Jan 31 07:57:10 crc kubenswrapper[4826]: I0131 07:57:10.724042 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a265bb54-30b9-4be6-9d3e-f9d234de785e-ovsdbserver-nb\") pod \"dnsmasq-dns-578b8d767c-f8dsx\" (UID: \"a265bb54-30b9-4be6-9d3e-f9d234de785e\") " pod="openstack/dnsmasq-dns-578b8d767c-f8dsx" Jan 31 07:57:10 crc kubenswrapper[4826]: I0131 07:57:10.727616 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a265bb54-30b9-4be6-9d3e-f9d234de785e-config\") pod \"dnsmasq-dns-578b8d767c-f8dsx\" (UID: \"a265bb54-30b9-4be6-9d3e-f9d234de785e\") " pod="openstack/dnsmasq-dns-578b8d767c-f8dsx" Jan 31 07:57:10 crc kubenswrapper[4826]: I0131 07:57:10.734641 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a265bb54-30b9-4be6-9d3e-f9d234de785e-openstack-edpm-ipam\") pod \"dnsmasq-dns-578b8d767c-f8dsx\" (UID: \"a265bb54-30b9-4be6-9d3e-f9d234de785e\") " pod="openstack/dnsmasq-dns-578b8d767c-f8dsx" Jan 31 07:57:10 crc kubenswrapper[4826]: I0131 07:57:10.737635 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a265bb54-30b9-4be6-9d3e-f9d234de785e-ovsdbserver-sb\") pod \"dnsmasq-dns-578b8d767c-f8dsx\" (UID: \"a265bb54-30b9-4be6-9d3e-f9d234de785e\") " pod="openstack/dnsmasq-dns-578b8d767c-f8dsx" Jan 31 07:57:10 crc kubenswrapper[4826]: 
I0131 07:57:10.757171 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xq2w\" (UniqueName: \"kubernetes.io/projected/a265bb54-30b9-4be6-9d3e-f9d234de785e-kube-api-access-9xq2w\") pod \"dnsmasq-dns-578b8d767c-f8dsx\" (UID: \"a265bb54-30b9-4be6-9d3e-f9d234de785e\") " pod="openstack/dnsmasq-dns-578b8d767c-f8dsx" Jan 31 07:57:10 crc kubenswrapper[4826]: I0131 07:57:10.817916 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf30cab9-089e-40db-ab76-5416de684a26" path="/var/lib/kubelet/pods/bf30cab9-089e-40db-ab76-5416de684a26/volumes" Jan 31 07:57:10 crc kubenswrapper[4826]: I0131 07:57:10.885310 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-f8dsx" Jan 31 07:57:11 crc kubenswrapper[4826]: I0131 07:57:11.150037 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-f8dsx"] Jan 31 07:57:11 crc kubenswrapper[4826]: W0131 07:57:11.154654 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda265bb54_30b9_4be6_9d3e_f9d234de785e.slice/crio-7518693b9aa6337e12f840fe909ccbfed3e23930f67164a6075db9876c70f9c8 WatchSource:0}: Error finding container 7518693b9aa6337e12f840fe909ccbfed3e23930f67164a6075db9876c70f9c8: Status 404 returned error can't find the container with id 7518693b9aa6337e12f840fe909ccbfed3e23930f67164a6075db9876c70f9c8 Jan 31 07:57:11 crc kubenswrapper[4826]: I0131 07:57:11.485622 4826 generic.go:334] "Generic (PLEG): container finished" podID="a265bb54-30b9-4be6-9d3e-f9d234de785e" containerID="6d5a4789db4d1b6f55563facc1b0dd690366a8d2d541386b02912fdd32259dd6" exitCode=0 Jan 31 07:57:11 crc kubenswrapper[4826]: I0131 07:57:11.486616 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-f8dsx" event={"ID":"a265bb54-30b9-4be6-9d3e-f9d234de785e","Type":"ContainerDied","Data":"6d5a4789db4d1b6f55563facc1b0dd690366a8d2d541386b02912fdd32259dd6"} Jan 31 07:57:11 crc kubenswrapper[4826]: I0131 07:57:11.486664 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-f8dsx" event={"ID":"a265bb54-30b9-4be6-9d3e-f9d234de785e","Type":"ContainerStarted","Data":"7518693b9aa6337e12f840fe909ccbfed3e23930f67164a6075db9876c70f9c8"} Jan 31 07:57:12 crc kubenswrapper[4826]: I0131 07:57:12.496147 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-f8dsx" event={"ID":"a265bb54-30b9-4be6-9d3e-f9d234de785e","Type":"ContainerStarted","Data":"0049a27df50c3236ed2b13a7d291dc44a4a27ff05ea9a6a46dc9a80228b69700"} Jan 31 07:57:12 crc kubenswrapper[4826]: I0131 07:57:12.496452 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-578b8d767c-f8dsx" Jan 31 07:57:12 crc kubenswrapper[4826]: I0131 07:57:12.499463 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0ce80f95-b8c4-499e-84c4-aceea6e628fd","Type":"ContainerStarted","Data":"504c2a24dde5b046a1d8f972878c4a1d7da08c8b0573e9319b1e8284365d4b3c"} Jan 31 07:57:12 crc kubenswrapper[4826]: I0131 07:57:12.525839 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-578b8d767c-f8dsx" podStartSLOduration=2.525812548 podStartE2EDuration="2.525812548s" podCreationTimestamp="2026-01-31 07:57:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:57:12.518216323 +0000 UTC m=+1264.372102682" watchObservedRunningTime="2026-01-31 07:57:12.525812548 +0000 UTC m=+1264.379698957" Jan 31 07:57:20 crc kubenswrapper[4826]: I0131 07:57:20.887291 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-578b8d767c-f8dsx" Jan 31 07:57:20 crc kubenswrapper[4826]: I0131 07:57:20.953079 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-72zkq"] Jan 31 07:57:20 crc kubenswrapper[4826]: I0131 07:57:20.953324 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-68d4b6d797-72zkq" podUID="4b7f36bf-3ed8-4a40-9306-541c026c91dd" containerName="dnsmasq-dns" containerID="cri-o://17c9f00d765bc58135579b7237ca9a68b3d3939a1fb6b4399253f982dbb440f3" gracePeriod=10 Jan 31 07:57:21 crc kubenswrapper[4826]: I0131 07:57:21.154783 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-gx74t"] Jan 31 07:57:21 crc kubenswrapper[4826]: I0131 07:57:21.156575 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-gx74t" Jan 31 07:57:21 crc kubenswrapper[4826]: I0131 07:57:21.170375 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-gx74t"] Jan 31 07:57:21 crc kubenswrapper[4826]: I0131 07:57:21.347300 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5c6247d-41c2-41c0-9cf4-17098b60970a-config\") pod \"dnsmasq-dns-fbc59fbb7-gx74t\" (UID: \"a5c6247d-41c2-41c0-9cf4-17098b60970a\") " pod="openstack/dnsmasq-dns-fbc59fbb7-gx74t" Jan 31 07:57:21 crc kubenswrapper[4826]: I0131 07:57:21.347646 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a5c6247d-41c2-41c0-9cf4-17098b60970a-ovsdbserver-sb\") pod \"dnsmasq-dns-fbc59fbb7-gx74t\" (UID: \"a5c6247d-41c2-41c0-9cf4-17098b60970a\") " pod="openstack/dnsmasq-dns-fbc59fbb7-gx74t" Jan 31 07:57:21 crc kubenswrapper[4826]: I0131 07:57:21.347756 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a5c6247d-41c2-41c0-9cf4-17098b60970a-openstack-edpm-ipam\") pod \"dnsmasq-dns-fbc59fbb7-gx74t\" (UID: \"a5c6247d-41c2-41c0-9cf4-17098b60970a\") " pod="openstack/dnsmasq-dns-fbc59fbb7-gx74t" Jan 31 07:57:21 crc kubenswrapper[4826]: I0131 07:57:21.347788 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a5c6247d-41c2-41c0-9cf4-17098b60970a-ovsdbserver-nb\") pod \"dnsmasq-dns-fbc59fbb7-gx74t\" (UID: \"a5c6247d-41c2-41c0-9cf4-17098b60970a\") " pod="openstack/dnsmasq-dns-fbc59fbb7-gx74t" Jan 31 07:57:21 crc kubenswrapper[4826]: I0131 07:57:21.347809 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht7xc\" (UniqueName: \"kubernetes.io/projected/a5c6247d-41c2-41c0-9cf4-17098b60970a-kube-api-access-ht7xc\") pod \"dnsmasq-dns-fbc59fbb7-gx74t\" (UID: \"a5c6247d-41c2-41c0-9cf4-17098b60970a\") " pod="openstack/dnsmasq-dns-fbc59fbb7-gx74t" Jan 31 07:57:21 crc kubenswrapper[4826]: I0131 07:57:21.348041 4826 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5c6247d-41c2-41c0-9cf4-17098b60970a-dns-svc\") pod \"dnsmasq-dns-fbc59fbb7-gx74t\" (UID: \"a5c6247d-41c2-41c0-9cf4-17098b60970a\") " pod="openstack/dnsmasq-dns-fbc59fbb7-gx74t" Jan 31 07:57:21 crc kubenswrapper[4826]: I0131 07:57:21.450246 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a5c6247d-41c2-41c0-9cf4-17098b60970a-openstack-edpm-ipam\") pod \"dnsmasq-dns-fbc59fbb7-gx74t\" (UID: \"a5c6247d-41c2-41c0-9cf4-17098b60970a\") " pod="openstack/dnsmasq-dns-fbc59fbb7-gx74t" Jan 31 07:57:21 crc kubenswrapper[4826]: I0131 07:57:21.450317 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a5c6247d-41c2-41c0-9cf4-17098b60970a-ovsdbserver-nb\") pod \"dnsmasq-dns-fbc59fbb7-gx74t\" (UID: \"a5c6247d-41c2-41c0-9cf4-17098b60970a\") " pod="openstack/dnsmasq-dns-fbc59fbb7-gx74t" Jan 31 07:57:21 crc kubenswrapper[4826]: I0131 07:57:21.450347 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ht7xc\" (UniqueName: \"kubernetes.io/projected/a5c6247d-41c2-41c0-9cf4-17098b60970a-kube-api-access-ht7xc\") pod \"dnsmasq-dns-fbc59fbb7-gx74t\" (UID: \"a5c6247d-41c2-41c0-9cf4-17098b60970a\") " pod="openstack/dnsmasq-dns-fbc59fbb7-gx74t" Jan 31 07:57:21 crc kubenswrapper[4826]: I0131 07:57:21.450456 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5c6247d-41c2-41c0-9cf4-17098b60970a-dns-svc\") pod \"dnsmasq-dns-fbc59fbb7-gx74t\" (UID: \"a5c6247d-41c2-41c0-9cf4-17098b60970a\") " pod="openstack/dnsmasq-dns-fbc59fbb7-gx74t" Jan 31 07:57:21 crc kubenswrapper[4826]: I0131 07:57:21.450503 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5c6247d-41c2-41c0-9cf4-17098b60970a-config\") pod \"dnsmasq-dns-fbc59fbb7-gx74t\" (UID: \"a5c6247d-41c2-41c0-9cf4-17098b60970a\") " pod="openstack/dnsmasq-dns-fbc59fbb7-gx74t" Jan 31 07:57:21 crc kubenswrapper[4826]: I0131 07:57:21.450536 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a5c6247d-41c2-41c0-9cf4-17098b60970a-ovsdbserver-sb\") pod \"dnsmasq-dns-fbc59fbb7-gx74t\" (UID: \"a5c6247d-41c2-41c0-9cf4-17098b60970a\") " pod="openstack/dnsmasq-dns-fbc59fbb7-gx74t" Jan 31 07:57:21 crc kubenswrapper[4826]: I0131 07:57:21.451165 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a5c6247d-41c2-41c0-9cf4-17098b60970a-openstack-edpm-ipam\") pod \"dnsmasq-dns-fbc59fbb7-gx74t\" (UID: \"a5c6247d-41c2-41c0-9cf4-17098b60970a\") " pod="openstack/dnsmasq-dns-fbc59fbb7-gx74t" Jan 31 07:57:21 crc kubenswrapper[4826]: I0131 07:57:21.451398 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a5c6247d-41c2-41c0-9cf4-17098b60970a-ovsdbserver-sb\") pod \"dnsmasq-dns-fbc59fbb7-gx74t\" (UID: \"a5c6247d-41c2-41c0-9cf4-17098b60970a\") " pod="openstack/dnsmasq-dns-fbc59fbb7-gx74t" Jan 31 07:57:21 crc kubenswrapper[4826]: I0131 07:57:21.452051 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5c6247d-41c2-41c0-9cf4-17098b60970a-dns-svc\") pod \"dnsmasq-dns-fbc59fbb7-gx74t\" (UID: \"a5c6247d-41c2-41c0-9cf4-17098b60970a\") " pod="openstack/dnsmasq-dns-fbc59fbb7-gx74t" Jan 31 07:57:21 crc kubenswrapper[4826]: I0131 07:57:21.452276 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a5c6247d-41c2-41c0-9cf4-17098b60970a-ovsdbserver-nb\") pod \"dnsmasq-dns-fbc59fbb7-gx74t\" (UID: \"a5c6247d-41c2-41c0-9cf4-17098b60970a\") " pod="openstack/dnsmasq-dns-fbc59fbb7-gx74t" Jan 31 07:57:21 crc kubenswrapper[4826]: I0131 07:57:21.452695 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5c6247d-41c2-41c0-9cf4-17098b60970a-config\") pod \"dnsmasq-dns-fbc59fbb7-gx74t\" (UID: \"a5c6247d-41c2-41c0-9cf4-17098b60970a\") " pod="openstack/dnsmasq-dns-fbc59fbb7-gx74t" Jan 31 07:57:21 crc kubenswrapper[4826]: I0131 07:57:21.470162 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ht7xc\" (UniqueName: \"kubernetes.io/projected/a5c6247d-41c2-41c0-9cf4-17098b60970a-kube-api-access-ht7xc\") pod \"dnsmasq-dns-fbc59fbb7-gx74t\" (UID: \"a5c6247d-41c2-41c0-9cf4-17098b60970a\") " pod="openstack/dnsmasq-dns-fbc59fbb7-gx74t" Jan 31 07:57:21 crc kubenswrapper[4826]: I0131 07:57:21.486343 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-gx74t" Jan 31 07:57:21 crc kubenswrapper[4826]: I0131 07:57:21.596503 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-72zkq" Jan 31 07:57:21 crc kubenswrapper[4826]: I0131 07:57:21.611902 4826 generic.go:334] "Generic (PLEG): container finished" podID="4b7f36bf-3ed8-4a40-9306-541c026c91dd" containerID="17c9f00d765bc58135579b7237ca9a68b3d3939a1fb6b4399253f982dbb440f3" exitCode=0 Jan 31 07:57:21 crc kubenswrapper[4826]: I0131 07:57:21.611943 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-72zkq" event={"ID":"4b7f36bf-3ed8-4a40-9306-541c026c91dd","Type":"ContainerDied","Data":"17c9f00d765bc58135579b7237ca9a68b3d3939a1fb6b4399253f982dbb440f3"} Jan 31 07:57:21 crc kubenswrapper[4826]: I0131 07:57:21.611992 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-72zkq" event={"ID":"4b7f36bf-3ed8-4a40-9306-541c026c91dd","Type":"ContainerDied","Data":"391f98e9e22a56dd18681b126d7b2c12807d1857aa3b229688cbeaf0ccdfd8f7"} Jan 31 07:57:21 crc kubenswrapper[4826]: I0131 07:57:21.612012 4826 scope.go:117] "RemoveContainer" containerID="17c9f00d765bc58135579b7237ca9a68b3d3939a1fb6b4399253f982dbb440f3" Jan 31 07:57:21 crc kubenswrapper[4826]: I0131 07:57:21.612134 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-72zkq" Jan 31 07:57:21 crc kubenswrapper[4826]: I0131 07:57:21.646097 4826 scope.go:117] "RemoveContainer" containerID="25ab47c445e7ef0c5c3e43b754454dae5cbbca423f750be00801feae5dcc06c5" Jan 31 07:57:21 crc kubenswrapper[4826]: I0131 07:57:21.675547 4826 scope.go:117] "RemoveContainer" containerID="17c9f00d765bc58135579b7237ca9a68b3d3939a1fb6b4399253f982dbb440f3" Jan 31 07:57:21 crc kubenswrapper[4826]: E0131 07:57:21.676234 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17c9f00d765bc58135579b7237ca9a68b3d3939a1fb6b4399253f982dbb440f3\": container with ID starting with 17c9f00d765bc58135579b7237ca9a68b3d3939a1fb6b4399253f982dbb440f3 not found: ID does not exist" containerID="17c9f00d765bc58135579b7237ca9a68b3d3939a1fb6b4399253f982dbb440f3" Jan 31 07:57:21 crc kubenswrapper[4826]: I0131 07:57:21.676273 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17c9f00d765bc58135579b7237ca9a68b3d3939a1fb6b4399253f982dbb440f3"} err="failed to get container status \"17c9f00d765bc58135579b7237ca9a68b3d3939a1fb6b4399253f982dbb440f3\": rpc error: code = NotFound desc = could not find container \"17c9f00d765bc58135579b7237ca9a68b3d3939a1fb6b4399253f982dbb440f3\": container with ID starting with 17c9f00d765bc58135579b7237ca9a68b3d3939a1fb6b4399253f982dbb440f3 not found: ID does not exist" Jan 31 07:57:21 crc kubenswrapper[4826]: I0131 07:57:21.676300 4826 scope.go:117] "RemoveContainer" containerID="25ab47c445e7ef0c5c3e43b754454dae5cbbca423f750be00801feae5dcc06c5" Jan 31 07:57:21 crc kubenswrapper[4826]: E0131 07:57:21.676810 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25ab47c445e7ef0c5c3e43b754454dae5cbbca423f750be00801feae5dcc06c5\": container with ID starting with 25ab47c445e7ef0c5c3e43b754454dae5cbbca423f750be00801feae5dcc06c5 not found: ID does not exist" containerID="25ab47c445e7ef0c5c3e43b754454dae5cbbca423f750be00801feae5dcc06c5" Jan 31 07:57:21 crc kubenswrapper[4826]: I0131 07:57:21.676834 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25ab47c445e7ef0c5c3e43b754454dae5cbbca423f750be00801feae5dcc06c5"} err="failed to get container status \"25ab47c445e7ef0c5c3e43b754454dae5cbbca423f750be00801feae5dcc06c5\": rpc error: code = NotFound desc = could not find container \"25ab47c445e7ef0c5c3e43b754454dae5cbbca423f750be00801feae5dcc06c5\": container with ID starting with 25ab47c445e7ef0c5c3e43b754454dae5cbbca423f750be00801feae5dcc06c5 not found: ID does not exist" Jan 31 07:57:21 crc kubenswrapper[4826]: I0131 07:57:21.762760 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b7f36bf-3ed8-4a40-9306-541c026c91dd-dns-svc\") pod \"4b7f36bf-3ed8-4a40-9306-541c026c91dd\" (UID: \"4b7f36bf-3ed8-4a40-9306-541c026c91dd\") " Jan 31 07:57:21 crc kubenswrapper[4826]: I0131 07:57:21.762868 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b7f36bf-3ed8-4a40-9306-541c026c91dd-ovsdbserver-sb\") pod \"4b7f36bf-3ed8-4a40-9306-541c026c91dd\" (UID: \"4b7f36bf-3ed8-4a40-9306-541c026c91dd\") " Jan 31 07:57:21 crc kubenswrapper[4826]: I0131 07:57:21.762897 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b7f36bf-3ed8-4a40-9306-541c026c91dd-ovsdbserver-nb\") pod \"4b7f36bf-3ed8-4a40-9306-541c026c91dd\" (UID: \"4b7f36bf-3ed8-4a40-9306-541c026c91dd\") " Jan 31 07:57:21 crc kubenswrapper[4826]: I0131 07:57:21.763103 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j695h\" (UniqueName: \"kubernetes.io/projected/4b7f36bf-3ed8-4a40-9306-541c026c91dd-kube-api-access-j695h\") pod \"4b7f36bf-3ed8-4a40-9306-541c026c91dd\" (UID: \"4b7f36bf-3ed8-4a40-9306-541c026c91dd\") " Jan 31 07:57:21 crc kubenswrapper[4826]: I0131 07:57:21.763145 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b7f36bf-3ed8-4a40-9306-541c026c91dd-config\") pod \"4b7f36bf-3ed8-4a40-9306-541c026c91dd\" (UID: \"4b7f36bf-3ed8-4a40-9306-541c026c91dd\") " Jan 31 07:57:21 crc kubenswrapper[4826]: I0131 07:57:21.787736 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b7f36bf-3ed8-4a40-9306-541c026c91dd-kube-api-access-j695h" (OuterVolumeSpecName: "kube-api-access-j695h") pod "4b7f36bf-3ed8-4a40-9306-541c026c91dd" (UID: "4b7f36bf-3ed8-4a40-9306-541c026c91dd"). InnerVolumeSpecName "kube-api-access-j695h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:57:21 crc kubenswrapper[4826]: I0131 07:57:21.815662 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b7f36bf-3ed8-4a40-9306-541c026c91dd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4b7f36bf-3ed8-4a40-9306-541c026c91dd" (UID: "4b7f36bf-3ed8-4a40-9306-541c026c91dd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:57:21 crc kubenswrapper[4826]: I0131 07:57:21.828351 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b7f36bf-3ed8-4a40-9306-541c026c91dd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4b7f36bf-3ed8-4a40-9306-541c026c91dd" (UID: "4b7f36bf-3ed8-4a40-9306-541c026c91dd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:57:21 crc kubenswrapper[4826]: I0131 07:57:21.832373 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b7f36bf-3ed8-4a40-9306-541c026c91dd-config" (OuterVolumeSpecName: "config") pod "4b7f36bf-3ed8-4a40-9306-541c026c91dd" (UID: "4b7f36bf-3ed8-4a40-9306-541c026c91dd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:57:21 crc kubenswrapper[4826]: I0131 07:57:21.832724 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b7f36bf-3ed8-4a40-9306-541c026c91dd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4b7f36bf-3ed8-4a40-9306-541c026c91dd" (UID: "4b7f36bf-3ed8-4a40-9306-541c026c91dd"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:57:21 crc kubenswrapper[4826]: I0131 07:57:21.865701 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j695h\" (UniqueName: \"kubernetes.io/projected/4b7f36bf-3ed8-4a40-9306-541c026c91dd-kube-api-access-j695h\") on node \"crc\" DevicePath \"\"" Jan 31 07:57:21 crc kubenswrapper[4826]: I0131 07:57:21.865734 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b7f36bf-3ed8-4a40-9306-541c026c91dd-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:57:21 crc kubenswrapper[4826]: I0131 07:57:21.865747 4826 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b7f36bf-3ed8-4a40-9306-541c026c91dd-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 07:57:21 crc kubenswrapper[4826]: I0131 07:57:21.865758 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b7f36bf-3ed8-4a40-9306-541c026c91dd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 07:57:21 crc kubenswrapper[4826]: I0131 07:57:21.865767 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b7f36bf-3ed8-4a40-9306-541c026c91dd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 07:57:21 crc kubenswrapper[4826]: I0131 07:57:21.944978 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-72zkq"] Jan 31 07:57:21 crc kubenswrapper[4826]: I0131 07:57:21.954058 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-72zkq"] Jan 31 07:57:21 crc kubenswrapper[4826]: W0131 07:57:21.959643 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda5c6247d_41c2_41c0_9cf4_17098b60970a.slice/crio-11ff34db31af395cb78fc920aa9587f4376969fa2d87776cfa789e9908afc2eb WatchSource:0}: Error finding container 11ff34db31af395cb78fc920aa9587f4376969fa2d87776cfa789e9908afc2eb: Status 404 returned error can't find the container with id 11ff34db31af395cb78fc920aa9587f4376969fa2d87776cfa789e9908afc2eb Jan 31 07:57:21 crc kubenswrapper[4826]: I0131 07:57:21.963245 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-gx74t"] Jan 31 07:57:22 crc kubenswrapper[4826]: I0131 07:57:22.621612 4826 generic.go:334] "Generic (PLEG): container finished" podID="a5c6247d-41c2-41c0-9cf4-17098b60970a" containerID="f0420167db84f8f5dcd0d44e64e7bde51528e638bd33e15a0540f081c0e4d385" exitCode=0 Jan 31 07:57:22 crc kubenswrapper[4826]: I0131 07:57:22.621692 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-gx74t" event={"ID":"a5c6247d-41c2-41c0-9cf4-17098b60970a","Type":"ContainerDied","Data":"f0420167db84f8f5dcd0d44e64e7bde51528e638bd33e15a0540f081c0e4d385"} Jan 31 07:57:22 crc kubenswrapper[4826]: I0131 07:57:22.622018 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-gx74t" event={"ID":"a5c6247d-41c2-41c0-9cf4-17098b60970a","Type":"ContainerStarted","Data":"11ff34db31af395cb78fc920aa9587f4376969fa2d87776cfa789e9908afc2eb"} Jan 31 07:57:22 crc kubenswrapper[4826]: I0131 07:57:22.846531 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b7f36bf-3ed8-4a40-9306-541c026c91dd" path="/var/lib/kubelet/pods/4b7f36bf-3ed8-4a40-9306-541c026c91dd/volumes" Jan 31 
07:57:23 crc kubenswrapper[4826]: I0131 07:57:23.633565 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-gx74t" event={"ID":"a5c6247d-41c2-41c0-9cf4-17098b60970a","Type":"ContainerStarted","Data":"a6499043cdb97e37d2148548f8ff6b35d301c622a028496dbb35f78191041d8a"} Jan 31 07:57:23 crc kubenswrapper[4826]: I0131 07:57:23.633895 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-fbc59fbb7-gx74t" Jan 31 07:57:23 crc kubenswrapper[4826]: I0131 07:57:23.653603 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-fbc59fbb7-gx74t" podStartSLOduration=2.653584292 podStartE2EDuration="2.653584292s" podCreationTimestamp="2026-01-31 07:57:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:57:23.649585959 +0000 UTC m=+1275.503472328" watchObservedRunningTime="2026-01-31 07:57:23.653584292 +0000 UTC m=+1275.507470651" Jan 31 07:57:31 crc kubenswrapper[4826]: I0131 07:57:31.489150 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-fbc59fbb7-gx74t" Jan 31 07:57:31 crc kubenswrapper[4826]: I0131 07:57:31.547694 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-f8dsx"] Jan 31 07:57:31 crc kubenswrapper[4826]: I0131 07:57:31.547919 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-578b8d767c-f8dsx" podUID="a265bb54-30b9-4be6-9d3e-f9d234de785e" containerName="dnsmasq-dns" containerID="cri-o://0049a27df50c3236ed2b13a7d291dc44a4a27ff05ea9a6a46dc9a80228b69700" gracePeriod=10 Jan 31 07:57:31 crc kubenswrapper[4826]: I0131 07:57:31.731691 4826 generic.go:334] "Generic (PLEG): container finished" podID="a265bb54-30b9-4be6-9d3e-f9d234de785e" containerID="0049a27df50c3236ed2b13a7d291dc44a4a27ff05ea9a6a46dc9a80228b69700" exitCode=0 Jan 31 07:57:31 crc kubenswrapper[4826]: I0131 07:57:31.731802 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-f8dsx" event={"ID":"a265bb54-30b9-4be6-9d3e-f9d234de785e","Type":"ContainerDied","Data":"0049a27df50c3236ed2b13a7d291dc44a4a27ff05ea9a6a46dc9a80228b69700"} Jan 31 07:57:32 crc kubenswrapper[4826]: I0131 07:57:32.161750 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-f8dsx" Jan 31 07:57:32 crc kubenswrapper[4826]: I0131 07:57:32.219728 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a265bb54-30b9-4be6-9d3e-f9d234de785e-openstack-edpm-ipam\") pod \"a265bb54-30b9-4be6-9d3e-f9d234de785e\" (UID: \"a265bb54-30b9-4be6-9d3e-f9d234de785e\") " Jan 31 07:57:32 crc kubenswrapper[4826]: I0131 07:57:32.220169 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a265bb54-30b9-4be6-9d3e-f9d234de785e-config\") pod \"a265bb54-30b9-4be6-9d3e-f9d234de785e\" (UID: \"a265bb54-30b9-4be6-9d3e-f9d234de785e\") " Jan 31 07:57:32 crc kubenswrapper[4826]: I0131 07:57:32.220214 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a265bb54-30b9-4be6-9d3e-f9d234de785e-ovsdbserver-nb\") pod \"a265bb54-30b9-4be6-9d3e-f9d234de785e\" (UID: \"a265bb54-30b9-4be6-9d3e-f9d234de785e\") " Jan 31 07:57:32 crc kubenswrapper[4826]: I0131 07:57:32.265950 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a265bb54-30b9-4be6-9d3e-f9d234de785e-config" (OuterVolumeSpecName: "config") pod "a265bb54-30b9-4be6-9d3e-f9d234de785e" (UID: "a265bb54-30b9-4be6-9d3e-f9d234de785e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:57:32 crc kubenswrapper[4826]: I0131 07:57:32.285082 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a265bb54-30b9-4be6-9d3e-f9d234de785e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a265bb54-30b9-4be6-9d3e-f9d234de785e" (UID: "a265bb54-30b9-4be6-9d3e-f9d234de785e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:57:32 crc kubenswrapper[4826]: I0131 07:57:32.286336 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a265bb54-30b9-4be6-9d3e-f9d234de785e-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "a265bb54-30b9-4be6-9d3e-f9d234de785e" (UID: "a265bb54-30b9-4be6-9d3e-f9d234de785e"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:57:32 crc kubenswrapper[4826]: I0131 07:57:32.321279 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a265bb54-30b9-4be6-9d3e-f9d234de785e-ovsdbserver-sb\") pod \"a265bb54-30b9-4be6-9d3e-f9d234de785e\" (UID: \"a265bb54-30b9-4be6-9d3e-f9d234de785e\") " Jan 31 07:57:32 crc kubenswrapper[4826]: I0131 07:57:32.321335 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xq2w\" (UniqueName: \"kubernetes.io/projected/a265bb54-30b9-4be6-9d3e-f9d234de785e-kube-api-access-9xq2w\") pod \"a265bb54-30b9-4be6-9d3e-f9d234de785e\" (UID: \"a265bb54-30b9-4be6-9d3e-f9d234de785e\") " Jan 31 07:57:32 crc kubenswrapper[4826]: I0131 07:57:32.321366 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a265bb54-30b9-4be6-9d3e-f9d234de785e-dns-svc\") pod \"a265bb54-30b9-4be6-9d3e-f9d234de785e\" (UID: \"a265bb54-30b9-4be6-9d3e-f9d234de785e\") " Jan 31 07:57:32 crc kubenswrapper[4826]: I0131 07:57:32.321805 4826 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a265bb54-30b9-4be6-9d3e-f9d234de785e-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 07:57:32 crc kubenswrapper[4826]: I0131 07:57:32.321829 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a265bb54-30b9-4be6-9d3e-f9d234de785e-config\") on node \"crc\" DevicePath \"\"" Jan 31 07:57:32 crc kubenswrapper[4826]: I0131 07:57:32.321842 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a265bb54-30b9-4be6-9d3e-f9d234de785e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 07:57:32 crc kubenswrapper[4826]: I0131 07:57:32.325631 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a265bb54-30b9-4be6-9d3e-f9d234de785e-kube-api-access-9xq2w" (OuterVolumeSpecName: "kube-api-access-9xq2w") pod "a265bb54-30b9-4be6-9d3e-f9d234de785e" (UID: "a265bb54-30b9-4be6-9d3e-f9d234de785e"). InnerVolumeSpecName "kube-api-access-9xq2w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:57:32 crc kubenswrapper[4826]: I0131 07:57:32.364727 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a265bb54-30b9-4be6-9d3e-f9d234de785e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a265bb54-30b9-4be6-9d3e-f9d234de785e" (UID: "a265bb54-30b9-4be6-9d3e-f9d234de785e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:57:32 crc kubenswrapper[4826]: I0131 07:57:32.372122 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a265bb54-30b9-4be6-9d3e-f9d234de785e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a265bb54-30b9-4be6-9d3e-f9d234de785e" (UID: "a265bb54-30b9-4be6-9d3e-f9d234de785e"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 07:57:32 crc kubenswrapper[4826]: I0131 07:57:32.422580 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a265bb54-30b9-4be6-9d3e-f9d234de785e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 07:57:32 crc kubenswrapper[4826]: I0131 07:57:32.422618 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xq2w\" (UniqueName: \"kubernetes.io/projected/a265bb54-30b9-4be6-9d3e-f9d234de785e-kube-api-access-9xq2w\") on node \"crc\" DevicePath \"\"" Jan 31 07:57:32 crc kubenswrapper[4826]: I0131 07:57:32.422633 4826 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a265bb54-30b9-4be6-9d3e-f9d234de785e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 07:57:32 crc kubenswrapper[4826]: I0131 07:57:32.744824 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-f8dsx" event={"ID":"a265bb54-30b9-4be6-9d3e-f9d234de785e","Type":"ContainerDied","Data":"7518693b9aa6337e12f840fe909ccbfed3e23930f67164a6075db9876c70f9c8"} Jan 31 07:57:32 crc kubenswrapper[4826]: I0131 07:57:32.744886 4826 scope.go:117] "RemoveContainer" containerID="0049a27df50c3236ed2b13a7d291dc44a4a27ff05ea9a6a46dc9a80228b69700" Jan 31 07:57:32 crc kubenswrapper[4826]: I0131 07:57:32.745061 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-f8dsx" Jan 31 07:57:32 crc kubenswrapper[4826]: I0131 07:57:32.798844 4826 scope.go:117] "RemoveContainer" containerID="6d5a4789db4d1b6f55563facc1b0dd690366a8d2d541386b02912fdd32259dd6" Jan 31 07:57:32 crc kubenswrapper[4826]: I0131 07:57:32.798937 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-f8dsx"] Jan 31 07:57:32 crc kubenswrapper[4826]: I0131 07:57:32.806806 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-f8dsx"] Jan 31 07:57:32 crc kubenswrapper[4826]: I0131 07:57:32.821487 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a265bb54-30b9-4be6-9d3e-f9d234de785e" path="/var/lib/kubelet/pods/a265bb54-30b9-4be6-9d3e-f9d234de785e/volumes" Jan 31 07:57:37 crc kubenswrapper[4826]: I0131 07:57:37.432130 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clq8t"] Jan 31 07:57:37 crc kubenswrapper[4826]: E0131 07:57:37.434529 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a265bb54-30b9-4be6-9d3e-f9d234de785e" containerName="dnsmasq-dns" Jan 31 07:57:37 crc kubenswrapper[4826]: I0131 07:57:37.434754 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="a265bb54-30b9-4be6-9d3e-f9d234de785e" containerName="dnsmasq-dns" Jan 31 07:57:37 crc kubenswrapper[4826]: E0131 07:57:37.434785 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b7f36bf-3ed8-4a40-9306-541c026c91dd" containerName="dnsmasq-dns" Jan 31 07:57:37 crc kubenswrapper[4826]: I0131 07:57:37.434798 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b7f36bf-3ed8-4a40-9306-541c026c91dd" containerName="dnsmasq-dns" Jan 31 07:57:37 crc kubenswrapper[4826]: E0131 07:57:37.434851 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b7f36bf-3ed8-4a40-9306-541c026c91dd" containerName="init" Jan 31 07:57:37 crc kubenswrapper[4826]: I0131 07:57:37.434865 4826 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="4b7f36bf-3ed8-4a40-9306-541c026c91dd" containerName="init" Jan 31 07:57:37 crc kubenswrapper[4826]: E0131 07:57:37.434886 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a265bb54-30b9-4be6-9d3e-f9d234de785e" containerName="init" Jan 31 07:57:37 crc kubenswrapper[4826]: I0131 07:57:37.434900 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="a265bb54-30b9-4be6-9d3e-f9d234de785e" containerName="init" Jan 31 07:57:37 crc kubenswrapper[4826]: I0131 07:57:37.435250 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b7f36bf-3ed8-4a40-9306-541c026c91dd" containerName="dnsmasq-dns" Jan 31 07:57:37 crc kubenswrapper[4826]: I0131 07:57:37.435294 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="a265bb54-30b9-4be6-9d3e-f9d234de785e" containerName="dnsmasq-dns" Jan 31 07:57:37 crc kubenswrapper[4826]: I0131 07:57:37.436414 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clq8t" Jan 31 07:57:37 crc kubenswrapper[4826]: I0131 07:57:37.439950 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 07:57:37 crc kubenswrapper[4826]: I0131 07:57:37.439951 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 07:57:37 crc kubenswrapper[4826]: I0131 07:57:37.441555 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hbmks" Jan 31 07:57:37 crc kubenswrapper[4826]: I0131 07:57:37.441631 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 07:57:37 crc kubenswrapper[4826]: I0131 07:57:37.453371 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clq8t"] Jan 31 07:57:37 crc kubenswrapper[4826]: I0131 07:57:37.626727 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e23a5f6a-bae5-4579-a340-9a45b0706ba5-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-clq8t\" (UID: \"e23a5f6a-bae5-4579-a340-9a45b0706ba5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clq8t" Jan 31 07:57:37 crc kubenswrapper[4826]: I0131 07:57:37.626823 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e23a5f6a-bae5-4579-a340-9a45b0706ba5-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-clq8t\" (UID: \"e23a5f6a-bae5-4579-a340-9a45b0706ba5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clq8t" Jan 31 07:57:37 crc kubenswrapper[4826]: I0131 07:57:37.626866 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e23a5f6a-bae5-4579-a340-9a45b0706ba5-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-clq8t\" (UID: \"e23a5f6a-bae5-4579-a340-9a45b0706ba5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clq8t" Jan 31 07:57:37 crc kubenswrapper[4826]: I0131 07:57:37.626905 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-wt4b8\" (UniqueName: \"kubernetes.io/projected/e23a5f6a-bae5-4579-a340-9a45b0706ba5-kube-api-access-wt4b8\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-clq8t\" (UID: \"e23a5f6a-bae5-4579-a340-9a45b0706ba5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clq8t" Jan 31 07:57:37 crc kubenswrapper[4826]: I0131 07:57:37.728559 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e23a5f6a-bae5-4579-a340-9a45b0706ba5-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-clq8t\" (UID: \"e23a5f6a-bae5-4579-a340-9a45b0706ba5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clq8t" Jan 31 07:57:37 crc kubenswrapper[4826]: I0131 07:57:37.728623 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e23a5f6a-bae5-4579-a340-9a45b0706ba5-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-clq8t\" (UID: \"e23a5f6a-bae5-4579-a340-9a45b0706ba5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clq8t" Jan 31 07:57:37 crc kubenswrapper[4826]: I0131 07:57:37.728655 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e23a5f6a-bae5-4579-a340-9a45b0706ba5-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-clq8t\" (UID: \"e23a5f6a-bae5-4579-a340-9a45b0706ba5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clq8t" Jan 31 07:57:37 crc kubenswrapper[4826]: I0131 07:57:37.728688 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt4b8\" (UniqueName: \"kubernetes.io/projected/e23a5f6a-bae5-4579-a340-9a45b0706ba5-kube-api-access-wt4b8\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-clq8t\" (UID: \"e23a5f6a-bae5-4579-a340-9a45b0706ba5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clq8t" Jan 31 07:57:37 crc kubenswrapper[4826]: I0131 07:57:37.735032 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e23a5f6a-bae5-4579-a340-9a45b0706ba5-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-clq8t\" (UID: \"e23a5f6a-bae5-4579-a340-9a45b0706ba5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clq8t" Jan 31 07:57:37 crc kubenswrapper[4826]: I0131 07:57:37.735953 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e23a5f6a-bae5-4579-a340-9a45b0706ba5-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-clq8t\" (UID: \"e23a5f6a-bae5-4579-a340-9a45b0706ba5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clq8t" Jan 31 07:57:37 crc kubenswrapper[4826]: I0131 07:57:37.739072 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e23a5f6a-bae5-4579-a340-9a45b0706ba5-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-clq8t\" (UID: \"e23a5f6a-bae5-4579-a340-9a45b0706ba5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clq8t" Jan 31 07:57:37 crc kubenswrapper[4826]: I0131 07:57:37.757007 4826 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-wt4b8\" (UniqueName: \"kubernetes.io/projected/e23a5f6a-bae5-4579-a340-9a45b0706ba5-kube-api-access-wt4b8\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-clq8t\" (UID: \"e23a5f6a-bae5-4579-a340-9a45b0706ba5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clq8t" Jan 31 07:57:37 crc kubenswrapper[4826]: I0131 07:57:37.767951 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clq8t" Jan 31 07:57:38 crc kubenswrapper[4826]: I0131 07:57:38.308326 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clq8t"] Jan 31 07:57:38 crc kubenswrapper[4826]: I0131 07:57:38.827194 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clq8t" event={"ID":"e23a5f6a-bae5-4579-a340-9a45b0706ba5","Type":"ContainerStarted","Data":"278996383af2259fa587e5c9548af65e290390ab748ecaa8475b26cfc0151b7b"} Jan 31 07:57:41 crc kubenswrapper[4826]: I0131 07:57:41.848730 4826 generic.go:334] "Generic (PLEG): container finished" podID="db4f48b4-02f0-4f23-a5f8-f024caabed8d" containerID="b07725a8c7289891fd6842b693a65771efef8148013bf5a59c185d1dc00d0194" exitCode=0 Jan 31 07:57:41 crc kubenswrapper[4826]: I0131 07:57:41.848898 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"db4f48b4-02f0-4f23-a5f8-f024caabed8d","Type":"ContainerDied","Data":"b07725a8c7289891fd6842b693a65771efef8148013bf5a59c185d1dc00d0194"} Jan 31 07:57:42 crc kubenswrapper[4826]: I0131 07:57:42.864455 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"db4f48b4-02f0-4f23-a5f8-f024caabed8d","Type":"ContainerStarted","Data":"e8149affe010077424d17423735f7435f17989a9b6e85a8d84b92268a5e72a69"} Jan 31 07:57:42 crc kubenswrapper[4826]: I0131 07:57:42.865044 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 31 07:57:42 crc kubenswrapper[4826]: I0131 07:57:42.890701 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=35.890684417 podStartE2EDuration="35.890684417s" podCreationTimestamp="2026-01-31 07:57:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:57:42.89009395 +0000 UTC m=+1294.743980309" watchObservedRunningTime="2026-01-31 07:57:42.890684417 +0000 UTC m=+1294.744570776" Jan 31 07:57:44 crc kubenswrapper[4826]: I0131 07:57:44.884485 4826 generic.go:334] "Generic (PLEG): container finished" podID="0ce80f95-b8c4-499e-84c4-aceea6e628fd" containerID="504c2a24dde5b046a1d8f972878c4a1d7da08c8b0573e9319b1e8284365d4b3c" exitCode=0 Jan 31 07:57:44 crc kubenswrapper[4826]: I0131 07:57:44.884539 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0ce80f95-b8c4-499e-84c4-aceea6e628fd","Type":"ContainerDied","Data":"504c2a24dde5b046a1d8f972878c4a1d7da08c8b0573e9319b1e8284365d4b3c"} Jan 31 07:57:49 crc kubenswrapper[4826]: I0131 07:57:49.933671 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0ce80f95-b8c4-499e-84c4-aceea6e628fd","Type":"ContainerStarted","Data":"087de9727e63748210571971b93461faef9199c28dd3b88164350e8555abaad1"} Jan 31 07:57:49 crc kubenswrapper[4826]: 
I0131 07:57:49.934425 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:57:49 crc kubenswrapper[4826]: I0131 07:57:49.958856 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=40.958837952 podStartE2EDuration="40.958837952s" podCreationTimestamp="2026-01-31 07:57:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 07:57:49.956948438 +0000 UTC m=+1301.810834817" watchObservedRunningTime="2026-01-31 07:57:49.958837952 +0000 UTC m=+1301.812724321" Jan 31 07:57:50 crc kubenswrapper[4826]: I0131 07:57:50.942470 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clq8t" event={"ID":"e23a5f6a-bae5-4579-a340-9a45b0706ba5","Type":"ContainerStarted","Data":"8628e9b3110df6637b6985bd41ad3ce56819623b33b6b19ccb2e80a0c881a17d"} Jan 31 07:57:50 crc kubenswrapper[4826]: I0131 07:57:50.968257 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clq8t" podStartSLOduration=2.524494643 podStartE2EDuration="13.968236566s" podCreationTimestamp="2026-01-31 07:57:37 +0000 UTC" firstStartedPulling="2026-01-31 07:57:38.321097225 +0000 UTC m=+1290.174983584" lastFinishedPulling="2026-01-31 07:57:49.764839148 +0000 UTC m=+1301.618725507" observedRunningTime="2026-01-31 07:57:50.961816284 +0000 UTC m=+1302.815702643" watchObservedRunningTime="2026-01-31 07:57:50.968236566 +0000 UTC m=+1302.822122935" Jan 31 07:57:57 crc kubenswrapper[4826]: I0131 07:57:57.840544 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 31 07:57:59 crc kubenswrapper[4826]: I0131 07:57:59.860217 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 31 07:58:06 crc kubenswrapper[4826]: I0131 07:58:06.098553 4826 generic.go:334] "Generic (PLEG): container finished" podID="e23a5f6a-bae5-4579-a340-9a45b0706ba5" containerID="8628e9b3110df6637b6985bd41ad3ce56819623b33b6b19ccb2e80a0c881a17d" exitCode=0 Jan 31 07:58:06 crc kubenswrapper[4826]: I0131 07:58:06.098660 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clq8t" event={"ID":"e23a5f6a-bae5-4579-a340-9a45b0706ba5","Type":"ContainerDied","Data":"8628e9b3110df6637b6985bd41ad3ce56819623b33b6b19ccb2e80a0c881a17d"} Jan 31 07:58:07 crc kubenswrapper[4826]: I0131 07:58:07.543563 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clq8t" Jan 31 07:58:07 crc kubenswrapper[4826]: I0131 07:58:07.610570 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e23a5f6a-bae5-4579-a340-9a45b0706ba5-inventory\") pod \"e23a5f6a-bae5-4579-a340-9a45b0706ba5\" (UID: \"e23a5f6a-bae5-4579-a340-9a45b0706ba5\") " Jan 31 07:58:07 crc kubenswrapper[4826]: I0131 07:58:07.610693 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wt4b8\" (UniqueName: \"kubernetes.io/projected/e23a5f6a-bae5-4579-a340-9a45b0706ba5-kube-api-access-wt4b8\") pod \"e23a5f6a-bae5-4579-a340-9a45b0706ba5\" (UID: \"e23a5f6a-bae5-4579-a340-9a45b0706ba5\") " Jan 31 07:58:07 crc kubenswrapper[4826]: I0131 07:58:07.610842 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e23a5f6a-bae5-4579-a340-9a45b0706ba5-ssh-key-openstack-edpm-ipam\") pod \"e23a5f6a-bae5-4579-a340-9a45b0706ba5\" (UID: \"e23a5f6a-bae5-4579-a340-9a45b0706ba5\") " Jan 31 07:58:07 crc kubenswrapper[4826]: I0131 07:58:07.611103 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e23a5f6a-bae5-4579-a340-9a45b0706ba5-repo-setup-combined-ca-bundle\") pod \"e23a5f6a-bae5-4579-a340-9a45b0706ba5\" (UID: \"e23a5f6a-bae5-4579-a340-9a45b0706ba5\") " Jan 31 07:58:07 crc kubenswrapper[4826]: I0131 07:58:07.617028 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e23a5f6a-bae5-4579-a340-9a45b0706ba5-kube-api-access-wt4b8" (OuterVolumeSpecName: "kube-api-access-wt4b8") pod "e23a5f6a-bae5-4579-a340-9a45b0706ba5" (UID: "e23a5f6a-bae5-4579-a340-9a45b0706ba5"). InnerVolumeSpecName "kube-api-access-wt4b8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 07:58:07 crc kubenswrapper[4826]: I0131 07:58:07.617978 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e23a5f6a-bae5-4579-a340-9a45b0706ba5-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "e23a5f6a-bae5-4579-a340-9a45b0706ba5" (UID: "e23a5f6a-bae5-4579-a340-9a45b0706ba5"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:58:07 crc kubenswrapper[4826]: I0131 07:58:07.637023 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e23a5f6a-bae5-4579-a340-9a45b0706ba5-inventory" (OuterVolumeSpecName: "inventory") pod "e23a5f6a-bae5-4579-a340-9a45b0706ba5" (UID: "e23a5f6a-bae5-4579-a340-9a45b0706ba5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:58:07 crc kubenswrapper[4826]: I0131 07:58:07.645928 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e23a5f6a-bae5-4579-a340-9a45b0706ba5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e23a5f6a-bae5-4579-a340-9a45b0706ba5" (UID: "e23a5f6a-bae5-4579-a340-9a45b0706ba5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 07:58:07 crc kubenswrapper[4826]: I0131 07:58:07.713491 4826 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e23a5f6a-bae5-4579-a340-9a45b0706ba5-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 07:58:07 crc kubenswrapper[4826]: I0131 07:58:07.713530 4826 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e23a5f6a-bae5-4579-a340-9a45b0706ba5-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 07:58:07 crc kubenswrapper[4826]: I0131 07:58:07.713542 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wt4b8\" (UniqueName: \"kubernetes.io/projected/e23a5f6a-bae5-4579-a340-9a45b0706ba5-kube-api-access-wt4b8\") on node \"crc\" DevicePath \"\"" Jan 31 07:58:07 crc kubenswrapper[4826]: I0131 07:58:07.713551 4826 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e23a5f6a-bae5-4579-a340-9a45b0706ba5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 07:58:08 crc kubenswrapper[4826]: I0131 07:58:08.122653 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clq8t" event={"ID":"e23a5f6a-bae5-4579-a340-9a45b0706ba5","Type":"ContainerDied","Data":"278996383af2259fa587e5c9548af65e290390ab748ecaa8475b26cfc0151b7b"} Jan 31 07:58:08 crc kubenswrapper[4826]: I0131 07:58:08.122691 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="278996383af2259fa587e5c9548af65e290390ab748ecaa8475b26cfc0151b7b" Jan 31 07:58:08 crc kubenswrapper[4826]: I0131 07:58:08.122849 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clq8t" Jan 31 07:58:08 crc kubenswrapper[4826]: I0131 07:58:08.205384 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-77snk"] Jan 31 07:58:08 crc kubenswrapper[4826]: E0131 07:58:08.206084 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e23a5f6a-bae5-4579-a340-9a45b0706ba5" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 31 07:58:08 crc kubenswrapper[4826]: I0131 07:58:08.206103 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="e23a5f6a-bae5-4579-a340-9a45b0706ba5" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 31 07:58:08 crc kubenswrapper[4826]: I0131 07:58:08.206287 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="e23a5f6a-bae5-4579-a340-9a45b0706ba5" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 31 07:58:08 crc kubenswrapper[4826]: I0131 07:58:08.206881 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-77snk" Jan 31 07:58:08 crc kubenswrapper[4826]: I0131 07:58:08.208821 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 07:58:08 crc kubenswrapper[4826]: I0131 07:58:08.209139 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 07:58:08 crc kubenswrapper[4826]: I0131 07:58:08.209896 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hbmks" Jan 31 07:58:08 crc kubenswrapper[4826]: I0131 07:58:08.209912 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 07:58:08 crc kubenswrapper[4826]: I0131 07:58:08.221462 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a84a288f-097a-4f5b-acee-09d5c7d34abf-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-77snk\" (UID: \"a84a288f-097a-4f5b-acee-09d5c7d34abf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-77snk" Jan 31 07:58:08 crc kubenswrapper[4826]: I0131 07:58:08.221513 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j62q\" (UniqueName: \"kubernetes.io/projected/a84a288f-097a-4f5b-acee-09d5c7d34abf-kube-api-access-5j62q\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-77snk\" (UID: \"a84a288f-097a-4f5b-acee-09d5c7d34abf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-77snk" Jan 31 07:58:08 crc kubenswrapper[4826]: I0131 07:58:08.222122 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a84a288f-097a-4f5b-acee-09d5c7d34abf-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-77snk\" (UID: \"a84a288f-097a-4f5b-acee-09d5c7d34abf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-77snk" Jan 31 07:58:08 crc kubenswrapper[4826]: I0131 07:58:08.222566 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a84a288f-097a-4f5b-acee-09d5c7d34abf-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-77snk\" (UID: \"a84a288f-097a-4f5b-acee-09d5c7d34abf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-77snk" Jan 31 07:58:08 crc kubenswrapper[4826]: I0131 07:58:08.223086 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-77snk"] Jan 31 07:58:08 crc kubenswrapper[4826]: I0131 07:58:08.323621 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a84a288f-097a-4f5b-acee-09d5c7d34abf-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-77snk\" (UID: \"a84a288f-097a-4f5b-acee-09d5c7d34abf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-77snk" Jan 31 07:58:08 crc kubenswrapper[4826]: I0131 07:58:08.323708 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a84a288f-097a-4f5b-acee-09d5c7d34abf-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-77snk\" (UID: \"a84a288f-097a-4f5b-acee-09d5c7d34abf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-77snk" Jan 31 07:58:08 crc kubenswrapper[4826]: I0131 07:58:08.323763 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j62q\" (UniqueName: \"kubernetes.io/projected/a84a288f-097a-4f5b-acee-09d5c7d34abf-kube-api-access-5j62q\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-77snk\" (UID: \"a84a288f-097a-4f5b-acee-09d5c7d34abf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-77snk" Jan 31 07:58:08 crc kubenswrapper[4826]: I0131 07:58:08.324509 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a84a288f-097a-4f5b-acee-09d5c7d34abf-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-77snk\" (UID: \"a84a288f-097a-4f5b-acee-09d5c7d34abf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-77snk" Jan 31 07:58:08 crc kubenswrapper[4826]: I0131 07:58:08.327090 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a84a288f-097a-4f5b-acee-09d5c7d34abf-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-77snk\" (UID: \"a84a288f-097a-4f5b-acee-09d5c7d34abf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-77snk" Jan 31 07:58:08 crc kubenswrapper[4826]: I0131 07:58:08.329583 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a84a288f-097a-4f5b-acee-09d5c7d34abf-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-77snk\" (UID: \"a84a288f-097a-4f5b-acee-09d5c7d34abf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-77snk" Jan 31 07:58:08 crc kubenswrapper[4826]: I0131 07:58:08.336700 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a84a288f-097a-4f5b-acee-09d5c7d34abf-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-77snk\" (UID: \"a84a288f-097a-4f5b-acee-09d5c7d34abf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-77snk" Jan 31 07:58:08 crc kubenswrapper[4826]: I0131 07:58:08.464254 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j62q\" (UniqueName: \"kubernetes.io/projected/a84a288f-097a-4f5b-acee-09d5c7d34abf-kube-api-access-5j62q\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-77snk\" (UID: \"a84a288f-097a-4f5b-acee-09d5c7d34abf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-77snk" Jan 31 07:58:08 crc kubenswrapper[4826]: I0131 07:58:08.566617 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-77snk" Jan 31 07:58:09 crc kubenswrapper[4826]: I0131 07:58:09.097304 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-77snk"] Jan 31 07:58:09 crc kubenswrapper[4826]: W0131 07:58:09.101834 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda84a288f_097a_4f5b_acee_09d5c7d34abf.slice/crio-8f209908f4cc18dca8a0bb352408c3a5fae6b07c194fece59cd4a39fd29e0cde WatchSource:0}: Error finding container 8f209908f4cc18dca8a0bb352408c3a5fae6b07c194fece59cd4a39fd29e0cde: Status 404 returned error can't find the container with id 8f209908f4cc18dca8a0bb352408c3a5fae6b07c194fece59cd4a39fd29e0cde Jan 31 07:58:09 crc kubenswrapper[4826]: I0131 07:58:09.133119 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-77snk" event={"ID":"a84a288f-097a-4f5b-acee-09d5c7d34abf","Type":"ContainerStarted","Data":"8f209908f4cc18dca8a0bb352408c3a5fae6b07c194fece59cd4a39fd29e0cde"} Jan 31 07:58:09 crc kubenswrapper[4826]: I0131 07:58:09.570447 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 07:58:10 crc kubenswrapper[4826]: I0131 07:58:10.144994 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-77snk" event={"ID":"a84a288f-097a-4f5b-acee-09d5c7d34abf","Type":"ContainerStarted","Data":"a36090a5d834978450624d390a7ccf2a4cea44d603c5861188f9cd0542202b6f"} Jan 31 07:58:10 crc kubenswrapper[4826]: I0131 07:58:10.170274 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-77snk" podStartSLOduration=1.706449402 podStartE2EDuration="2.170255326s" podCreationTimestamp="2026-01-31 07:58:08 +0000 UTC" firstStartedPulling="2026-01-31 07:58:09.104651631 +0000 UTC m=+1320.958538000" lastFinishedPulling="2026-01-31 07:58:09.568457565 +0000 UTC m=+1321.422343924" observedRunningTime="2026-01-31 07:58:10.164680888 +0000 UTC m=+1322.018567247" watchObservedRunningTime="2026-01-31 07:58:10.170255326 +0000 UTC m=+1322.024141685" Jan 31 07:58:29 crc kubenswrapper[4826]: I0131 07:58:29.360483 4826 scope.go:117] "RemoveContainer" containerID="340b0443ff4b5d664881fb638782644327f93c51eb77874db76272f9e7886b9f" Jan 31 07:58:29 crc kubenswrapper[4826]: I0131 07:58:29.399414 4826 scope.go:117] "RemoveContainer" containerID="50b3a5f2a093b9eff97b8b9c850b0d539cc6e809d924ecc0d37767416e36d7fd" Jan 31 07:58:57 crc kubenswrapper[4826]: I0131 07:58:57.376731 4826 patch_prober.go:28] interesting pod/machine-config-daemon-8v6ng container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 07:58:57 crc kubenswrapper[4826]: I0131 07:58:57.377327 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 07:59:27 crc kubenswrapper[4826]: I0131 07:59:27.377521 4826 patch_prober.go:28] interesting 
pod/machine-config-daemon-8v6ng container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 07:59:27 crc kubenswrapper[4826]: I0131 07:59:27.378089 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 07:59:29 crc kubenswrapper[4826]: I0131 07:59:29.510244 4826 scope.go:117] "RemoveContainer" containerID="a802aac58031d43e1c9314c1a3f7d5d12fce71b3509027cef511a6a34e13c71b" Jan 31 07:59:29 crc kubenswrapper[4826]: I0131 07:59:29.556770 4826 scope.go:117] "RemoveContainer" containerID="8fd6cad89ca037250a0de698befb6ce82813ac56da3a240b5b01e27ef26f777e" Jan 31 07:59:57 crc kubenswrapper[4826]: I0131 07:59:57.377773 4826 patch_prober.go:28] interesting pod/machine-config-daemon-8v6ng container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 07:59:57 crc kubenswrapper[4826]: I0131 07:59:57.378476 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 07:59:57 crc kubenswrapper[4826]: I0131 07:59:57.378543 4826 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" Jan 31 07:59:57 crc kubenswrapper[4826]: I0131 07:59:57.379737 4826 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2996f1f03d04736e2fde3b29cb9be0c885dd95bf9e821597e845b3e75e0d6911"} pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 07:59:57 crc kubenswrapper[4826]: I0131 07:59:57.379824 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" containerID="cri-o://2996f1f03d04736e2fde3b29cb9be0c885dd95bf9e821597e845b3e75e0d6911" gracePeriod=600 Jan 31 07:59:58 crc kubenswrapper[4826]: I0131 07:59:58.266745 4826 generic.go:334] "Generic (PLEG): container finished" podID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerID="2996f1f03d04736e2fde3b29cb9be0c885dd95bf9e821597e845b3e75e0d6911" exitCode=0 Jan 31 07:59:58 crc kubenswrapper[4826]: I0131 07:59:58.266809 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" event={"ID":"ed10f53b-565a-4d14-a1d8-feabc15f08ea","Type":"ContainerDied","Data":"2996f1f03d04736e2fde3b29cb9be0c885dd95bf9e821597e845b3e75e0d6911"} Jan 31 07:59:58 crc kubenswrapper[4826]: I0131 07:59:58.267158 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" event={"ID":"ed10f53b-565a-4d14-a1d8-feabc15f08ea","Type":"ContainerStarted","Data":"bf69e7db4a04328e91a880deded774211ce3a37179119b4a3e4db7929cefe4ea"} Jan 31 07:59:58 crc kubenswrapper[4826]: I0131 07:59:58.267179 4826 scope.go:117] "RemoveContainer" containerID="316b3d553b5671d98b8431682183e3a4f3aaa9a7a42b0254a3052bbf98543c03" Jan 31 08:00:00 crc kubenswrapper[4826]: I0131 08:00:00.158876 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497440-74slg"] Jan 31 08:00:00 crc kubenswrapper[4826]: I0131 08:00:00.161000 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497440-74slg" Jan 31 08:00:00 crc kubenswrapper[4826]: I0131 08:00:00.165560 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 08:00:00 crc kubenswrapper[4826]: I0131 08:00:00.166000 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 08:00:00 crc kubenswrapper[4826]: I0131 08:00:00.180444 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497440-74slg"] Jan 31 08:00:00 crc kubenswrapper[4826]: I0131 08:00:00.277107 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/450cfa0c-8bd8-4400-8c22-044409770c26-secret-volume\") pod \"collect-profiles-29497440-74slg\" (UID: \"450cfa0c-8bd8-4400-8c22-044409770c26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497440-74slg" Jan 31 08:00:00 crc kubenswrapper[4826]: I0131 08:00:00.277282 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxscv\" (UniqueName: \"kubernetes.io/projected/450cfa0c-8bd8-4400-8c22-044409770c26-kube-api-access-wxscv\") pod \"collect-profiles-29497440-74slg\" (UID: \"450cfa0c-8bd8-4400-8c22-044409770c26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497440-74slg" Jan 31 08:00:00 crc kubenswrapper[4826]: I0131 08:00:00.277343 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/450cfa0c-8bd8-4400-8c22-044409770c26-config-volume\") pod \"collect-profiles-29497440-74slg\" (UID: \"450cfa0c-8bd8-4400-8c22-044409770c26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497440-74slg" Jan 31 08:00:00 crc kubenswrapper[4826]: I0131 08:00:00.378507 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxscv\" (UniqueName: \"kubernetes.io/projected/450cfa0c-8bd8-4400-8c22-044409770c26-kube-api-access-wxscv\") pod \"collect-profiles-29497440-74slg\" (UID: \"450cfa0c-8bd8-4400-8c22-044409770c26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497440-74slg" Jan 31 08:00:00 crc kubenswrapper[4826]: I0131 08:00:00.378848 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/450cfa0c-8bd8-4400-8c22-044409770c26-config-volume\") pod \"collect-profiles-29497440-74slg\" (UID: \"450cfa0c-8bd8-4400-8c22-044409770c26\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29497440-74slg" Jan 31 08:00:00 crc kubenswrapper[4826]: I0131 08:00:00.378995 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/450cfa0c-8bd8-4400-8c22-044409770c26-secret-volume\") pod \"collect-profiles-29497440-74slg\" (UID: \"450cfa0c-8bd8-4400-8c22-044409770c26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497440-74slg" Jan 31 08:00:00 crc kubenswrapper[4826]: I0131 08:00:00.380571 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/450cfa0c-8bd8-4400-8c22-044409770c26-config-volume\") pod \"collect-profiles-29497440-74slg\" (UID: \"450cfa0c-8bd8-4400-8c22-044409770c26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497440-74slg" Jan 31 08:00:00 crc kubenswrapper[4826]: I0131 08:00:00.385160 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/450cfa0c-8bd8-4400-8c22-044409770c26-secret-volume\") pod \"collect-profiles-29497440-74slg\" (UID: \"450cfa0c-8bd8-4400-8c22-044409770c26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497440-74slg" Jan 31 08:00:00 crc kubenswrapper[4826]: I0131 08:00:00.399336 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxscv\" (UniqueName: \"kubernetes.io/projected/450cfa0c-8bd8-4400-8c22-044409770c26-kube-api-access-wxscv\") pod \"collect-profiles-29497440-74slg\" (UID: \"450cfa0c-8bd8-4400-8c22-044409770c26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497440-74slg" Jan 31 08:00:00 crc kubenswrapper[4826]: I0131 08:00:00.490860 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497440-74slg" Jan 31 08:00:00 crc kubenswrapper[4826]: I0131 08:00:00.927247 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497440-74slg"] Jan 31 08:00:01 crc kubenswrapper[4826]: I0131 08:00:01.298898 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497440-74slg" event={"ID":"450cfa0c-8bd8-4400-8c22-044409770c26","Type":"ContainerStarted","Data":"fd09d2464ee639233a99e74b310063a820f761e99a87b4f1ad3dbb46fd9f32eb"} Jan 31 08:00:02 crc kubenswrapper[4826]: I0131 08:00:02.308571 4826 generic.go:334] "Generic (PLEG): container finished" podID="450cfa0c-8bd8-4400-8c22-044409770c26" containerID="fd4c514394b6cab65fe5ed14d8932a539929896ea5ad0499552d24475af459ca" exitCode=0 Jan 31 08:00:02 crc kubenswrapper[4826]: I0131 08:00:02.308682 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497440-74slg" event={"ID":"450cfa0c-8bd8-4400-8c22-044409770c26","Type":"ContainerDied","Data":"fd4c514394b6cab65fe5ed14d8932a539929896ea5ad0499552d24475af459ca"} Jan 31 08:00:03 crc kubenswrapper[4826]: I0131 08:00:03.669563 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497440-74slg" Jan 31 08:00:03 crc kubenswrapper[4826]: I0131 08:00:03.744557 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/450cfa0c-8bd8-4400-8c22-044409770c26-secret-volume\") pod \"450cfa0c-8bd8-4400-8c22-044409770c26\" (UID: \"450cfa0c-8bd8-4400-8c22-044409770c26\") " Jan 31 08:00:03 crc kubenswrapper[4826]: I0131 08:00:03.744777 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxscv\" (UniqueName: \"kubernetes.io/projected/450cfa0c-8bd8-4400-8c22-044409770c26-kube-api-access-wxscv\") pod \"450cfa0c-8bd8-4400-8c22-044409770c26\" (UID: \"450cfa0c-8bd8-4400-8c22-044409770c26\") " Jan 31 08:00:03 crc kubenswrapper[4826]: I0131 08:00:03.744925 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/450cfa0c-8bd8-4400-8c22-044409770c26-config-volume\") pod \"450cfa0c-8bd8-4400-8c22-044409770c26\" (UID: \"450cfa0c-8bd8-4400-8c22-044409770c26\") " Jan 31 08:00:03 crc kubenswrapper[4826]: I0131 08:00:03.745857 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/450cfa0c-8bd8-4400-8c22-044409770c26-config-volume" (OuterVolumeSpecName: "config-volume") pod "450cfa0c-8bd8-4400-8c22-044409770c26" (UID: "450cfa0c-8bd8-4400-8c22-044409770c26"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 08:00:03 crc kubenswrapper[4826]: I0131 08:00:03.750690 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/450cfa0c-8bd8-4400-8c22-044409770c26-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "450cfa0c-8bd8-4400-8c22-044409770c26" (UID: "450cfa0c-8bd8-4400-8c22-044409770c26"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:00:03 crc kubenswrapper[4826]: I0131 08:00:03.758488 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/450cfa0c-8bd8-4400-8c22-044409770c26-kube-api-access-wxscv" (OuterVolumeSpecName: "kube-api-access-wxscv") pod "450cfa0c-8bd8-4400-8c22-044409770c26" (UID: "450cfa0c-8bd8-4400-8c22-044409770c26"). InnerVolumeSpecName "kube-api-access-wxscv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:00:03 crc kubenswrapper[4826]: I0131 08:00:03.846764 4826 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/450cfa0c-8bd8-4400-8c22-044409770c26-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 08:00:03 crc kubenswrapper[4826]: I0131 08:00:03.846816 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxscv\" (UniqueName: \"kubernetes.io/projected/450cfa0c-8bd8-4400-8c22-044409770c26-kube-api-access-wxscv\") on node \"crc\" DevicePath \"\"" Jan 31 08:00:03 crc kubenswrapper[4826]: I0131 08:00:03.846828 4826 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/450cfa0c-8bd8-4400-8c22-044409770c26-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 08:00:04 crc kubenswrapper[4826]: I0131 08:00:04.325886 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497440-74slg" event={"ID":"450cfa0c-8bd8-4400-8c22-044409770c26","Type":"ContainerDied","Data":"fd09d2464ee639233a99e74b310063a820f761e99a87b4f1ad3dbb46fd9f32eb"} Jan 31 08:00:04 crc kubenswrapper[4826]: I0131 08:00:04.325930 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd09d2464ee639233a99e74b310063a820f761e99a87b4f1ad3dbb46fd9f32eb" Jan 31 08:00:04 crc kubenswrapper[4826]: I0131 08:00:04.325936 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497440-74slg" Jan 31 08:01:00 crc kubenswrapper[4826]: I0131 08:01:00.160197 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29497441-5w8mr"] Jan 31 08:01:00 crc kubenswrapper[4826]: E0131 08:01:00.161252 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="450cfa0c-8bd8-4400-8c22-044409770c26" containerName="collect-profiles" Jan 31 08:01:00 crc kubenswrapper[4826]: I0131 08:01:00.161267 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="450cfa0c-8bd8-4400-8c22-044409770c26" containerName="collect-profiles" Jan 31 08:01:00 crc kubenswrapper[4826]: I0131 08:01:00.161489 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="450cfa0c-8bd8-4400-8c22-044409770c26" containerName="collect-profiles" Jan 31 08:01:00 crc kubenswrapper[4826]: I0131 08:01:00.162292 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29497441-5w8mr" Jan 31 08:01:00 crc kubenswrapper[4826]: I0131 08:01:00.171724 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29497441-5w8mr"] Jan 31 08:01:00 crc kubenswrapper[4826]: I0131 08:01:00.232640 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6mhx\" (UniqueName: \"kubernetes.io/projected/04acf005-673c-4a09-b98d-ab5bb3903c71-kube-api-access-v6mhx\") pod \"keystone-cron-29497441-5w8mr\" (UID: \"04acf005-673c-4a09-b98d-ab5bb3903c71\") " pod="openstack/keystone-cron-29497441-5w8mr" Jan 31 08:01:00 crc kubenswrapper[4826]: I0131 08:01:00.232736 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/04acf005-673c-4a09-b98d-ab5bb3903c71-fernet-keys\") pod \"keystone-cron-29497441-5w8mr\" (UID: \"04acf005-673c-4a09-b98d-ab5bb3903c71\") " pod="openstack/keystone-cron-29497441-5w8mr" Jan 31 08:01:00 crc kubenswrapper[4826]: I0131 08:01:00.232795 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04acf005-673c-4a09-b98d-ab5bb3903c71-combined-ca-bundle\") pod \"keystone-cron-29497441-5w8mr\" (UID: \"04acf005-673c-4a09-b98d-ab5bb3903c71\") " pod="openstack/keystone-cron-29497441-5w8mr" Jan 31 08:01:00 crc kubenswrapper[4826]: I0131 08:01:00.233259 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04acf005-673c-4a09-b98d-ab5bb3903c71-config-data\") pod \"keystone-cron-29497441-5w8mr\" (UID: \"04acf005-673c-4a09-b98d-ab5bb3903c71\") " pod="openstack/keystone-cron-29497441-5w8mr" Jan 31 08:01:00 crc kubenswrapper[4826]: I0131 08:01:00.334910 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04acf005-673c-4a09-b98d-ab5bb3903c71-combined-ca-bundle\") pod \"keystone-cron-29497441-5w8mr\" (UID: \"04acf005-673c-4a09-b98d-ab5bb3903c71\") " pod="openstack/keystone-cron-29497441-5w8mr" Jan 31 08:01:00 crc kubenswrapper[4826]: I0131 08:01:00.335051 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04acf005-673c-4a09-b98d-ab5bb3903c71-config-data\") pod \"keystone-cron-29497441-5w8mr\" (UID: \"04acf005-673c-4a09-b98d-ab5bb3903c71\") " pod="openstack/keystone-cron-29497441-5w8mr" Jan 31 08:01:00 crc kubenswrapper[4826]: I0131 08:01:00.335090 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6mhx\" (UniqueName: \"kubernetes.io/projected/04acf005-673c-4a09-b98d-ab5bb3903c71-kube-api-access-v6mhx\") pod \"keystone-cron-29497441-5w8mr\" (UID: \"04acf005-673c-4a09-b98d-ab5bb3903c71\") " pod="openstack/keystone-cron-29497441-5w8mr" Jan 31 08:01:00 crc kubenswrapper[4826]: I0131 08:01:00.335151 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/04acf005-673c-4a09-b98d-ab5bb3903c71-fernet-keys\") pod \"keystone-cron-29497441-5w8mr\" (UID: \"04acf005-673c-4a09-b98d-ab5bb3903c71\") " pod="openstack/keystone-cron-29497441-5w8mr" Jan 31 08:01:00 crc kubenswrapper[4826]: I0131 08:01:00.341109 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/04acf005-673c-4a09-b98d-ab5bb3903c71-fernet-keys\") pod \"keystone-cron-29497441-5w8mr\" (UID: \"04acf005-673c-4a09-b98d-ab5bb3903c71\") " pod="openstack/keystone-cron-29497441-5w8mr" Jan 31 08:01:00 crc kubenswrapper[4826]: I0131 08:01:00.341296 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04acf005-673c-4a09-b98d-ab5bb3903c71-config-data\") pod \"keystone-cron-29497441-5w8mr\" (UID: \"04acf005-673c-4a09-b98d-ab5bb3903c71\") " pod="openstack/keystone-cron-29497441-5w8mr" Jan 31 08:01:00 crc kubenswrapper[4826]: I0131 08:01:00.343538 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04acf005-673c-4a09-b98d-ab5bb3903c71-combined-ca-bundle\") pod \"keystone-cron-29497441-5w8mr\" (UID: \"04acf005-673c-4a09-b98d-ab5bb3903c71\") " pod="openstack/keystone-cron-29497441-5w8mr" Jan 31 08:01:00 crc kubenswrapper[4826]: I0131 08:01:00.358371 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6mhx\" (UniqueName: \"kubernetes.io/projected/04acf005-673c-4a09-b98d-ab5bb3903c71-kube-api-access-v6mhx\") pod \"keystone-cron-29497441-5w8mr\" (UID: \"04acf005-673c-4a09-b98d-ab5bb3903c71\") " pod="openstack/keystone-cron-29497441-5w8mr" Jan 31 08:01:00 crc kubenswrapper[4826]: I0131 08:01:00.494248 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29497441-5w8mr" Jan 31 08:01:00 crc kubenswrapper[4826]: I0131 08:01:00.951556 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29497441-5w8mr"] Jan 31 08:01:01 crc kubenswrapper[4826]: I0131 08:01:01.894163 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29497441-5w8mr" event={"ID":"04acf005-673c-4a09-b98d-ab5bb3903c71","Type":"ContainerStarted","Data":"a56c974da33859171f78bcdaf6bb2a5157c0dff2b3378ba2ed17a019b2125102"} Jan 31 08:01:01 crc kubenswrapper[4826]: I0131 08:01:01.894767 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29497441-5w8mr" event={"ID":"04acf005-673c-4a09-b98d-ab5bb3903c71","Type":"ContainerStarted","Data":"2eb1c1d47a89879905fc3fc2c315087f5b80b20d6063cea66ece82d77b378a45"} Jan 31 08:01:01 crc kubenswrapper[4826]: I0131 08:01:01.917005 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29497441-5w8mr" podStartSLOduration=1.9169607279999998 podStartE2EDuration="1.916960728s" podCreationTimestamp="2026-01-31 08:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 08:01:01.91422438 +0000 UTC m=+1493.768110739" watchObservedRunningTime="2026-01-31 08:01:01.916960728 +0000 UTC m=+1493.770847087" Jan 31 08:01:03 crc kubenswrapper[4826]: I0131 08:01:03.914605 4826 generic.go:334] "Generic (PLEG): container finished" podID="04acf005-673c-4a09-b98d-ab5bb3903c71" containerID="a56c974da33859171f78bcdaf6bb2a5157c0dff2b3378ba2ed17a019b2125102" exitCode=0 Jan 31 08:01:03 crc kubenswrapper[4826]: I0131 08:01:03.914704 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29497441-5w8mr" event={"ID":"04acf005-673c-4a09-b98d-ab5bb3903c71","Type":"ContainerDied","Data":"a56c974da33859171f78bcdaf6bb2a5157c0dff2b3378ba2ed17a019b2125102"} Jan 31 08:01:05 crc 
kubenswrapper[4826]: I0131 08:01:05.255122 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29497441-5w8mr" Jan 31 08:01:05 crc kubenswrapper[4826]: I0131 08:01:05.383701 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6mhx\" (UniqueName: \"kubernetes.io/projected/04acf005-673c-4a09-b98d-ab5bb3903c71-kube-api-access-v6mhx\") pod \"04acf005-673c-4a09-b98d-ab5bb3903c71\" (UID: \"04acf005-673c-4a09-b98d-ab5bb3903c71\") " Jan 31 08:01:05 crc kubenswrapper[4826]: I0131 08:01:05.383780 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/04acf005-673c-4a09-b98d-ab5bb3903c71-fernet-keys\") pod \"04acf005-673c-4a09-b98d-ab5bb3903c71\" (UID: \"04acf005-673c-4a09-b98d-ab5bb3903c71\") " Jan 31 08:01:05 crc kubenswrapper[4826]: I0131 08:01:05.383926 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04acf005-673c-4a09-b98d-ab5bb3903c71-combined-ca-bundle\") pod \"04acf005-673c-4a09-b98d-ab5bb3903c71\" (UID: \"04acf005-673c-4a09-b98d-ab5bb3903c71\") " Jan 31 08:01:05 crc kubenswrapper[4826]: I0131 08:01:05.384081 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04acf005-673c-4a09-b98d-ab5bb3903c71-config-data\") pod \"04acf005-673c-4a09-b98d-ab5bb3903c71\" (UID: \"04acf005-673c-4a09-b98d-ab5bb3903c71\") " Jan 31 08:01:05 crc kubenswrapper[4826]: I0131 08:01:05.390503 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04acf005-673c-4a09-b98d-ab5bb3903c71-kube-api-access-v6mhx" (OuterVolumeSpecName: "kube-api-access-v6mhx") pod "04acf005-673c-4a09-b98d-ab5bb3903c71" (UID: "04acf005-673c-4a09-b98d-ab5bb3903c71"). InnerVolumeSpecName "kube-api-access-v6mhx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:01:05 crc kubenswrapper[4826]: I0131 08:01:05.395692 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04acf005-673c-4a09-b98d-ab5bb3903c71-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "04acf005-673c-4a09-b98d-ab5bb3903c71" (UID: "04acf005-673c-4a09-b98d-ab5bb3903c71"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:01:05 crc kubenswrapper[4826]: I0131 08:01:05.411669 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04acf005-673c-4a09-b98d-ab5bb3903c71-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04acf005-673c-4a09-b98d-ab5bb3903c71" (UID: "04acf005-673c-4a09-b98d-ab5bb3903c71"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:01:05 crc kubenswrapper[4826]: I0131 08:01:05.451708 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04acf005-673c-4a09-b98d-ab5bb3903c71-config-data" (OuterVolumeSpecName: "config-data") pod "04acf005-673c-4a09-b98d-ab5bb3903c71" (UID: "04acf005-673c-4a09-b98d-ab5bb3903c71"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:01:05 crc kubenswrapper[4826]: I0131 08:01:05.486088 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6mhx\" (UniqueName: \"kubernetes.io/projected/04acf005-673c-4a09-b98d-ab5bb3903c71-kube-api-access-v6mhx\") on node \"crc\" DevicePath \"\"" Jan 31 08:01:05 crc kubenswrapper[4826]: I0131 08:01:05.486118 4826 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/04acf005-673c-4a09-b98d-ab5bb3903c71-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 31 08:01:05 crc kubenswrapper[4826]: I0131 08:01:05.486130 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04acf005-673c-4a09-b98d-ab5bb3903c71-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 08:01:05 crc kubenswrapper[4826]: I0131 08:01:05.486142 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04acf005-673c-4a09-b98d-ab5bb3903c71-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 08:01:05 crc kubenswrapper[4826]: I0131 08:01:05.948905 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29497441-5w8mr" event={"ID":"04acf005-673c-4a09-b98d-ab5bb3903c71","Type":"ContainerDied","Data":"2eb1c1d47a89879905fc3fc2c315087f5b80b20d6063cea66ece82d77b378a45"} Jan 31 08:01:05 crc kubenswrapper[4826]: I0131 08:01:05.948951 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2eb1c1d47a89879905fc3fc2c315087f5b80b20d6063cea66ece82d77b378a45" Jan 31 08:01:05 crc kubenswrapper[4826]: I0131 08:01:05.949128 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29497441-5w8mr" Jan 31 08:01:13 crc kubenswrapper[4826]: I0131 08:01:13.009158 4826 generic.go:334] "Generic (PLEG): container finished" podID="a84a288f-097a-4f5b-acee-09d5c7d34abf" containerID="a36090a5d834978450624d390a7ccf2a4cea44d603c5861188f9cd0542202b6f" exitCode=0 Jan 31 08:01:13 crc kubenswrapper[4826]: I0131 08:01:13.009235 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-77snk" event={"ID":"a84a288f-097a-4f5b-acee-09d5c7d34abf","Type":"ContainerDied","Data":"a36090a5d834978450624d390a7ccf2a4cea44d603c5861188f9cd0542202b6f"} Jan 31 08:01:13 crc kubenswrapper[4826]: I0131 08:01:13.811532 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kv7nr"] Jan 31 08:01:13 crc kubenswrapper[4826]: E0131 08:01:13.812605 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04acf005-673c-4a09-b98d-ab5bb3903c71" containerName="keystone-cron" Jan 31 08:01:13 crc kubenswrapper[4826]: I0131 08:01:13.812631 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="04acf005-673c-4a09-b98d-ab5bb3903c71" containerName="keystone-cron" Jan 31 08:01:13 crc kubenswrapper[4826]: I0131 08:01:13.812897 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="04acf005-673c-4a09-b98d-ab5bb3903c71" containerName="keystone-cron" Jan 31 08:01:13 crc kubenswrapper[4826]: I0131 08:01:13.814593 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kv7nr" Jan 31 08:01:13 crc kubenswrapper[4826]: I0131 08:01:13.825425 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kv7nr"] Jan 31 08:01:13 crc kubenswrapper[4826]: I0131 08:01:13.938137 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jcxl\" (UniqueName: \"kubernetes.io/projected/f660bf57-b482-4214-9c90-f1e7bd010101-kube-api-access-2jcxl\") pod \"community-operators-kv7nr\" (UID: \"f660bf57-b482-4214-9c90-f1e7bd010101\") " pod="openshift-marketplace/community-operators-kv7nr" Jan 31 08:01:13 crc kubenswrapper[4826]: I0131 08:01:13.938270 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f660bf57-b482-4214-9c90-f1e7bd010101-utilities\") pod \"community-operators-kv7nr\" (UID: \"f660bf57-b482-4214-9c90-f1e7bd010101\") " pod="openshift-marketplace/community-operators-kv7nr" Jan 31 08:01:13 crc kubenswrapper[4826]: I0131 08:01:13.938348 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f660bf57-b482-4214-9c90-f1e7bd010101-catalog-content\") pod \"community-operators-kv7nr\" (UID: \"f660bf57-b482-4214-9c90-f1e7bd010101\") " pod="openshift-marketplace/community-operators-kv7nr" Jan 31 08:01:14 crc kubenswrapper[4826]: I0131 08:01:14.040120 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jcxl\" (UniqueName: \"kubernetes.io/projected/f660bf57-b482-4214-9c90-f1e7bd010101-kube-api-access-2jcxl\") pod \"community-operators-kv7nr\" (UID: \"f660bf57-b482-4214-9c90-f1e7bd010101\") " pod="openshift-marketplace/community-operators-kv7nr" Jan 31 08:01:14 crc kubenswrapper[4826]: I0131 08:01:14.040201 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f660bf57-b482-4214-9c90-f1e7bd010101-utilities\") pod \"community-operators-kv7nr\" (UID: \"f660bf57-b482-4214-9c90-f1e7bd010101\") " pod="openshift-marketplace/community-operators-kv7nr" Jan 31 08:01:14 crc kubenswrapper[4826]: I0131 08:01:14.040265 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f660bf57-b482-4214-9c90-f1e7bd010101-catalog-content\") pod \"community-operators-kv7nr\" (UID: \"f660bf57-b482-4214-9c90-f1e7bd010101\") " pod="openshift-marketplace/community-operators-kv7nr" Jan 31 08:01:14 crc kubenswrapper[4826]: I0131 08:01:14.040791 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f660bf57-b482-4214-9c90-f1e7bd010101-catalog-content\") pod \"community-operators-kv7nr\" (UID: \"f660bf57-b482-4214-9c90-f1e7bd010101\") " pod="openshift-marketplace/community-operators-kv7nr" Jan 31 08:01:14 crc kubenswrapper[4826]: I0131 08:01:14.040996 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f660bf57-b482-4214-9c90-f1e7bd010101-utilities\") pod \"community-operators-kv7nr\" (UID: \"f660bf57-b482-4214-9c90-f1e7bd010101\") " pod="openshift-marketplace/community-operators-kv7nr" Jan 31 08:01:14 crc kubenswrapper[4826]: I0131 08:01:14.070081 4826 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2jcxl\" (UniqueName: \"kubernetes.io/projected/f660bf57-b482-4214-9c90-f1e7bd010101-kube-api-access-2jcxl\") pod \"community-operators-kv7nr\" (UID: \"f660bf57-b482-4214-9c90-f1e7bd010101\") " pod="openshift-marketplace/community-operators-kv7nr" Jan 31 08:01:14 crc kubenswrapper[4826]: I0131 08:01:14.131404 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kv7nr" Jan 31 08:01:14 crc kubenswrapper[4826]: I0131 08:01:14.472314 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-77snk" Jan 31 08:01:14 crc kubenswrapper[4826]: I0131 08:01:14.549005 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5j62q\" (UniqueName: \"kubernetes.io/projected/a84a288f-097a-4f5b-acee-09d5c7d34abf-kube-api-access-5j62q\") pod \"a84a288f-097a-4f5b-acee-09d5c7d34abf\" (UID: \"a84a288f-097a-4f5b-acee-09d5c7d34abf\") " Jan 31 08:01:14 crc kubenswrapper[4826]: I0131 08:01:14.549343 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a84a288f-097a-4f5b-acee-09d5c7d34abf-inventory\") pod \"a84a288f-097a-4f5b-acee-09d5c7d34abf\" (UID: \"a84a288f-097a-4f5b-acee-09d5c7d34abf\") " Jan 31 08:01:14 crc kubenswrapper[4826]: I0131 08:01:14.549376 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a84a288f-097a-4f5b-acee-09d5c7d34abf-ssh-key-openstack-edpm-ipam\") pod \"a84a288f-097a-4f5b-acee-09d5c7d34abf\" (UID: \"a84a288f-097a-4f5b-acee-09d5c7d34abf\") " Jan 31 08:01:14 crc kubenswrapper[4826]: I0131 08:01:14.550081 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a84a288f-097a-4f5b-acee-09d5c7d34abf-bootstrap-combined-ca-bundle\") pod \"a84a288f-097a-4f5b-acee-09d5c7d34abf\" (UID: \"a84a288f-097a-4f5b-acee-09d5c7d34abf\") " Jan 31 08:01:14 crc kubenswrapper[4826]: I0131 08:01:14.554005 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a84a288f-097a-4f5b-acee-09d5c7d34abf-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "a84a288f-097a-4f5b-acee-09d5c7d34abf" (UID: "a84a288f-097a-4f5b-acee-09d5c7d34abf"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:01:14 crc kubenswrapper[4826]: I0131 08:01:14.554590 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a84a288f-097a-4f5b-acee-09d5c7d34abf-kube-api-access-5j62q" (OuterVolumeSpecName: "kube-api-access-5j62q") pod "a84a288f-097a-4f5b-acee-09d5c7d34abf" (UID: "a84a288f-097a-4f5b-acee-09d5c7d34abf"). InnerVolumeSpecName "kube-api-access-5j62q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:01:14 crc kubenswrapper[4826]: I0131 08:01:14.582018 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a84a288f-097a-4f5b-acee-09d5c7d34abf-inventory" (OuterVolumeSpecName: "inventory") pod "a84a288f-097a-4f5b-acee-09d5c7d34abf" (UID: "a84a288f-097a-4f5b-acee-09d5c7d34abf"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:01:14 crc kubenswrapper[4826]: I0131 08:01:14.588730 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a84a288f-097a-4f5b-acee-09d5c7d34abf-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a84a288f-097a-4f5b-acee-09d5c7d34abf" (UID: "a84a288f-097a-4f5b-acee-09d5c7d34abf"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:01:14 crc kubenswrapper[4826]: I0131 08:01:14.652216 4826 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a84a288f-097a-4f5b-acee-09d5c7d34abf-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 08:01:14 crc kubenswrapper[4826]: I0131 08:01:14.652262 4826 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a84a288f-097a-4f5b-acee-09d5c7d34abf-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 08:01:14 crc kubenswrapper[4826]: I0131 08:01:14.652276 4826 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a84a288f-097a-4f5b-acee-09d5c7d34abf-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 08:01:14 crc kubenswrapper[4826]: I0131 08:01:14.652285 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5j62q\" (UniqueName: \"kubernetes.io/projected/a84a288f-097a-4f5b-acee-09d5c7d34abf-kube-api-access-5j62q\") on node \"crc\" DevicePath \"\"" Jan 31 08:01:14 crc kubenswrapper[4826]: I0131 08:01:14.760706 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kv7nr"] Jan 31 08:01:14 crc kubenswrapper[4826]: E0131 08:01:14.880632 4826 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda84a288f_097a_4f5b_acee_09d5c7d34abf.slice\": RecentStats: unable to find data in memory cache]" Jan 31 08:01:15 crc kubenswrapper[4826]: I0131 08:01:15.026487 4826 generic.go:334] "Generic (PLEG): container finished" podID="f660bf57-b482-4214-9c90-f1e7bd010101" containerID="6e6deb461fb99659eaf4dc13ee9c455c48d724ce7054cd5236b918e2b320a2ad" exitCode=0 Jan 31 08:01:15 crc kubenswrapper[4826]: I0131 08:01:15.026686 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kv7nr" event={"ID":"f660bf57-b482-4214-9c90-f1e7bd010101","Type":"ContainerDied","Data":"6e6deb461fb99659eaf4dc13ee9c455c48d724ce7054cd5236b918e2b320a2ad"} Jan 31 08:01:15 crc kubenswrapper[4826]: I0131 08:01:15.026993 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kv7nr" event={"ID":"f660bf57-b482-4214-9c90-f1e7bd010101","Type":"ContainerStarted","Data":"8fc393c1594762b35880811bb3bd596c5163ec709055a72991bcecfbe8c71dd8"} Jan 31 08:01:15 crc kubenswrapper[4826]: I0131 08:01:15.028483 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-77snk" event={"ID":"a84a288f-097a-4f5b-acee-09d5c7d34abf","Type":"ContainerDied","Data":"8f209908f4cc18dca8a0bb352408c3a5fae6b07c194fece59cd4a39fd29e0cde"} Jan 31 08:01:15 crc kubenswrapper[4826]: I0131 08:01:15.028543 4826 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="8f209908f4cc18dca8a0bb352408c3a5fae6b07c194fece59cd4a39fd29e0cde" Jan 31 08:01:15 crc kubenswrapper[4826]: I0131 08:01:15.028611 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-77snk" Jan 31 08:01:15 crc kubenswrapper[4826]: I0131 08:01:15.029075 4826 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 08:01:15 crc kubenswrapper[4826]: I0131 08:01:15.096603 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dvtj7"] Jan 31 08:01:15 crc kubenswrapper[4826]: E0131 08:01:15.097107 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a84a288f-097a-4f5b-acee-09d5c7d34abf" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 31 08:01:15 crc kubenswrapper[4826]: I0131 08:01:15.097124 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="a84a288f-097a-4f5b-acee-09d5c7d34abf" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 31 08:01:15 crc kubenswrapper[4826]: I0131 08:01:15.097346 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="a84a288f-097a-4f5b-acee-09d5c7d34abf" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 31 08:01:15 crc kubenswrapper[4826]: I0131 08:01:15.098100 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dvtj7" Jan 31 08:01:15 crc kubenswrapper[4826]: I0131 08:01:15.100109 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 08:01:15 crc kubenswrapper[4826]: I0131 08:01:15.100246 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 08:01:15 crc kubenswrapper[4826]: I0131 08:01:15.100392 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hbmks" Jan 31 08:01:15 crc kubenswrapper[4826]: I0131 08:01:15.100518 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 08:01:15 crc kubenswrapper[4826]: I0131 08:01:15.105616 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dvtj7"] Jan 31 08:01:15 crc kubenswrapper[4826]: I0131 08:01:15.160805 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6h9m\" (UniqueName: \"kubernetes.io/projected/34e8ee4d-b6b1-40b8-8012-e91226161f75-kube-api-access-t6h9m\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dvtj7\" (UID: \"34e8ee4d-b6b1-40b8-8012-e91226161f75\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dvtj7" Jan 31 08:01:15 crc kubenswrapper[4826]: I0131 08:01:15.160911 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34e8ee4d-b6b1-40b8-8012-e91226161f75-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dvtj7\" (UID: \"34e8ee4d-b6b1-40b8-8012-e91226161f75\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dvtj7" Jan 31 08:01:15 crc kubenswrapper[4826]: I0131 08:01:15.161259 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/34e8ee4d-b6b1-40b8-8012-e91226161f75-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dvtj7\" (UID: \"34e8ee4d-b6b1-40b8-8012-e91226161f75\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dvtj7" Jan 31 08:01:15 crc kubenswrapper[4826]: I0131 08:01:15.263742 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/34e8ee4d-b6b1-40b8-8012-e91226161f75-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dvtj7\" (UID: \"34e8ee4d-b6b1-40b8-8012-e91226161f75\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dvtj7" Jan 31 08:01:15 crc kubenswrapper[4826]: I0131 08:01:15.263864 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6h9m\" (UniqueName: \"kubernetes.io/projected/34e8ee4d-b6b1-40b8-8012-e91226161f75-kube-api-access-t6h9m\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dvtj7\" (UID: \"34e8ee4d-b6b1-40b8-8012-e91226161f75\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dvtj7" Jan 31 08:01:15 crc kubenswrapper[4826]: I0131 08:01:15.263895 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34e8ee4d-b6b1-40b8-8012-e91226161f75-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dvtj7\" (UID: \"34e8ee4d-b6b1-40b8-8012-e91226161f75\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dvtj7" Jan 31 08:01:15 crc kubenswrapper[4826]: I0131 08:01:15.269719 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34e8ee4d-b6b1-40b8-8012-e91226161f75-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dvtj7\" (UID: \"34e8ee4d-b6b1-40b8-8012-e91226161f75\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dvtj7" Jan 31 08:01:15 crc kubenswrapper[4826]: I0131 08:01:15.269931 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/34e8ee4d-b6b1-40b8-8012-e91226161f75-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dvtj7\" (UID: \"34e8ee4d-b6b1-40b8-8012-e91226161f75\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dvtj7" Jan 31 08:01:15 crc kubenswrapper[4826]: I0131 08:01:15.283286 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6h9m\" (UniqueName: \"kubernetes.io/projected/34e8ee4d-b6b1-40b8-8012-e91226161f75-kube-api-access-t6h9m\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dvtj7\" (UID: \"34e8ee4d-b6b1-40b8-8012-e91226161f75\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dvtj7" Jan 31 08:01:15 crc kubenswrapper[4826]: I0131 08:01:15.445050 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dvtj7" Jan 31 08:01:15 crc kubenswrapper[4826]: I0131 08:01:15.961061 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dvtj7"] Jan 31 08:01:15 crc kubenswrapper[4826]: W0131 08:01:15.965511 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34e8ee4d_b6b1_40b8_8012_e91226161f75.slice/crio-e0f3d4975393c198b015401967a8f70b67ff9a9da84c72d4675cf2b4ac3b29b1 WatchSource:0}: Error finding container e0f3d4975393c198b015401967a8f70b67ff9a9da84c72d4675cf2b4ac3b29b1: Status 404 returned error can't find the container with id e0f3d4975393c198b015401967a8f70b67ff9a9da84c72d4675cf2b4ac3b29b1 Jan 31 08:01:16 crc kubenswrapper[4826]: I0131 08:01:16.038806 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dvtj7" event={"ID":"34e8ee4d-b6b1-40b8-8012-e91226161f75","Type":"ContainerStarted","Data":"e0f3d4975393c198b015401967a8f70b67ff9a9da84c72d4675cf2b4ac3b29b1"} Jan 31 08:01:16 crc kubenswrapper[4826]: I0131 08:01:16.041957 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kv7nr" event={"ID":"f660bf57-b482-4214-9c90-f1e7bd010101","Type":"ContainerStarted","Data":"8fd4f71e93c6ad592ab29c4b3aec8de578a774eb0c5a6848366e386d7c8ca71d"} Jan 31 08:01:17 crc kubenswrapper[4826]: I0131 08:01:17.054960 4826 generic.go:334] "Generic (PLEG): container finished" podID="f660bf57-b482-4214-9c90-f1e7bd010101" containerID="8fd4f71e93c6ad592ab29c4b3aec8de578a774eb0c5a6848366e386d7c8ca71d" exitCode=0 Jan 31 08:01:17 crc kubenswrapper[4826]: I0131 08:01:17.055367 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kv7nr" event={"ID":"f660bf57-b482-4214-9c90-f1e7bd010101","Type":"ContainerDied","Data":"8fd4f71e93c6ad592ab29c4b3aec8de578a774eb0c5a6848366e386d7c8ca71d"} Jan 31 08:01:17 crc kubenswrapper[4826]: I0131 08:01:17.061828 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dvtj7" event={"ID":"34e8ee4d-b6b1-40b8-8012-e91226161f75","Type":"ContainerStarted","Data":"2255a62fa3b70bc6a6821bd641820ee21eea584b083ec1359b78f50a96460c0c"} Jan 31 08:01:17 crc kubenswrapper[4826]: I0131 08:01:17.103693 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dvtj7" podStartSLOduration=1.686581247 podStartE2EDuration="2.103675613s" podCreationTimestamp="2026-01-31 08:01:15 +0000 UTC" firstStartedPulling="2026-01-31 08:01:15.968101939 +0000 UTC m=+1507.821988378" lastFinishedPulling="2026-01-31 08:01:16.385196375 +0000 UTC m=+1508.239082744" observedRunningTime="2026-01-31 08:01:17.096495799 +0000 UTC m=+1508.950382168" watchObservedRunningTime="2026-01-31 08:01:17.103675613 +0000 UTC m=+1508.957561972" Jan 31 08:01:19 crc kubenswrapper[4826]: I0131 08:01:19.080740 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kv7nr" event={"ID":"f660bf57-b482-4214-9c90-f1e7bd010101","Type":"ContainerStarted","Data":"036433bd2f7f4cf5610f698e98c7ae70a3e2b41f17770df139b1c7b1e71772de"} Jan 31 08:01:19 crc kubenswrapper[4826]: I0131 08:01:19.099975 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-kv7nr" podStartSLOduration=3.149930573 podStartE2EDuration="6.099952907s" podCreationTimestamp="2026-01-31 08:01:13 +0000 UTC" firstStartedPulling="2026-01-31 08:01:15.028661114 +0000 UTC m=+1506.882547473" lastFinishedPulling="2026-01-31 08:01:17.978683438 +0000 UTC m=+1509.832569807" observedRunningTime="2026-01-31 08:01:19.096966832 +0000 UTC m=+1510.950853191" watchObservedRunningTime="2026-01-31 08:01:19.099952907 +0000 UTC m=+1510.953839266" Jan 31 08:01:24 crc kubenswrapper[4826]: I0131 08:01:24.132427 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kv7nr" Jan 31 08:01:24 crc kubenswrapper[4826]: I0131 08:01:24.133138 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kv7nr" Jan 31 08:01:24 crc kubenswrapper[4826]: I0131 08:01:24.176036 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kv7nr" Jan 31 08:01:25 crc kubenswrapper[4826]: I0131 08:01:25.203079 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kv7nr" Jan 31 08:01:25 crc kubenswrapper[4826]: I0131 08:01:25.253317 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kv7nr"] Jan 31 08:01:27 crc kubenswrapper[4826]: I0131 08:01:27.156327 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kv7nr" podUID="f660bf57-b482-4214-9c90-f1e7bd010101" containerName="registry-server" containerID="cri-o://036433bd2f7f4cf5610f698e98c7ae70a3e2b41f17770df139b1c7b1e71772de" gracePeriod=2 Jan 31 08:01:27 crc kubenswrapper[4826]: I0131 08:01:27.623260 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kv7nr" Jan 31 08:01:27 crc kubenswrapper[4826]: I0131 08:01:27.779522 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jcxl\" (UniqueName: \"kubernetes.io/projected/f660bf57-b482-4214-9c90-f1e7bd010101-kube-api-access-2jcxl\") pod \"f660bf57-b482-4214-9c90-f1e7bd010101\" (UID: \"f660bf57-b482-4214-9c90-f1e7bd010101\") " Jan 31 08:01:27 crc kubenswrapper[4826]: I0131 08:01:27.779883 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f660bf57-b482-4214-9c90-f1e7bd010101-utilities\") pod \"f660bf57-b482-4214-9c90-f1e7bd010101\" (UID: \"f660bf57-b482-4214-9c90-f1e7bd010101\") " Jan 31 08:01:27 crc kubenswrapper[4826]: I0131 08:01:27.780108 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f660bf57-b482-4214-9c90-f1e7bd010101-catalog-content\") pod \"f660bf57-b482-4214-9c90-f1e7bd010101\" (UID: \"f660bf57-b482-4214-9c90-f1e7bd010101\") " Jan 31 08:01:27 crc kubenswrapper[4826]: I0131 08:01:27.780922 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f660bf57-b482-4214-9c90-f1e7bd010101-utilities" (OuterVolumeSpecName: "utilities") pod "f660bf57-b482-4214-9c90-f1e7bd010101" (UID: "f660bf57-b482-4214-9c90-f1e7bd010101"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:01:27 crc kubenswrapper[4826]: I0131 08:01:27.786172 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f660bf57-b482-4214-9c90-f1e7bd010101-kube-api-access-2jcxl" (OuterVolumeSpecName: "kube-api-access-2jcxl") pod "f660bf57-b482-4214-9c90-f1e7bd010101" (UID: "f660bf57-b482-4214-9c90-f1e7bd010101"). InnerVolumeSpecName "kube-api-access-2jcxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:01:27 crc kubenswrapper[4826]: I0131 08:01:27.832570 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f660bf57-b482-4214-9c90-f1e7bd010101-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f660bf57-b482-4214-9c90-f1e7bd010101" (UID: "f660bf57-b482-4214-9c90-f1e7bd010101"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:01:27 crc kubenswrapper[4826]: I0131 08:01:27.883019 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f660bf57-b482-4214-9c90-f1e7bd010101-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 08:01:27 crc kubenswrapper[4826]: I0131 08:01:27.883055 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jcxl\" (UniqueName: \"kubernetes.io/projected/f660bf57-b482-4214-9c90-f1e7bd010101-kube-api-access-2jcxl\") on node \"crc\" DevicePath \"\"" Jan 31 08:01:27 crc kubenswrapper[4826]: I0131 08:01:27.883070 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f660bf57-b482-4214-9c90-f1e7bd010101-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 08:01:28 crc kubenswrapper[4826]: I0131 08:01:28.169241 4826 generic.go:334] "Generic (PLEG): container finished" podID="f660bf57-b482-4214-9c90-f1e7bd010101" containerID="036433bd2f7f4cf5610f698e98c7ae70a3e2b41f17770df139b1c7b1e71772de" exitCode=0 Jan 31 08:01:28 crc kubenswrapper[4826]: I0131 08:01:28.169289 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kv7nr" event={"ID":"f660bf57-b482-4214-9c90-f1e7bd010101","Type":"ContainerDied","Data":"036433bd2f7f4cf5610f698e98c7ae70a3e2b41f17770df139b1c7b1e71772de"} Jan 31 08:01:28 crc kubenswrapper[4826]: I0131 08:01:28.169328 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kv7nr" Jan 31 08:01:28 crc kubenswrapper[4826]: I0131 08:01:28.170270 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kv7nr" event={"ID":"f660bf57-b482-4214-9c90-f1e7bd010101","Type":"ContainerDied","Data":"8fc393c1594762b35880811bb3bd596c5163ec709055a72991bcecfbe8c71dd8"} Jan 31 08:01:28 crc kubenswrapper[4826]: I0131 08:01:28.170389 4826 scope.go:117] "RemoveContainer" containerID="036433bd2f7f4cf5610f698e98c7ae70a3e2b41f17770df139b1c7b1e71772de" Jan 31 08:01:28 crc kubenswrapper[4826]: I0131 08:01:28.199210 4826 scope.go:117] "RemoveContainer" containerID="8fd4f71e93c6ad592ab29c4b3aec8de578a774eb0c5a6848366e386d7c8ca71d" Jan 31 08:01:28 crc kubenswrapper[4826]: I0131 08:01:28.202301 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kv7nr"] Jan 31 08:01:28 crc kubenswrapper[4826]: I0131 08:01:28.213374 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kv7nr"] Jan 31 08:01:28 crc kubenswrapper[4826]: I0131 08:01:28.221719 4826 scope.go:117] "RemoveContainer" containerID="6e6deb461fb99659eaf4dc13ee9c455c48d724ce7054cd5236b918e2b320a2ad" Jan 31 08:01:28 crc kubenswrapper[4826]: I0131 08:01:28.257682 4826 scope.go:117] "RemoveContainer" containerID="036433bd2f7f4cf5610f698e98c7ae70a3e2b41f17770df139b1c7b1e71772de" Jan 31 08:01:28 crc kubenswrapper[4826]: E0131 08:01:28.258476 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"036433bd2f7f4cf5610f698e98c7ae70a3e2b41f17770df139b1c7b1e71772de\": container with ID starting with 036433bd2f7f4cf5610f698e98c7ae70a3e2b41f17770df139b1c7b1e71772de not found: ID does not exist" containerID="036433bd2f7f4cf5610f698e98c7ae70a3e2b41f17770df139b1c7b1e71772de" Jan 31 08:01:28 crc kubenswrapper[4826]: I0131 08:01:28.258517 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"036433bd2f7f4cf5610f698e98c7ae70a3e2b41f17770df139b1c7b1e71772de"} err="failed to get container status \"036433bd2f7f4cf5610f698e98c7ae70a3e2b41f17770df139b1c7b1e71772de\": rpc error: code = NotFound desc = could not find container \"036433bd2f7f4cf5610f698e98c7ae70a3e2b41f17770df139b1c7b1e71772de\": container with ID starting with 036433bd2f7f4cf5610f698e98c7ae70a3e2b41f17770df139b1c7b1e71772de not found: ID does not exist" Jan 31 08:01:28 crc kubenswrapper[4826]: I0131 08:01:28.258542 4826 scope.go:117] "RemoveContainer" containerID="8fd4f71e93c6ad592ab29c4b3aec8de578a774eb0c5a6848366e386d7c8ca71d" Jan 31 08:01:28 crc kubenswrapper[4826]: E0131 08:01:28.259019 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fd4f71e93c6ad592ab29c4b3aec8de578a774eb0c5a6848366e386d7c8ca71d\": container with ID starting with 8fd4f71e93c6ad592ab29c4b3aec8de578a774eb0c5a6848366e386d7c8ca71d not found: ID does not exist" containerID="8fd4f71e93c6ad592ab29c4b3aec8de578a774eb0c5a6848366e386d7c8ca71d" Jan 31 08:01:28 crc kubenswrapper[4826]: I0131 08:01:28.259049 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fd4f71e93c6ad592ab29c4b3aec8de578a774eb0c5a6848366e386d7c8ca71d"} err="failed to get container status \"8fd4f71e93c6ad592ab29c4b3aec8de578a774eb0c5a6848366e386d7c8ca71d\": rpc error: code = NotFound desc = could not find 
container \"8fd4f71e93c6ad592ab29c4b3aec8de578a774eb0c5a6848366e386d7c8ca71d\": container with ID starting with 8fd4f71e93c6ad592ab29c4b3aec8de578a774eb0c5a6848366e386d7c8ca71d not found: ID does not exist" Jan 31 08:01:28 crc kubenswrapper[4826]: I0131 08:01:28.259067 4826 scope.go:117] "RemoveContainer" containerID="6e6deb461fb99659eaf4dc13ee9c455c48d724ce7054cd5236b918e2b320a2ad" Jan 31 08:01:28 crc kubenswrapper[4826]: E0131 08:01:28.259343 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e6deb461fb99659eaf4dc13ee9c455c48d724ce7054cd5236b918e2b320a2ad\": container with ID starting with 6e6deb461fb99659eaf4dc13ee9c455c48d724ce7054cd5236b918e2b320a2ad not found: ID does not exist" containerID="6e6deb461fb99659eaf4dc13ee9c455c48d724ce7054cd5236b918e2b320a2ad" Jan 31 08:01:28 crc kubenswrapper[4826]: I0131 08:01:28.259436 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e6deb461fb99659eaf4dc13ee9c455c48d724ce7054cd5236b918e2b320a2ad"} err="failed to get container status \"6e6deb461fb99659eaf4dc13ee9c455c48d724ce7054cd5236b918e2b320a2ad\": rpc error: code = NotFound desc = could not find container \"6e6deb461fb99659eaf4dc13ee9c455c48d724ce7054cd5236b918e2b320a2ad\": container with ID starting with 6e6deb461fb99659eaf4dc13ee9c455c48d724ce7054cd5236b918e2b320a2ad not found: ID does not exist" Jan 31 08:01:28 crc kubenswrapper[4826]: I0131 08:01:28.819114 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f660bf57-b482-4214-9c90-f1e7bd010101" path="/var/lib/kubelet/pods/f660bf57-b482-4214-9c90-f1e7bd010101/volumes" Jan 31 08:01:29 crc kubenswrapper[4826]: I0131 08:01:29.669831 4826 scope.go:117] "RemoveContainer" containerID="29fd3ff41e9180ce0500f007b0ad64d6fe444e9ac0f56be68be307d9ec893169" Jan 31 08:01:29 crc kubenswrapper[4826]: I0131 08:01:29.695383 4826 scope.go:117] "RemoveContainer" containerID="ea8915165064a4c3c900c95cd1e0d492ae7bfdb310b0cab003cf2db2e9053331" Jan 31 08:01:29 crc kubenswrapper[4826]: I0131 08:01:29.716290 4826 scope.go:117] "RemoveContainer" containerID="a0915a22c10a1e1b2d448100a224323ec4c7f0fea512d37b0ac7c5c0716ca7d7" Jan 31 08:01:57 crc kubenswrapper[4826]: I0131 08:01:57.377243 4826 patch_prober.go:28] interesting pod/machine-config-daemon-8v6ng container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 08:01:57 crc kubenswrapper[4826]: I0131 08:01:57.377722 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 08:02:08 crc kubenswrapper[4826]: I0131 08:02:08.052323 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-9887k"] Jan 31 08:02:08 crc kubenswrapper[4826]: I0131 08:02:08.067677 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-nxksm"] Jan 31 08:02:08 crc kubenswrapper[4826]: I0131 08:02:08.072799 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-9887k"] Jan 31 08:02:08 crc kubenswrapper[4826]: I0131 08:02:08.083858 4826 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/placement-db-create-nxksm"] Jan 31 08:02:08 crc kubenswrapper[4826]: I0131 08:02:08.845767 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="526b4807-5bd0-4aff-837d-31afeb09aef6" path="/var/lib/kubelet/pods/526b4807-5bd0-4aff-837d-31afeb09aef6/volumes" Jan 31 08:02:08 crc kubenswrapper[4826]: I0131 08:02:08.846542 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0832ed0-3245-4c91-875e-23f9d8307faf" path="/var/lib/kubelet/pods/e0832ed0-3245-4c91-875e-23f9d8307faf/volumes" Jan 31 08:02:09 crc kubenswrapper[4826]: I0131 08:02:09.053328 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-9lhm4"] Jan 31 08:02:09 crc kubenswrapper[4826]: I0131 08:02:09.066585 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-2a1e-account-create-update-s98bk"] Jan 31 08:02:09 crc kubenswrapper[4826]: I0131 08:02:09.078400 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-2a1e-account-create-update-s98bk"] Jan 31 08:02:09 crc kubenswrapper[4826]: I0131 08:02:09.090426 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-9lhm4"] Jan 31 08:02:10 crc kubenswrapper[4826]: I0131 08:02:10.032379 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-00b3-account-create-update-jp5gs"] Jan 31 08:02:10 crc kubenswrapper[4826]: I0131 08:02:10.047520 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-00b3-account-create-update-jp5gs"] Jan 31 08:02:10 crc kubenswrapper[4826]: I0131 08:02:10.056116 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-dfdc-account-create-update-r5qt4"] Jan 31 08:02:10 crc kubenswrapper[4826]: I0131 08:02:10.064146 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-dfdc-account-create-update-r5qt4"] Jan 31 08:02:10 crc kubenswrapper[4826]: I0131 08:02:10.818254 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="536279fb-86dc-4105-aca9-31abb3917b28" path="/var/lib/kubelet/pods/536279fb-86dc-4105-aca9-31abb3917b28/volumes" Jan 31 08:02:10 crc kubenswrapper[4826]: I0131 08:02:10.819290 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6acbf5f6-ffdb-48e3-9ddf-b2667b4f84e3" path="/var/lib/kubelet/pods/6acbf5f6-ffdb-48e3-9ddf-b2667b4f84e3/volumes" Jan 31 08:02:10 crc kubenswrapper[4826]: I0131 08:02:10.819778 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93e4def3-bd04-4842-8f4d-d49888336a07" path="/var/lib/kubelet/pods/93e4def3-bd04-4842-8f4d-d49888336a07/volumes" Jan 31 08:02:10 crc kubenswrapper[4826]: I0131 08:02:10.820375 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de770cde-f91f-4153-bdff-54dd47878bd6" path="/var/lib/kubelet/pods/de770cde-f91f-4153-bdff-54dd47878bd6/volumes" Jan 31 08:02:21 crc kubenswrapper[4826]: I0131 08:02:21.054257 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-fbq54"] Jan 31 08:02:21 crc kubenswrapper[4826]: I0131 08:02:21.066326 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-fbq54"] Jan 31 08:02:22 crc kubenswrapper[4826]: I0131 08:02:22.818535 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e32372c-7dea-4ff2-84d9-d49002bc57d1" path="/var/lib/kubelet/pods/8e32372c-7dea-4ff2-84d9-d49002bc57d1/volumes" Jan 31 08:02:27 crc 
kubenswrapper[4826]: I0131 08:02:27.377049 4826 patch_prober.go:28] interesting pod/machine-config-daemon-8v6ng container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 08:02:27 crc kubenswrapper[4826]: I0131 08:02:27.378198 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 08:02:29 crc kubenswrapper[4826]: I0131 08:02:29.793201 4826 scope.go:117] "RemoveContainer" containerID="262fc9596d04fed9b91317f97fd2d0d40e79827140150552924c9d2f891d07af" Jan 31 08:02:29 crc kubenswrapper[4826]: I0131 08:02:29.831272 4826 scope.go:117] "RemoveContainer" containerID="cc652b9fad3ccc67c76ec6ad10f611a322df9bd22384aae32aec35d3aea80271" Jan 31 08:02:29 crc kubenswrapper[4826]: I0131 08:02:29.888556 4826 scope.go:117] "RemoveContainer" containerID="afba20f465478d29979122966484b03f9ba0617a3db22a81e9a1a76909d1bde5" Jan 31 08:02:29 crc kubenswrapper[4826]: I0131 08:02:29.919495 4826 scope.go:117] "RemoveContainer" containerID="d459fb0b2295af3265683150de512cd5389436fe306fda22e6f2e61d8fdcaf93" Jan 31 08:02:29 crc kubenswrapper[4826]: I0131 08:02:29.971281 4826 scope.go:117] "RemoveContainer" containerID="18b8bd5e1c631fcaafc41311e64600fa51376e728a96a4b138cb48f4f24a936b" Jan 31 08:02:30 crc kubenswrapper[4826]: I0131 08:02:30.005566 4826 scope.go:117] "RemoveContainer" containerID="91ed0842cca456b1df8d5365289f22ace51936bd5301af4976fe55d96def6a0c" Jan 31 08:02:30 crc kubenswrapper[4826]: I0131 08:02:30.027004 4826 scope.go:117] "RemoveContainer" containerID="8f5fe60153e2312dfd5e3ca4b33183a6b476b1bf5b914d47dfda155e8840e867" Jan 31 08:02:30 crc kubenswrapper[4826]: I0131 08:02:30.098056 4826 scope.go:117] "RemoveContainer" containerID="5bd3f229b8e2e4c9ff15c41a2298f019afb897ab3cfc3ed3b389de797d9adb56" Jan 31 08:02:30 crc kubenswrapper[4826]: I0131 08:02:30.116569 4826 scope.go:117] "RemoveContainer" containerID="8a49cc9d65af835722f8fe763b866f005db4f6b46dd562dd31848857f030af8f" Jan 31 08:02:34 crc kubenswrapper[4826]: I0131 08:02:34.762456 4826 generic.go:334] "Generic (PLEG): container finished" podID="34e8ee4d-b6b1-40b8-8012-e91226161f75" containerID="2255a62fa3b70bc6a6821bd641820ee21eea584b083ec1359b78f50a96460c0c" exitCode=0 Jan 31 08:02:34 crc kubenswrapper[4826]: I0131 08:02:34.762557 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dvtj7" event={"ID":"34e8ee4d-b6b1-40b8-8012-e91226161f75","Type":"ContainerDied","Data":"2255a62fa3b70bc6a6821bd641820ee21eea584b083ec1359b78f50a96460c0c"} Jan 31 08:02:36 crc kubenswrapper[4826]: I0131 08:02:36.190817 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dvtj7" Jan 31 08:02:36 crc kubenswrapper[4826]: I0131 08:02:36.313605 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34e8ee4d-b6b1-40b8-8012-e91226161f75-inventory\") pod \"34e8ee4d-b6b1-40b8-8012-e91226161f75\" (UID: \"34e8ee4d-b6b1-40b8-8012-e91226161f75\") " Jan 31 08:02:36 crc kubenswrapper[4826]: I0131 08:02:36.313783 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/34e8ee4d-b6b1-40b8-8012-e91226161f75-ssh-key-openstack-edpm-ipam\") pod \"34e8ee4d-b6b1-40b8-8012-e91226161f75\" (UID: \"34e8ee4d-b6b1-40b8-8012-e91226161f75\") " Jan 31 08:02:36 crc kubenswrapper[4826]: I0131 08:02:36.313852 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6h9m\" (UniqueName: \"kubernetes.io/projected/34e8ee4d-b6b1-40b8-8012-e91226161f75-kube-api-access-t6h9m\") pod \"34e8ee4d-b6b1-40b8-8012-e91226161f75\" (UID: \"34e8ee4d-b6b1-40b8-8012-e91226161f75\") " Jan 31 08:02:36 crc kubenswrapper[4826]: I0131 08:02:36.320026 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34e8ee4d-b6b1-40b8-8012-e91226161f75-kube-api-access-t6h9m" (OuterVolumeSpecName: "kube-api-access-t6h9m") pod "34e8ee4d-b6b1-40b8-8012-e91226161f75" (UID: "34e8ee4d-b6b1-40b8-8012-e91226161f75"). InnerVolumeSpecName "kube-api-access-t6h9m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:02:36 crc kubenswrapper[4826]: I0131 08:02:36.340436 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34e8ee4d-b6b1-40b8-8012-e91226161f75-inventory" (OuterVolumeSpecName: "inventory") pod "34e8ee4d-b6b1-40b8-8012-e91226161f75" (UID: "34e8ee4d-b6b1-40b8-8012-e91226161f75"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:02:36 crc kubenswrapper[4826]: I0131 08:02:36.360469 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34e8ee4d-b6b1-40b8-8012-e91226161f75-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "34e8ee4d-b6b1-40b8-8012-e91226161f75" (UID: "34e8ee4d-b6b1-40b8-8012-e91226161f75"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:02:36 crc kubenswrapper[4826]: I0131 08:02:36.415831 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6h9m\" (UniqueName: \"kubernetes.io/projected/34e8ee4d-b6b1-40b8-8012-e91226161f75-kube-api-access-t6h9m\") on node \"crc\" DevicePath \"\"" Jan 31 08:02:36 crc kubenswrapper[4826]: I0131 08:02:36.415859 4826 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34e8ee4d-b6b1-40b8-8012-e91226161f75-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 08:02:36 crc kubenswrapper[4826]: I0131 08:02:36.415868 4826 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/34e8ee4d-b6b1-40b8-8012-e91226161f75-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 08:02:36 crc kubenswrapper[4826]: I0131 08:02:36.779370 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dvtj7" event={"ID":"34e8ee4d-b6b1-40b8-8012-e91226161f75","Type":"ContainerDied","Data":"e0f3d4975393c198b015401967a8f70b67ff9a9da84c72d4675cf2b4ac3b29b1"} Jan 31 08:02:36 crc kubenswrapper[4826]: I0131 08:02:36.779748 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0f3d4975393c198b015401967a8f70b67ff9a9da84c72d4675cf2b4ac3b29b1" Jan 31 08:02:36 crc kubenswrapper[4826]: I0131 08:02:36.779425 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dvtj7" Jan 31 08:02:36 crc kubenswrapper[4826]: I0131 08:02:36.865190 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9q8l"] Jan 31 08:02:36 crc kubenswrapper[4826]: E0131 08:02:36.865689 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f660bf57-b482-4214-9c90-f1e7bd010101" containerName="registry-server" Jan 31 08:02:36 crc kubenswrapper[4826]: I0131 08:02:36.865711 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f660bf57-b482-4214-9c90-f1e7bd010101" containerName="registry-server" Jan 31 08:02:36 crc kubenswrapper[4826]: E0131 08:02:36.865724 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34e8ee4d-b6b1-40b8-8012-e91226161f75" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 31 08:02:36 crc kubenswrapper[4826]: I0131 08:02:36.865733 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="34e8ee4d-b6b1-40b8-8012-e91226161f75" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 31 08:02:36 crc kubenswrapper[4826]: E0131 08:02:36.865761 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f660bf57-b482-4214-9c90-f1e7bd010101" containerName="extract-utilities" Jan 31 08:02:36 crc kubenswrapper[4826]: I0131 08:02:36.865771 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f660bf57-b482-4214-9c90-f1e7bd010101" containerName="extract-utilities" Jan 31 08:02:36 crc kubenswrapper[4826]: E0131 08:02:36.865779 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f660bf57-b482-4214-9c90-f1e7bd010101" containerName="extract-content" Jan 31 08:02:36 crc kubenswrapper[4826]: I0131 08:02:36.865786 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f660bf57-b482-4214-9c90-f1e7bd010101" containerName="extract-content" Jan 31 08:02:36 crc 
kubenswrapper[4826]: I0131 08:02:36.866215 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="f660bf57-b482-4214-9c90-f1e7bd010101" containerName="registry-server" Jan 31 08:02:36 crc kubenswrapper[4826]: I0131 08:02:36.866250 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="34e8ee4d-b6b1-40b8-8012-e91226161f75" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 31 08:02:36 crc kubenswrapper[4826]: I0131 08:02:36.866878 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9q8l" Jan 31 08:02:36 crc kubenswrapper[4826]: I0131 08:02:36.870724 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 08:02:36 crc kubenswrapper[4826]: I0131 08:02:36.871042 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 08:02:36 crc kubenswrapper[4826]: I0131 08:02:36.871251 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 08:02:36 crc kubenswrapper[4826]: I0131 08:02:36.871395 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hbmks" Jan 31 08:02:36 crc kubenswrapper[4826]: I0131 08:02:36.876608 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9q8l"] Jan 31 08:02:37 crc kubenswrapper[4826]: I0131 08:02:37.025296 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs74x\" (UniqueName: \"kubernetes.io/projected/4404c1e7-39fa-4591-ae22-a8f0c26a4452-kube-api-access-qs74x\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-f9q8l\" (UID: \"4404c1e7-39fa-4591-ae22-a8f0c26a4452\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9q8l" Jan 31 08:02:37 crc kubenswrapper[4826]: I0131 08:02:37.025378 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4404c1e7-39fa-4591-ae22-a8f0c26a4452-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-f9q8l\" (UID: \"4404c1e7-39fa-4591-ae22-a8f0c26a4452\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9q8l" Jan 31 08:02:37 crc kubenswrapper[4826]: I0131 08:02:37.025456 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4404c1e7-39fa-4591-ae22-a8f0c26a4452-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-f9q8l\" (UID: \"4404c1e7-39fa-4591-ae22-a8f0c26a4452\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9q8l" Jan 31 08:02:37 crc kubenswrapper[4826]: I0131 08:02:37.127020 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qs74x\" (UniqueName: \"kubernetes.io/projected/4404c1e7-39fa-4591-ae22-a8f0c26a4452-kube-api-access-qs74x\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-f9q8l\" (UID: \"4404c1e7-39fa-4591-ae22-a8f0c26a4452\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9q8l" Jan 31 08:02:37 crc kubenswrapper[4826]: I0131 08:02:37.127184 4826 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4404c1e7-39fa-4591-ae22-a8f0c26a4452-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-f9q8l\" (UID: \"4404c1e7-39fa-4591-ae22-a8f0c26a4452\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9q8l" Jan 31 08:02:37 crc kubenswrapper[4826]: I0131 08:02:37.127268 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4404c1e7-39fa-4591-ae22-a8f0c26a4452-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-f9q8l\" (UID: \"4404c1e7-39fa-4591-ae22-a8f0c26a4452\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9q8l" Jan 31 08:02:37 crc kubenswrapper[4826]: I0131 08:02:37.132306 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4404c1e7-39fa-4591-ae22-a8f0c26a4452-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-f9q8l\" (UID: \"4404c1e7-39fa-4591-ae22-a8f0c26a4452\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9q8l" Jan 31 08:02:37 crc kubenswrapper[4826]: I0131 08:02:37.139016 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4404c1e7-39fa-4591-ae22-a8f0c26a4452-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-f9q8l\" (UID: \"4404c1e7-39fa-4591-ae22-a8f0c26a4452\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9q8l" Jan 31 08:02:37 crc kubenswrapper[4826]: I0131 08:02:37.145783 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qs74x\" (UniqueName: \"kubernetes.io/projected/4404c1e7-39fa-4591-ae22-a8f0c26a4452-kube-api-access-qs74x\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-f9q8l\" (UID: \"4404c1e7-39fa-4591-ae22-a8f0c26a4452\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9q8l" Jan 31 08:02:37 crc kubenswrapper[4826]: I0131 08:02:37.222757 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9q8l" Jan 31 08:02:37 crc kubenswrapper[4826]: I0131 08:02:37.751551 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9q8l"] Jan 31 08:02:37 crc kubenswrapper[4826]: I0131 08:02:37.788207 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9q8l" event={"ID":"4404c1e7-39fa-4591-ae22-a8f0c26a4452","Type":"ContainerStarted","Data":"77bf4692c9833eb466aaab61a2ee99897226938b77eed3ae0830e0e8bb7950b9"} Jan 31 08:02:38 crc kubenswrapper[4826]: I0131 08:02:38.798754 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9q8l" event={"ID":"4404c1e7-39fa-4591-ae22-a8f0c26a4452","Type":"ContainerStarted","Data":"42715357a5b8db10145739808c5fbc2fc66b2d2907039f17b25b35aa498a7219"} Jan 31 08:02:38 crc kubenswrapper[4826]: I0131 08:02:38.821733 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9q8l" podStartSLOduration=2.099109499 podStartE2EDuration="2.821715284s" podCreationTimestamp="2026-01-31 08:02:36 +0000 UTC" firstStartedPulling="2026-01-31 08:02:37.757203178 +0000 UTC m=+1589.611089547" lastFinishedPulling="2026-01-31 08:02:38.479808953 +0000 UTC m=+1590.333695332" observedRunningTime="2026-01-31 08:02:38.817035961 +0000 UTC m=+1590.670922340" watchObservedRunningTime="2026-01-31 08:02:38.821715284 +0000 UTC m=+1590.675601643" Jan 31 08:02:41 crc kubenswrapper[4826]: I0131 08:02:41.040696 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-9c4f-account-create-update-gccqs"] Jan 31 08:02:41 crc kubenswrapper[4826]: I0131 08:02:41.054848 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-9c4f-account-create-update-gccqs"] Jan 31 08:02:41 crc kubenswrapper[4826]: I0131 08:02:41.061439 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-h4scr"] Jan 31 08:02:41 crc kubenswrapper[4826]: I0131 08:02:41.068939 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-cjn6r"] Jan 31 08:02:41 crc kubenswrapper[4826]: I0131 08:02:41.078642 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-h4scr"] Jan 31 08:02:41 crc kubenswrapper[4826]: I0131 08:02:41.086591 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-cjn6r"] Jan 31 08:02:41 crc kubenswrapper[4826]: I0131 08:02:41.093681 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-0f7c-account-create-update-lr26l"] Jan 31 08:02:41 crc kubenswrapper[4826]: I0131 08:02:41.100532 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-0f7c-account-create-update-lr26l"] Jan 31 08:02:41 crc kubenswrapper[4826]: I0131 08:02:41.107476 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-xw82h"] Jan 31 08:02:41 crc kubenswrapper[4826]: I0131 08:02:41.114872 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-7548-account-create-update-9m89j"] Jan 31 08:02:41 crc kubenswrapper[4826]: I0131 08:02:41.121415 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-xw82h"] Jan 31 08:02:41 crc kubenswrapper[4826]: I0131 08:02:41.128188 4826 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-7548-account-create-update-9m89j"] Jan 31 08:02:42 crc kubenswrapper[4826]: I0131 08:02:42.819389 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34349938-3d1a-4df5-a6a2-b43beedb876f" path="/var/lib/kubelet/pods/34349938-3d1a-4df5-a6a2-b43beedb876f/volumes" Jan 31 08:02:42 crc kubenswrapper[4826]: I0131 08:02:42.820087 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f612133-f0fe-4418-be06-d50f6df59ea7" path="/var/lib/kubelet/pods/7f612133-f0fe-4418-be06-d50f6df59ea7/volumes" Jan 31 08:02:42 crc kubenswrapper[4826]: I0131 08:02:42.820722 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="850f48ed-5da5-420a-8c60-20a5af3352b1" path="/var/lib/kubelet/pods/850f48ed-5da5-420a-8c60-20a5af3352b1/volumes" Jan 31 08:02:42 crc kubenswrapper[4826]: I0131 08:02:42.822174 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="898f03be-9509-4645-b54c-bf988d058b35" path="/var/lib/kubelet/pods/898f03be-9509-4645-b54c-bf988d058b35/volumes" Jan 31 08:02:42 crc kubenswrapper[4826]: I0131 08:02:42.823318 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b83cf8ec-b8c1-4364-8dc3-e5a14e2cb65b" path="/var/lib/kubelet/pods/b83cf8ec-b8c1-4364-8dc3-e5a14e2cb65b/volumes" Jan 31 08:02:42 crc kubenswrapper[4826]: I0131 08:02:42.824185 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d319859f-f891-4be6-af4b-a067d72a9726" path="/var/lib/kubelet/pods/d319859f-f891-4be6-af4b-a067d72a9726/volumes" Jan 31 08:02:43 crc kubenswrapper[4826]: I0131 08:02:43.844057 4826 generic.go:334] "Generic (PLEG): container finished" podID="4404c1e7-39fa-4591-ae22-a8f0c26a4452" containerID="42715357a5b8db10145739808c5fbc2fc66b2d2907039f17b25b35aa498a7219" exitCode=0 Jan 31 08:02:43 crc kubenswrapper[4826]: I0131 08:02:43.844142 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9q8l" event={"ID":"4404c1e7-39fa-4591-ae22-a8f0c26a4452","Type":"ContainerDied","Data":"42715357a5b8db10145739808c5fbc2fc66b2d2907039f17b25b35aa498a7219"} Jan 31 08:02:45 crc kubenswrapper[4826]: I0131 08:02:45.305805 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9q8l" Jan 31 08:02:45 crc kubenswrapper[4826]: I0131 08:02:45.476197 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs74x\" (UniqueName: \"kubernetes.io/projected/4404c1e7-39fa-4591-ae22-a8f0c26a4452-kube-api-access-qs74x\") pod \"4404c1e7-39fa-4591-ae22-a8f0c26a4452\" (UID: \"4404c1e7-39fa-4591-ae22-a8f0c26a4452\") " Jan 31 08:02:45 crc kubenswrapper[4826]: I0131 08:02:45.476338 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4404c1e7-39fa-4591-ae22-a8f0c26a4452-ssh-key-openstack-edpm-ipam\") pod \"4404c1e7-39fa-4591-ae22-a8f0c26a4452\" (UID: \"4404c1e7-39fa-4591-ae22-a8f0c26a4452\") " Jan 31 08:02:45 crc kubenswrapper[4826]: I0131 08:02:45.476517 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4404c1e7-39fa-4591-ae22-a8f0c26a4452-inventory\") pod \"4404c1e7-39fa-4591-ae22-a8f0c26a4452\" (UID: \"4404c1e7-39fa-4591-ae22-a8f0c26a4452\") " Jan 31 08:02:45 crc kubenswrapper[4826]: I0131 08:02:45.486387 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4404c1e7-39fa-4591-ae22-a8f0c26a4452-kube-api-access-qs74x" (OuterVolumeSpecName: "kube-api-access-qs74x") pod "4404c1e7-39fa-4591-ae22-a8f0c26a4452" (UID: "4404c1e7-39fa-4591-ae22-a8f0c26a4452"). InnerVolumeSpecName "kube-api-access-qs74x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:02:45 crc kubenswrapper[4826]: I0131 08:02:45.507646 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4404c1e7-39fa-4591-ae22-a8f0c26a4452-inventory" (OuterVolumeSpecName: "inventory") pod "4404c1e7-39fa-4591-ae22-a8f0c26a4452" (UID: "4404c1e7-39fa-4591-ae22-a8f0c26a4452"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:02:45 crc kubenswrapper[4826]: I0131 08:02:45.509706 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4404c1e7-39fa-4591-ae22-a8f0c26a4452-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4404c1e7-39fa-4591-ae22-a8f0c26a4452" (UID: "4404c1e7-39fa-4591-ae22-a8f0c26a4452"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:02:45 crc kubenswrapper[4826]: I0131 08:02:45.578412 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs74x\" (UniqueName: \"kubernetes.io/projected/4404c1e7-39fa-4591-ae22-a8f0c26a4452-kube-api-access-qs74x\") on node \"crc\" DevicePath \"\"" Jan 31 08:02:45 crc kubenswrapper[4826]: I0131 08:02:45.578447 4826 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4404c1e7-39fa-4591-ae22-a8f0c26a4452-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 08:02:45 crc kubenswrapper[4826]: I0131 08:02:45.578460 4826 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4404c1e7-39fa-4591-ae22-a8f0c26a4452-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 08:02:45 crc kubenswrapper[4826]: I0131 08:02:45.863271 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9q8l" event={"ID":"4404c1e7-39fa-4591-ae22-a8f0c26a4452","Type":"ContainerDied","Data":"77bf4692c9833eb466aaab61a2ee99897226938b77eed3ae0830e0e8bb7950b9"} Jan 31 08:02:45 crc kubenswrapper[4826]: I0131 08:02:45.863304 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9q8l" Jan 31 08:02:45 crc kubenswrapper[4826]: I0131 08:02:45.863335 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77bf4692c9833eb466aaab61a2ee99897226938b77eed3ae0830e0e8bb7950b9" Jan 31 08:02:45 crc kubenswrapper[4826]: I0131 08:02:45.927113 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-xnlm8"] Jan 31 08:02:45 crc kubenswrapper[4826]: E0131 08:02:45.927470 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4404c1e7-39fa-4591-ae22-a8f0c26a4452" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 31 08:02:45 crc kubenswrapper[4826]: I0131 08:02:45.927488 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="4404c1e7-39fa-4591-ae22-a8f0c26a4452" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 31 08:02:45 crc kubenswrapper[4826]: I0131 08:02:45.927680 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="4404c1e7-39fa-4591-ae22-a8f0c26a4452" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 31 08:02:45 crc kubenswrapper[4826]: I0131 08:02:45.928238 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xnlm8" Jan 31 08:02:45 crc kubenswrapper[4826]: I0131 08:02:45.930911 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 08:02:45 crc kubenswrapper[4826]: I0131 08:02:45.931185 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 08:02:45 crc kubenswrapper[4826]: I0131 08:02:45.931442 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 08:02:45 crc kubenswrapper[4826]: I0131 08:02:45.932233 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hbmks" Jan 31 08:02:45 crc kubenswrapper[4826]: I0131 08:02:45.947896 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-xnlm8"] Jan 31 08:02:46 crc kubenswrapper[4826]: I0131 08:02:46.029959 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-x66vb"] Jan 31 08:02:46 crc kubenswrapper[4826]: I0131 08:02:46.038577 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-fbmh2"] Jan 31 08:02:46 crc kubenswrapper[4826]: I0131 08:02:46.075334 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-fbmh2"] Jan 31 08:02:46 crc kubenswrapper[4826]: I0131 08:02:46.083612 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-x66vb"] Jan 31 08:02:46 crc kubenswrapper[4826]: I0131 08:02:46.089835 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d233a435-0ea9-4b37-9293-9fa79cf36cf4-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-xnlm8\" (UID: \"d233a435-0ea9-4b37-9293-9fa79cf36cf4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xnlm8" Jan 31 08:02:46 crc kubenswrapper[4826]: I0131 08:02:46.089917 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf2kv\" (UniqueName: \"kubernetes.io/projected/d233a435-0ea9-4b37-9293-9fa79cf36cf4-kube-api-access-xf2kv\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-xnlm8\" (UID: \"d233a435-0ea9-4b37-9293-9fa79cf36cf4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xnlm8" Jan 31 08:02:46 crc kubenswrapper[4826]: I0131 08:02:46.089993 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d233a435-0ea9-4b37-9293-9fa79cf36cf4-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-xnlm8\" (UID: \"d233a435-0ea9-4b37-9293-9fa79cf36cf4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xnlm8" Jan 31 08:02:46 crc kubenswrapper[4826]: I0131 08:02:46.192080 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d233a435-0ea9-4b37-9293-9fa79cf36cf4-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-xnlm8\" (UID: \"d233a435-0ea9-4b37-9293-9fa79cf36cf4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xnlm8" Jan 31 08:02:46 crc kubenswrapper[4826]: I0131 08:02:46.192476 4826 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf2kv\" (UniqueName: \"kubernetes.io/projected/d233a435-0ea9-4b37-9293-9fa79cf36cf4-kube-api-access-xf2kv\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-xnlm8\" (UID: \"d233a435-0ea9-4b37-9293-9fa79cf36cf4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xnlm8" Jan 31 08:02:46 crc kubenswrapper[4826]: I0131 08:02:46.192551 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d233a435-0ea9-4b37-9293-9fa79cf36cf4-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-xnlm8\" (UID: \"d233a435-0ea9-4b37-9293-9fa79cf36cf4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xnlm8" Jan 31 08:02:46 crc kubenswrapper[4826]: I0131 08:02:46.196036 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d233a435-0ea9-4b37-9293-9fa79cf36cf4-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-xnlm8\" (UID: \"d233a435-0ea9-4b37-9293-9fa79cf36cf4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xnlm8" Jan 31 08:02:46 crc kubenswrapper[4826]: I0131 08:02:46.197304 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d233a435-0ea9-4b37-9293-9fa79cf36cf4-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-xnlm8\" (UID: \"d233a435-0ea9-4b37-9293-9fa79cf36cf4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xnlm8" Jan 31 08:02:46 crc kubenswrapper[4826]: I0131 08:02:46.213755 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf2kv\" (UniqueName: \"kubernetes.io/projected/d233a435-0ea9-4b37-9293-9fa79cf36cf4-kube-api-access-xf2kv\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-xnlm8\" (UID: \"d233a435-0ea9-4b37-9293-9fa79cf36cf4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xnlm8" Jan 31 08:02:46 crc kubenswrapper[4826]: I0131 08:02:46.247938 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xnlm8" Jan 31 08:02:46 crc kubenswrapper[4826]: I0131 08:02:46.825354 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a716e83-1782-4038-b26c-7a2d7ed6095d" path="/var/lib/kubelet/pods/4a716e83-1782-4038-b26c-7a2d7ed6095d/volumes" Jan 31 08:02:46 crc kubenswrapper[4826]: I0131 08:02:46.827320 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfa04c1e-e73b-4ba1-b5b3-13bc4099c9e2" path="/var/lib/kubelet/pods/bfa04c1e-e73b-4ba1-b5b3-13bc4099c9e2/volumes" Jan 31 08:02:46 crc kubenswrapper[4826]: I0131 08:02:46.858508 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-xnlm8"] Jan 31 08:02:46 crc kubenswrapper[4826]: I0131 08:02:46.871641 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xnlm8" event={"ID":"d233a435-0ea9-4b37-9293-9fa79cf36cf4","Type":"ContainerStarted","Data":"43039dc9760feb8e656e0ac6a83119c0e7e6cf3547df58ea20654574a5572691"} Jan 31 08:02:48 crc kubenswrapper[4826]: I0131 08:02:48.889301 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xnlm8" event={"ID":"d233a435-0ea9-4b37-9293-9fa79cf36cf4","Type":"ContainerStarted","Data":"4c8f80de0e9e5fa9047112eccf97c396914ccd01d33bc1fb5e37033a991354d0"} Jan 31 08:02:48 crc kubenswrapper[4826]: I0131 08:02:48.911393 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xnlm8" podStartSLOduration=2.901795538 podStartE2EDuration="3.911373534s" podCreationTimestamp="2026-01-31 08:02:45 +0000 UTC" firstStartedPulling="2026-01-31 08:02:46.853136571 +0000 UTC m=+1598.707022930" lastFinishedPulling="2026-01-31 08:02:47.862714557 +0000 UTC m=+1599.716600926" observedRunningTime="2026-01-31 08:02:48.910411687 +0000 UTC m=+1600.764298046" watchObservedRunningTime="2026-01-31 08:02:48.911373534 +0000 UTC m=+1600.765259893" Jan 31 08:02:57 crc kubenswrapper[4826]: I0131 08:02:57.377720 4826 patch_prober.go:28] interesting pod/machine-config-daemon-8v6ng container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 08:02:57 crc kubenswrapper[4826]: I0131 08:02:57.378599 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 08:02:57 crc kubenswrapper[4826]: I0131 08:02:57.378658 4826 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" Jan 31 08:02:57 crc kubenswrapper[4826]: I0131 08:02:57.379430 4826 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bf69e7db4a04328e91a880deded774211ce3a37179119b4a3e4db7929cefe4ea"} pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 08:02:57 crc kubenswrapper[4826]: I0131 
08:02:57.379495 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" containerID="cri-o://bf69e7db4a04328e91a880deded774211ce3a37179119b4a3e4db7929cefe4ea" gracePeriod=600 Jan 31 08:02:57 crc kubenswrapper[4826]: E0131 08:02:57.504120 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:02:57 crc kubenswrapper[4826]: I0131 08:02:57.973643 4826 generic.go:334] "Generic (PLEG): container finished" podID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerID="bf69e7db4a04328e91a880deded774211ce3a37179119b4a3e4db7929cefe4ea" exitCode=0 Jan 31 08:02:57 crc kubenswrapper[4826]: I0131 08:02:57.973705 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" event={"ID":"ed10f53b-565a-4d14-a1d8-feabc15f08ea","Type":"ContainerDied","Data":"bf69e7db4a04328e91a880deded774211ce3a37179119b4a3e4db7929cefe4ea"} Jan 31 08:02:57 crc kubenswrapper[4826]: I0131 08:02:57.973752 4826 scope.go:117] "RemoveContainer" containerID="2996f1f03d04736e2fde3b29cb9be0c885dd95bf9e821597e845b3e75e0d6911" Jan 31 08:02:57 crc kubenswrapper[4826]: I0131 08:02:57.974684 4826 scope.go:117] "RemoveContainer" containerID="bf69e7db4a04328e91a880deded774211ce3a37179119b4a3e4db7929cefe4ea" Jan 31 08:02:57 crc kubenswrapper[4826]: E0131 08:02:57.975400 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:03:09 crc kubenswrapper[4826]: I0131 08:03:09.809015 4826 scope.go:117] "RemoveContainer" containerID="bf69e7db4a04328e91a880deded774211ce3a37179119b4a3e4db7929cefe4ea" Jan 31 08:03:09 crc kubenswrapper[4826]: E0131 08:03:09.810247 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:03:23 crc kubenswrapper[4826]: I0131 08:03:23.235928 4826 generic.go:334] "Generic (PLEG): container finished" podID="d233a435-0ea9-4b37-9293-9fa79cf36cf4" containerID="4c8f80de0e9e5fa9047112eccf97c396914ccd01d33bc1fb5e37033a991354d0" exitCode=0 Jan 31 08:03:23 crc kubenswrapper[4826]: I0131 08:03:23.236471 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xnlm8" event={"ID":"d233a435-0ea9-4b37-9293-9fa79cf36cf4","Type":"ContainerDied","Data":"4c8f80de0e9e5fa9047112eccf97c396914ccd01d33bc1fb5e37033a991354d0"} Jan 31 
08:03:25 crc kubenswrapper[4826]: I0131 08:03:24.738461 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xnlm8" Jan 31 08:03:25 crc kubenswrapper[4826]: I0131 08:03:24.808751 4826 scope.go:117] "RemoveContainer" containerID="bf69e7db4a04328e91a880deded774211ce3a37179119b4a3e4db7929cefe4ea" Jan 31 08:03:25 crc kubenswrapper[4826]: E0131 08:03:24.809067 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:03:25 crc kubenswrapper[4826]: I0131 08:03:24.844875 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d233a435-0ea9-4b37-9293-9fa79cf36cf4-ssh-key-openstack-edpm-ipam\") pod \"d233a435-0ea9-4b37-9293-9fa79cf36cf4\" (UID: \"d233a435-0ea9-4b37-9293-9fa79cf36cf4\") " Jan 31 08:03:25 crc kubenswrapper[4826]: I0131 08:03:24.844958 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xf2kv\" (UniqueName: \"kubernetes.io/projected/d233a435-0ea9-4b37-9293-9fa79cf36cf4-kube-api-access-xf2kv\") pod \"d233a435-0ea9-4b37-9293-9fa79cf36cf4\" (UID: \"d233a435-0ea9-4b37-9293-9fa79cf36cf4\") " Jan 31 08:03:25 crc kubenswrapper[4826]: I0131 08:03:24.845144 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d233a435-0ea9-4b37-9293-9fa79cf36cf4-inventory\") pod \"d233a435-0ea9-4b37-9293-9fa79cf36cf4\" (UID: \"d233a435-0ea9-4b37-9293-9fa79cf36cf4\") " Jan 31 08:03:25 crc kubenswrapper[4826]: I0131 08:03:24.854312 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d233a435-0ea9-4b37-9293-9fa79cf36cf4-kube-api-access-xf2kv" (OuterVolumeSpecName: "kube-api-access-xf2kv") pod "d233a435-0ea9-4b37-9293-9fa79cf36cf4" (UID: "d233a435-0ea9-4b37-9293-9fa79cf36cf4"). InnerVolumeSpecName "kube-api-access-xf2kv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:03:25 crc kubenswrapper[4826]: I0131 08:03:24.878222 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d233a435-0ea9-4b37-9293-9fa79cf36cf4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d233a435-0ea9-4b37-9293-9fa79cf36cf4" (UID: "d233a435-0ea9-4b37-9293-9fa79cf36cf4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:03:25 crc kubenswrapper[4826]: I0131 08:03:24.883391 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d233a435-0ea9-4b37-9293-9fa79cf36cf4-inventory" (OuterVolumeSpecName: "inventory") pod "d233a435-0ea9-4b37-9293-9fa79cf36cf4" (UID: "d233a435-0ea9-4b37-9293-9fa79cf36cf4"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:03:25 crc kubenswrapper[4826]: I0131 08:03:24.947497 4826 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d233a435-0ea9-4b37-9293-9fa79cf36cf4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 08:03:25 crc kubenswrapper[4826]: I0131 08:03:24.947521 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xf2kv\" (UniqueName: \"kubernetes.io/projected/d233a435-0ea9-4b37-9293-9fa79cf36cf4-kube-api-access-xf2kv\") on node \"crc\" DevicePath \"\"" Jan 31 08:03:25 crc kubenswrapper[4826]: I0131 08:03:24.947533 4826 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d233a435-0ea9-4b37-9293-9fa79cf36cf4-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 08:03:25 crc kubenswrapper[4826]: I0131 08:03:25.256130 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xnlm8" event={"ID":"d233a435-0ea9-4b37-9293-9fa79cf36cf4","Type":"ContainerDied","Data":"43039dc9760feb8e656e0ac6a83119c0e7e6cf3547df58ea20654574a5572691"} Jan 31 08:03:25 crc kubenswrapper[4826]: I0131 08:03:25.256163 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43039dc9760feb8e656e0ac6a83119c0e7e6cf3547df58ea20654574a5572691" Jan 31 08:03:25 crc kubenswrapper[4826]: I0131 08:03:25.256217 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xnlm8" Jan 31 08:03:25 crc kubenswrapper[4826]: I0131 08:03:25.352022 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mmgpq"] Jan 31 08:03:25 crc kubenswrapper[4826]: E0131 08:03:25.352564 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d233a435-0ea9-4b37-9293-9fa79cf36cf4" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 31 08:03:25 crc kubenswrapper[4826]: I0131 08:03:25.352579 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="d233a435-0ea9-4b37-9293-9fa79cf36cf4" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 31 08:03:25 crc kubenswrapper[4826]: I0131 08:03:25.352876 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="d233a435-0ea9-4b37-9293-9fa79cf36cf4" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 31 08:03:25 crc kubenswrapper[4826]: I0131 08:03:25.354049 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mmgpq" Jan 31 08:03:25 crc kubenswrapper[4826]: I0131 08:03:25.357501 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 08:03:25 crc kubenswrapper[4826]: I0131 08:03:25.357784 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 08:03:25 crc kubenswrapper[4826]: I0131 08:03:25.357901 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hbmks" Jan 31 08:03:25 crc kubenswrapper[4826]: I0131 08:03:25.363353 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 08:03:25 crc kubenswrapper[4826]: I0131 08:03:25.364646 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mmgpq"] Jan 31 08:03:25 crc kubenswrapper[4826]: I0131 08:03:25.456657 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6bd14ce-ddb1-478c-93d2-e69f2d21972e-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mmgpq\" (UID: \"b6bd14ce-ddb1-478c-93d2-e69f2d21972e\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mmgpq" Jan 31 08:03:25 crc kubenswrapper[4826]: I0131 08:03:25.456760 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b6bd14ce-ddb1-478c-93d2-e69f2d21972e-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mmgpq\" (UID: \"b6bd14ce-ddb1-478c-93d2-e69f2d21972e\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mmgpq" Jan 31 08:03:25 crc kubenswrapper[4826]: I0131 08:03:25.456817 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxs4j\" (UniqueName: \"kubernetes.io/projected/b6bd14ce-ddb1-478c-93d2-e69f2d21972e-kube-api-access-qxs4j\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mmgpq\" (UID: \"b6bd14ce-ddb1-478c-93d2-e69f2d21972e\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mmgpq" Jan 31 08:03:25 crc kubenswrapper[4826]: I0131 08:03:25.560596 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6bd14ce-ddb1-478c-93d2-e69f2d21972e-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mmgpq\" (UID: \"b6bd14ce-ddb1-478c-93d2-e69f2d21972e\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mmgpq" Jan 31 08:03:25 crc kubenswrapper[4826]: I0131 08:03:25.560809 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b6bd14ce-ddb1-478c-93d2-e69f2d21972e-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mmgpq\" (UID: \"b6bd14ce-ddb1-478c-93d2-e69f2d21972e\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mmgpq" Jan 31 08:03:25 crc kubenswrapper[4826]: I0131 08:03:25.561136 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxs4j\" (UniqueName: 
\"kubernetes.io/projected/b6bd14ce-ddb1-478c-93d2-e69f2d21972e-kube-api-access-qxs4j\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mmgpq\" (UID: \"b6bd14ce-ddb1-478c-93d2-e69f2d21972e\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mmgpq" Jan 31 08:03:25 crc kubenswrapper[4826]: I0131 08:03:25.564728 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6bd14ce-ddb1-478c-93d2-e69f2d21972e-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mmgpq\" (UID: \"b6bd14ce-ddb1-478c-93d2-e69f2d21972e\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mmgpq" Jan 31 08:03:25 crc kubenswrapper[4826]: I0131 08:03:25.565526 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b6bd14ce-ddb1-478c-93d2-e69f2d21972e-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mmgpq\" (UID: \"b6bd14ce-ddb1-478c-93d2-e69f2d21972e\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mmgpq" Jan 31 08:03:25 crc kubenswrapper[4826]: I0131 08:03:25.583033 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxs4j\" (UniqueName: \"kubernetes.io/projected/b6bd14ce-ddb1-478c-93d2-e69f2d21972e-kube-api-access-qxs4j\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mmgpq\" (UID: \"b6bd14ce-ddb1-478c-93d2-e69f2d21972e\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mmgpq" Jan 31 08:03:25 crc kubenswrapper[4826]: I0131 08:03:25.674077 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mmgpq" Jan 31 08:03:26 crc kubenswrapper[4826]: I0131 08:03:26.180144 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mmgpq"] Jan 31 08:03:26 crc kubenswrapper[4826]: I0131 08:03:26.265360 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mmgpq" event={"ID":"b6bd14ce-ddb1-478c-93d2-e69f2d21972e","Type":"ContainerStarted","Data":"317c44a38c53a61a5f42f2b578a4f344da968a9ff323d86829ae4e7bf426adf9"} Jan 31 08:03:28 crc kubenswrapper[4826]: I0131 08:03:28.284316 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mmgpq" event={"ID":"b6bd14ce-ddb1-478c-93d2-e69f2d21972e","Type":"ContainerStarted","Data":"6e374ed0ed726fbc91a9634b665617c78b69f988f2b9f65687cbea96bcc86dfd"} Jan 31 08:03:28 crc kubenswrapper[4826]: I0131 08:03:28.311035 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mmgpq" podStartSLOduration=1.6139983199999999 podStartE2EDuration="3.31101188s" podCreationTimestamp="2026-01-31 08:03:25 +0000 UTC" firstStartedPulling="2026-01-31 08:03:26.184938265 +0000 UTC m=+1638.038824624" lastFinishedPulling="2026-01-31 08:03:27.881951825 +0000 UTC m=+1639.735838184" observedRunningTime="2026-01-31 08:03:28.303816565 +0000 UTC m=+1640.157702944" watchObservedRunningTime="2026-01-31 08:03:28.31101188 +0000 UTC m=+1640.164898259" Jan 31 08:03:30 crc kubenswrapper[4826]: I0131 08:03:30.256592 4826 scope.go:117] "RemoveContainer" containerID="dcb54ceb14638425d7c3e1bd2540c556cff55386971c0e3c2230498a22c75892" Jan 31 08:03:30 crc kubenswrapper[4826]: 
I0131 08:03:30.294652 4826 scope.go:117] "RemoveContainer" containerID="ff91b9015bc6e98ca373c23c5552e404a0ac65d759c64082788acecfffb44106" Jan 31 08:03:30 crc kubenswrapper[4826]: I0131 08:03:30.329524 4826 scope.go:117] "RemoveContainer" containerID="6e5160ba12607769c5fe1222398e91b6fdf4f4ed5ea3261b33a71f78e9b62d8e" Jan 31 08:03:30 crc kubenswrapper[4826]: I0131 08:03:30.399329 4826 scope.go:117] "RemoveContainer" containerID="33cb353f6728c82219d9bc32cca6dd875419c9f47f07aef7e2267c6986be8836" Jan 31 08:03:30 crc kubenswrapper[4826]: I0131 08:03:30.433115 4826 scope.go:117] "RemoveContainer" containerID="be65d66a233cd00a88dd48466b6167ba4ac3e8397761529c95b6d2e17069f9d3" Jan 31 08:03:30 crc kubenswrapper[4826]: I0131 08:03:30.466234 4826 scope.go:117] "RemoveContainer" containerID="8975f27dd1393cf0ffcb8624c6e1455bff5a5cc07d3a5146dcade2569f0192b8" Jan 31 08:03:30 crc kubenswrapper[4826]: I0131 08:03:30.516553 4826 scope.go:117] "RemoveContainer" containerID="05e5ae902e9a3a42bf0327ca48436279bf82124a19ed3c715e48083592dc5445" Jan 31 08:03:30 crc kubenswrapper[4826]: I0131 08:03:30.540159 4826 scope.go:117] "RemoveContainer" containerID="ca2b2b1f3f77e0b002878a601fa78cc2cca5bf7dbfefae130994b2cf03675936" Jan 31 08:03:32 crc kubenswrapper[4826]: I0131 08:03:32.323021 4826 generic.go:334] "Generic (PLEG): container finished" podID="b6bd14ce-ddb1-478c-93d2-e69f2d21972e" containerID="6e374ed0ed726fbc91a9634b665617c78b69f988f2b9f65687cbea96bcc86dfd" exitCode=0 Jan 31 08:03:32 crc kubenswrapper[4826]: I0131 08:03:32.323185 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mmgpq" event={"ID":"b6bd14ce-ddb1-478c-93d2-e69f2d21972e","Type":"ContainerDied","Data":"6e374ed0ed726fbc91a9634b665617c78b69f988f2b9f65687cbea96bcc86dfd"} Jan 31 08:03:33 crc kubenswrapper[4826]: I0131 08:03:33.791484 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mmgpq" Jan 31 08:03:33 crc kubenswrapper[4826]: I0131 08:03:33.942148 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b6bd14ce-ddb1-478c-93d2-e69f2d21972e-ssh-key-openstack-edpm-ipam\") pod \"b6bd14ce-ddb1-478c-93d2-e69f2d21972e\" (UID: \"b6bd14ce-ddb1-478c-93d2-e69f2d21972e\") " Jan 31 08:03:33 crc kubenswrapper[4826]: I0131 08:03:33.942575 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6bd14ce-ddb1-478c-93d2-e69f2d21972e-inventory\") pod \"b6bd14ce-ddb1-478c-93d2-e69f2d21972e\" (UID: \"b6bd14ce-ddb1-478c-93d2-e69f2d21972e\") " Jan 31 08:03:33 crc kubenswrapper[4826]: I0131 08:03:33.942612 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxs4j\" (UniqueName: \"kubernetes.io/projected/b6bd14ce-ddb1-478c-93d2-e69f2d21972e-kube-api-access-qxs4j\") pod \"b6bd14ce-ddb1-478c-93d2-e69f2d21972e\" (UID: \"b6bd14ce-ddb1-478c-93d2-e69f2d21972e\") " Jan 31 08:03:33 crc kubenswrapper[4826]: I0131 08:03:33.948654 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6bd14ce-ddb1-478c-93d2-e69f2d21972e-kube-api-access-qxs4j" (OuterVolumeSpecName: "kube-api-access-qxs4j") pod "b6bd14ce-ddb1-478c-93d2-e69f2d21972e" (UID: "b6bd14ce-ddb1-478c-93d2-e69f2d21972e"). InnerVolumeSpecName "kube-api-access-qxs4j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:03:33 crc kubenswrapper[4826]: I0131 08:03:33.977144 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6bd14ce-ddb1-478c-93d2-e69f2d21972e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b6bd14ce-ddb1-478c-93d2-e69f2d21972e" (UID: "b6bd14ce-ddb1-478c-93d2-e69f2d21972e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:03:33 crc kubenswrapper[4826]: I0131 08:03:33.987790 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6bd14ce-ddb1-478c-93d2-e69f2d21972e-inventory" (OuterVolumeSpecName: "inventory") pod "b6bd14ce-ddb1-478c-93d2-e69f2d21972e" (UID: "b6bd14ce-ddb1-478c-93d2-e69f2d21972e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:03:34 crc kubenswrapper[4826]: I0131 08:03:34.045110 4826 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b6bd14ce-ddb1-478c-93d2-e69f2d21972e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 08:03:34 crc kubenswrapper[4826]: I0131 08:03:34.045150 4826 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6bd14ce-ddb1-478c-93d2-e69f2d21972e-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 08:03:34 crc kubenswrapper[4826]: I0131 08:03:34.045162 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxs4j\" (UniqueName: \"kubernetes.io/projected/b6bd14ce-ddb1-478c-93d2-e69f2d21972e-kube-api-access-qxs4j\") on node \"crc\" DevicePath \"\"" Jan 31 08:03:34 crc kubenswrapper[4826]: I0131 08:03:34.344095 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mmgpq" event={"ID":"b6bd14ce-ddb1-478c-93d2-e69f2d21972e","Type":"ContainerDied","Data":"317c44a38c53a61a5f42f2b578a4f344da968a9ff323d86829ae4e7bf426adf9"} Jan 31 08:03:34 crc kubenswrapper[4826]: I0131 08:03:34.344140 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="317c44a38c53a61a5f42f2b578a4f344da968a9ff323d86829ae4e7bf426adf9" Jan 31 08:03:34 crc kubenswrapper[4826]: I0131 08:03:34.344240 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mmgpq" Jan 31 08:03:34 crc kubenswrapper[4826]: I0131 08:03:34.401816 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blb8r"] Jan 31 08:03:34 crc kubenswrapper[4826]: E0131 08:03:34.402196 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6bd14ce-ddb1-478c-93d2-e69f2d21972e" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 31 08:03:34 crc kubenswrapper[4826]: I0131 08:03:34.402214 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6bd14ce-ddb1-478c-93d2-e69f2d21972e" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 31 08:03:34 crc kubenswrapper[4826]: I0131 08:03:34.402397 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6bd14ce-ddb1-478c-93d2-e69f2d21972e" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 31 08:03:34 crc kubenswrapper[4826]: I0131 08:03:34.402961 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blb8r" Jan 31 08:03:34 crc kubenswrapper[4826]: I0131 08:03:34.405624 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 08:03:34 crc kubenswrapper[4826]: I0131 08:03:34.405955 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 08:03:34 crc kubenswrapper[4826]: I0131 08:03:34.406185 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 08:03:34 crc kubenswrapper[4826]: I0131 08:03:34.407686 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hbmks" Jan 31 08:03:34 crc kubenswrapper[4826]: I0131 08:03:34.415083 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blb8r"] Jan 31 08:03:34 crc kubenswrapper[4826]: I0131 08:03:34.553988 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/21086bdf-1c9c-408d-9b67-95b95ccd493d-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-blb8r\" (UID: \"21086bdf-1c9c-408d-9b67-95b95ccd493d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blb8r" Jan 31 08:03:34 crc kubenswrapper[4826]: I0131 08:03:34.554098 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/21086bdf-1c9c-408d-9b67-95b95ccd493d-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-blb8r\" (UID: \"21086bdf-1c9c-408d-9b67-95b95ccd493d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blb8r" Jan 31 08:03:34 crc kubenswrapper[4826]: I0131 08:03:34.554151 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz65q\" (UniqueName: \"kubernetes.io/projected/21086bdf-1c9c-408d-9b67-95b95ccd493d-kube-api-access-hz65q\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-blb8r\" (UID: \"21086bdf-1c9c-408d-9b67-95b95ccd493d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blb8r" Jan 31 08:03:34 crc kubenswrapper[4826]: I0131 08:03:34.656069 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/21086bdf-1c9c-408d-9b67-95b95ccd493d-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-blb8r\" (UID: \"21086bdf-1c9c-408d-9b67-95b95ccd493d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blb8r" Jan 31 08:03:34 crc kubenswrapper[4826]: I0131 08:03:34.656140 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hz65q\" (UniqueName: \"kubernetes.io/projected/21086bdf-1c9c-408d-9b67-95b95ccd493d-kube-api-access-hz65q\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-blb8r\" (UID: \"21086bdf-1c9c-408d-9b67-95b95ccd493d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blb8r" Jan 31 08:03:34 crc kubenswrapper[4826]: I0131 08:03:34.656238 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/21086bdf-1c9c-408d-9b67-95b95ccd493d-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-blb8r\" (UID: \"21086bdf-1c9c-408d-9b67-95b95ccd493d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blb8r" Jan 31 08:03:34 crc kubenswrapper[4826]: I0131 08:03:34.660667 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/21086bdf-1c9c-408d-9b67-95b95ccd493d-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-blb8r\" (UID: \"21086bdf-1c9c-408d-9b67-95b95ccd493d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blb8r" Jan 31 08:03:34 crc kubenswrapper[4826]: I0131 08:03:34.660705 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/21086bdf-1c9c-408d-9b67-95b95ccd493d-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-blb8r\" (UID: \"21086bdf-1c9c-408d-9b67-95b95ccd493d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blb8r" Jan 31 08:03:34 crc kubenswrapper[4826]: I0131 08:03:34.673413 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hz65q\" (UniqueName: \"kubernetes.io/projected/21086bdf-1c9c-408d-9b67-95b95ccd493d-kube-api-access-hz65q\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-blb8r\" (UID: \"21086bdf-1c9c-408d-9b67-95b95ccd493d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blb8r" Jan 31 08:03:34 crc kubenswrapper[4826]: I0131 08:03:34.720759 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blb8r" Jan 31 08:03:35 crc kubenswrapper[4826]: I0131 08:03:35.224348 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blb8r"] Jan 31 08:03:35 crc kubenswrapper[4826]: I0131 08:03:35.356648 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blb8r" event={"ID":"21086bdf-1c9c-408d-9b67-95b95ccd493d","Type":"ContainerStarted","Data":"85de408675a21f46a669697dcb7eb2f903fc0864ce73f13c71950922ba8ceaa8"} Jan 31 08:03:36 crc kubenswrapper[4826]: I0131 08:03:36.366529 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blb8r" event={"ID":"21086bdf-1c9c-408d-9b67-95b95ccd493d","Type":"ContainerStarted","Data":"8799070135d17a153e99dc422f02ba177446522201f6bdbf732206c38742e6c9"} Jan 31 08:03:36 crc kubenswrapper[4826]: I0131 08:03:36.397433 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blb8r" podStartSLOduration=1.861568112 podStartE2EDuration="2.397407199s" podCreationTimestamp="2026-01-31 08:03:34 +0000 UTC" firstStartedPulling="2026-01-31 08:03:35.225069188 +0000 UTC m=+1647.078955547" lastFinishedPulling="2026-01-31 08:03:35.760908255 +0000 UTC m=+1647.614794634" observedRunningTime="2026-01-31 08:03:36.382433674 +0000 UTC m=+1648.236320063" watchObservedRunningTime="2026-01-31 08:03:36.397407199 +0000 UTC m=+1648.251293588" Jan 31 08:03:38 crc kubenswrapper[4826]: I0131 08:03:38.054310 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-4swtq"] Jan 31 08:03:38 crc kubenswrapper[4826]: I0131 08:03:38.063817 4826 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-4swtq"] Jan 31 08:03:38 crc kubenswrapper[4826]: I0131 08:03:38.824225 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cc9cec2-3064-4a80-8621-f84c37994a96" path="/var/lib/kubelet/pods/8cc9cec2-3064-4a80-8621-f84c37994a96/volumes" Jan 31 08:03:39 crc kubenswrapper[4826]: I0131 08:03:39.809499 4826 scope.go:117] "RemoveContainer" containerID="bf69e7db4a04328e91a880deded774211ce3a37179119b4a3e4db7929cefe4ea" Jan 31 08:03:39 crc kubenswrapper[4826]: E0131 08:03:39.809862 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:03:41 crc kubenswrapper[4826]: I0131 08:03:41.039901 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-s6dqh"] Jan 31 08:03:41 crc kubenswrapper[4826]: I0131 08:03:41.057935 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-s6dqh"] Jan 31 08:03:42 crc kubenswrapper[4826]: I0131 08:03:42.825880 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d994c6b6-3dd2-4231-b80b-b83c88fa860f" path="/var/lib/kubelet/pods/d994c6b6-3dd2-4231-b80b-b83c88fa860f/volumes" Jan 31 08:03:50 crc kubenswrapper[4826]: I0131 08:03:50.028674 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-zfz9q"] Jan 31 08:03:50 crc kubenswrapper[4826]: I0131 08:03:50.038558 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-zfz9q"] Jan 31 08:03:50 crc kubenswrapper[4826]: I0131 08:03:50.809391 4826 scope.go:117] "RemoveContainer" containerID="bf69e7db4a04328e91a880deded774211ce3a37179119b4a3e4db7929cefe4ea" Jan 31 08:03:50 crc kubenswrapper[4826]: E0131 08:03:50.809623 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:03:50 crc kubenswrapper[4826]: I0131 08:03:50.819779 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08d490ca-1c3d-4823-8d2f-b4e2fca83778" path="/var/lib/kubelet/pods/08d490ca-1c3d-4823-8d2f-b4e2fca83778/volumes" Jan 31 08:04:03 crc kubenswrapper[4826]: I0131 08:04:03.809196 4826 scope.go:117] "RemoveContainer" containerID="bf69e7db4a04328e91a880deded774211ce3a37179119b4a3e4db7929cefe4ea" Jan 31 08:04:03 crc kubenswrapper[4826]: E0131 08:04:03.810058 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:04:09 crc kubenswrapper[4826]: I0131 08:04:09.056685 
4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-7wvpt"] Jan 31 08:04:09 crc kubenswrapper[4826]: I0131 08:04:09.064341 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-7wvpt"] Jan 31 08:04:10 crc kubenswrapper[4826]: I0131 08:04:10.823325 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a88d711d-a1fe-4114-955e-167684da9ecb" path="/var/lib/kubelet/pods/a88d711d-a1fe-4114-955e-167684da9ecb/volumes" Jan 31 08:04:16 crc kubenswrapper[4826]: I0131 08:04:16.809652 4826 scope.go:117] "RemoveContainer" containerID="bf69e7db4a04328e91a880deded774211ce3a37179119b4a3e4db7929cefe4ea" Jan 31 08:04:16 crc kubenswrapper[4826]: E0131 08:04:16.810686 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:04:21 crc kubenswrapper[4826]: I0131 08:04:21.746344 4826 generic.go:334] "Generic (PLEG): container finished" podID="21086bdf-1c9c-408d-9b67-95b95ccd493d" containerID="8799070135d17a153e99dc422f02ba177446522201f6bdbf732206c38742e6c9" exitCode=0 Jan 31 08:04:21 crc kubenswrapper[4826]: I0131 08:04:21.746461 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blb8r" event={"ID":"21086bdf-1c9c-408d-9b67-95b95ccd493d","Type":"ContainerDied","Data":"8799070135d17a153e99dc422f02ba177446522201f6bdbf732206c38742e6c9"} Jan 31 08:04:23 crc kubenswrapper[4826]: I0131 08:04:23.176079 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blb8r" Jan 31 08:04:23 crc kubenswrapper[4826]: I0131 08:04:23.296787 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/21086bdf-1c9c-408d-9b67-95b95ccd493d-ssh-key-openstack-edpm-ipam\") pod \"21086bdf-1c9c-408d-9b67-95b95ccd493d\" (UID: \"21086bdf-1c9c-408d-9b67-95b95ccd493d\") " Jan 31 08:04:23 crc kubenswrapper[4826]: I0131 08:04:23.296981 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/21086bdf-1c9c-408d-9b67-95b95ccd493d-inventory\") pod \"21086bdf-1c9c-408d-9b67-95b95ccd493d\" (UID: \"21086bdf-1c9c-408d-9b67-95b95ccd493d\") " Jan 31 08:04:23 crc kubenswrapper[4826]: I0131 08:04:23.297023 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hz65q\" (UniqueName: \"kubernetes.io/projected/21086bdf-1c9c-408d-9b67-95b95ccd493d-kube-api-access-hz65q\") pod \"21086bdf-1c9c-408d-9b67-95b95ccd493d\" (UID: \"21086bdf-1c9c-408d-9b67-95b95ccd493d\") " Jan 31 08:04:23 crc kubenswrapper[4826]: I0131 08:04:23.302506 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21086bdf-1c9c-408d-9b67-95b95ccd493d-kube-api-access-hz65q" (OuterVolumeSpecName: "kube-api-access-hz65q") pod "21086bdf-1c9c-408d-9b67-95b95ccd493d" (UID: "21086bdf-1c9c-408d-9b67-95b95ccd493d"). InnerVolumeSpecName "kube-api-access-hz65q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:04:23 crc kubenswrapper[4826]: I0131 08:04:23.321768 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21086bdf-1c9c-408d-9b67-95b95ccd493d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "21086bdf-1c9c-408d-9b67-95b95ccd493d" (UID: "21086bdf-1c9c-408d-9b67-95b95ccd493d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:04:23 crc kubenswrapper[4826]: I0131 08:04:23.325957 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21086bdf-1c9c-408d-9b67-95b95ccd493d-inventory" (OuterVolumeSpecName: "inventory") pod "21086bdf-1c9c-408d-9b67-95b95ccd493d" (UID: "21086bdf-1c9c-408d-9b67-95b95ccd493d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:04:23 crc kubenswrapper[4826]: I0131 08:04:23.398714 4826 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/21086bdf-1c9c-408d-9b67-95b95ccd493d-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 08:04:23 crc kubenswrapper[4826]: I0131 08:04:23.398749 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hz65q\" (UniqueName: \"kubernetes.io/projected/21086bdf-1c9c-408d-9b67-95b95ccd493d-kube-api-access-hz65q\") on node \"crc\" DevicePath \"\"" Jan 31 08:04:23 crc kubenswrapper[4826]: I0131 08:04:23.398761 4826 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/21086bdf-1c9c-408d-9b67-95b95ccd493d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 08:04:23 crc kubenswrapper[4826]: I0131 08:04:23.766062 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blb8r" event={"ID":"21086bdf-1c9c-408d-9b67-95b95ccd493d","Type":"ContainerDied","Data":"85de408675a21f46a669697dcb7eb2f903fc0864ce73f13c71950922ba8ceaa8"} Jan 31 08:04:23 crc kubenswrapper[4826]: I0131 08:04:23.766102 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85de408675a21f46a669697dcb7eb2f903fc0864ce73f13c71950922ba8ceaa8" Jan 31 08:04:23 crc kubenswrapper[4826]: I0131 08:04:23.766119 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blb8r" Jan 31 08:04:23 crc kubenswrapper[4826]: I0131 08:04:23.851565 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-sbw6s"] Jan 31 08:04:23 crc kubenswrapper[4826]: E0131 08:04:23.852070 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21086bdf-1c9c-408d-9b67-95b95ccd493d" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 31 08:04:23 crc kubenswrapper[4826]: I0131 08:04:23.852093 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="21086bdf-1c9c-408d-9b67-95b95ccd493d" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 31 08:04:23 crc kubenswrapper[4826]: I0131 08:04:23.852317 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="21086bdf-1c9c-408d-9b67-95b95ccd493d" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 31 08:04:23 crc kubenswrapper[4826]: I0131 08:04:23.853102 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-sbw6s" Jan 31 08:04:23 crc kubenswrapper[4826]: I0131 08:04:23.856303 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hbmks" Jan 31 08:04:23 crc kubenswrapper[4826]: I0131 08:04:23.858884 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 08:04:23 crc kubenswrapper[4826]: I0131 08:04:23.859020 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 08:04:23 crc kubenswrapper[4826]: I0131 08:04:23.859277 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 08:04:23 crc kubenswrapper[4826]: I0131 08:04:23.867212 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-sbw6s"] Jan 31 08:04:24 crc kubenswrapper[4826]: I0131 08:04:24.015834 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pf7z\" (UniqueName: \"kubernetes.io/projected/f726af4d-0e38-4afa-bf72-effd72efa5a6-kube-api-access-9pf7z\") pod \"ssh-known-hosts-edpm-deployment-sbw6s\" (UID: \"f726af4d-0e38-4afa-bf72-effd72efa5a6\") " pod="openstack/ssh-known-hosts-edpm-deployment-sbw6s" Jan 31 08:04:24 crc kubenswrapper[4826]: I0131 08:04:24.015962 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f726af4d-0e38-4afa-bf72-effd72efa5a6-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-sbw6s\" (UID: \"f726af4d-0e38-4afa-bf72-effd72efa5a6\") " pod="openstack/ssh-known-hosts-edpm-deployment-sbw6s" Jan 31 08:04:24 crc kubenswrapper[4826]: I0131 08:04:24.016132 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f726af4d-0e38-4afa-bf72-effd72efa5a6-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-sbw6s\" (UID: \"f726af4d-0e38-4afa-bf72-effd72efa5a6\") " pod="openstack/ssh-known-hosts-edpm-deployment-sbw6s" Jan 31 08:04:24 crc kubenswrapper[4826]: I0131 08:04:24.117421 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f726af4d-0e38-4afa-bf72-effd72efa5a6-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-sbw6s\" (UID: \"f726af4d-0e38-4afa-bf72-effd72efa5a6\") " pod="openstack/ssh-known-hosts-edpm-deployment-sbw6s" Jan 31 08:04:24 crc kubenswrapper[4826]: I0131 08:04:24.117507 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f726af4d-0e38-4afa-bf72-effd72efa5a6-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-sbw6s\" (UID: \"f726af4d-0e38-4afa-bf72-effd72efa5a6\") " pod="openstack/ssh-known-hosts-edpm-deployment-sbw6s" Jan 31 08:04:24 crc kubenswrapper[4826]: I0131 08:04:24.117577 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pf7z\" (UniqueName: \"kubernetes.io/projected/f726af4d-0e38-4afa-bf72-effd72efa5a6-kube-api-access-9pf7z\") pod \"ssh-known-hosts-edpm-deployment-sbw6s\" (UID: \"f726af4d-0e38-4afa-bf72-effd72efa5a6\") " pod="openstack/ssh-known-hosts-edpm-deployment-sbw6s" Jan 31 08:04:24 crc 
kubenswrapper[4826]: I0131 08:04:24.120856 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f726af4d-0e38-4afa-bf72-effd72efa5a6-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-sbw6s\" (UID: \"f726af4d-0e38-4afa-bf72-effd72efa5a6\") " pod="openstack/ssh-known-hosts-edpm-deployment-sbw6s" Jan 31 08:04:24 crc kubenswrapper[4826]: I0131 08:04:24.121474 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f726af4d-0e38-4afa-bf72-effd72efa5a6-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-sbw6s\" (UID: \"f726af4d-0e38-4afa-bf72-effd72efa5a6\") " pod="openstack/ssh-known-hosts-edpm-deployment-sbw6s" Jan 31 08:04:24 crc kubenswrapper[4826]: I0131 08:04:24.143786 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pf7z\" (UniqueName: \"kubernetes.io/projected/f726af4d-0e38-4afa-bf72-effd72efa5a6-kube-api-access-9pf7z\") pod \"ssh-known-hosts-edpm-deployment-sbw6s\" (UID: \"f726af4d-0e38-4afa-bf72-effd72efa5a6\") " pod="openstack/ssh-known-hosts-edpm-deployment-sbw6s" Jan 31 08:04:24 crc kubenswrapper[4826]: I0131 08:04:24.171333 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-sbw6s" Jan 31 08:04:24 crc kubenswrapper[4826]: I0131 08:04:24.662110 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-sbw6s"] Jan 31 08:04:24 crc kubenswrapper[4826]: I0131 08:04:24.774865 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-sbw6s" event={"ID":"f726af4d-0e38-4afa-bf72-effd72efa5a6","Type":"ContainerStarted","Data":"cadf99d3df64f8576d67eda956dc09a220a3e6c3f18ca88fce819a3eba8118c2"} Jan 31 08:04:25 crc kubenswrapper[4826]: I0131 08:04:25.785019 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-sbw6s" event={"ID":"f726af4d-0e38-4afa-bf72-effd72efa5a6","Type":"ContainerStarted","Data":"c54294f595ed189f6604c48a8da10bd9e5d7f1ceac06d74e94ec2eb4abfdbe88"} Jan 31 08:04:25 crc kubenswrapper[4826]: I0131 08:04:25.802716 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-sbw6s" podStartSLOduration=2.123705324 podStartE2EDuration="2.802696955s" podCreationTimestamp="2026-01-31 08:04:23 +0000 UTC" firstStartedPulling="2026-01-31 08:04:24.679368846 +0000 UTC m=+1696.533255205" lastFinishedPulling="2026-01-31 08:04:25.358360477 +0000 UTC m=+1697.212246836" observedRunningTime="2026-01-31 08:04:25.800180444 +0000 UTC m=+1697.654066803" watchObservedRunningTime="2026-01-31 08:04:25.802696955 +0000 UTC m=+1697.656583314" Jan 31 08:04:27 crc kubenswrapper[4826]: I0131 08:04:27.039753 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-9tmn2"] Jan 31 08:04:27 crc kubenswrapper[4826]: I0131 08:04:27.047846 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-9tmn2"] Jan 31 08:04:28 crc kubenswrapper[4826]: I0131 08:04:28.821828 4826 scope.go:117] "RemoveContainer" containerID="bf69e7db4a04328e91a880deded774211ce3a37179119b4a3e4db7929cefe4ea" Jan 31 08:04:28 crc kubenswrapper[4826]: E0131 08:04:28.822271 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:04:28 crc kubenswrapper[4826]: I0131 08:04:28.822561 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f870e24-0e35-4ee6-805b-f81617554dc2" path="/var/lib/kubelet/pods/5f870e24-0e35-4ee6-805b-f81617554dc2/volumes" Jan 31 08:04:30 crc kubenswrapper[4826]: I0131 08:04:30.701058 4826 scope.go:117] "RemoveContainer" containerID="6430c1a2d7552a478c284e52145067d9f399ba42e28b9104d38881f5089df21f" Jan 31 08:04:30 crc kubenswrapper[4826]: I0131 08:04:30.785729 4826 scope.go:117] "RemoveContainer" containerID="88f75ce2f7fe85a623b4c0ac927e45c9c8cebe12719557f5d69e1d9ddd4aeb7f" Jan 31 08:04:31 crc kubenswrapper[4826]: I0131 08:04:31.078230 4826 scope.go:117] "RemoveContainer" containerID="6b98c2944e3fdc39da50df7f79ca2ff48ee1e3069a0e2528c32706cf8f89b727" Jan 31 08:04:31 crc kubenswrapper[4826]: I0131 08:04:31.208070 4826 scope.go:117] "RemoveContainer" containerID="e399ecb8bb0c54125335d07b85223e893ce5cb5bd269a44baa2826ddae9db5dc" Jan 31 08:04:31 crc kubenswrapper[4826]: I0131 08:04:31.326749 4826 scope.go:117] "RemoveContainer" containerID="6bfb7cae2f9574ea8239470c7fa42ec08a47a4f302cb2ddc409a1e7d904a9217" Jan 31 08:04:32 crc kubenswrapper[4826]: I0131 08:04:32.880007 4826 generic.go:334] "Generic (PLEG): container finished" podID="f726af4d-0e38-4afa-bf72-effd72efa5a6" containerID="c54294f595ed189f6604c48a8da10bd9e5d7f1ceac06d74e94ec2eb4abfdbe88" exitCode=0 Jan 31 08:04:32 crc kubenswrapper[4826]: I0131 08:04:32.880111 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-sbw6s" event={"ID":"f726af4d-0e38-4afa-bf72-effd72efa5a6","Type":"ContainerDied","Data":"c54294f595ed189f6604c48a8da10bd9e5d7f1ceac06d74e94ec2eb4abfdbe88"} Jan 31 08:04:34 crc kubenswrapper[4826]: I0131 08:04:34.356360 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-sbw6s" Jan 31 08:04:34 crc kubenswrapper[4826]: I0131 08:04:34.539756 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f726af4d-0e38-4afa-bf72-effd72efa5a6-inventory-0\") pod \"f726af4d-0e38-4afa-bf72-effd72efa5a6\" (UID: \"f726af4d-0e38-4afa-bf72-effd72efa5a6\") " Jan 31 08:04:34 crc kubenswrapper[4826]: I0131 08:04:34.539939 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f726af4d-0e38-4afa-bf72-effd72efa5a6-ssh-key-openstack-edpm-ipam\") pod \"f726af4d-0e38-4afa-bf72-effd72efa5a6\" (UID: \"f726af4d-0e38-4afa-bf72-effd72efa5a6\") " Jan 31 08:04:34 crc kubenswrapper[4826]: I0131 08:04:34.540128 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pf7z\" (UniqueName: \"kubernetes.io/projected/f726af4d-0e38-4afa-bf72-effd72efa5a6-kube-api-access-9pf7z\") pod \"f726af4d-0e38-4afa-bf72-effd72efa5a6\" (UID: \"f726af4d-0e38-4afa-bf72-effd72efa5a6\") " Jan 31 08:04:34 crc kubenswrapper[4826]: I0131 08:04:34.546765 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f726af4d-0e38-4afa-bf72-effd72efa5a6-kube-api-access-9pf7z" (OuterVolumeSpecName: "kube-api-access-9pf7z") pod "f726af4d-0e38-4afa-bf72-effd72efa5a6" (UID: "f726af4d-0e38-4afa-bf72-effd72efa5a6"). InnerVolumeSpecName "kube-api-access-9pf7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:04:34 crc kubenswrapper[4826]: I0131 08:04:34.567619 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f726af4d-0e38-4afa-bf72-effd72efa5a6-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "f726af4d-0e38-4afa-bf72-effd72efa5a6" (UID: "f726af4d-0e38-4afa-bf72-effd72efa5a6"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:04:34 crc kubenswrapper[4826]: I0131 08:04:34.592704 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f726af4d-0e38-4afa-bf72-effd72efa5a6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f726af4d-0e38-4afa-bf72-effd72efa5a6" (UID: "f726af4d-0e38-4afa-bf72-effd72efa5a6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:04:34 crc kubenswrapper[4826]: I0131 08:04:34.642650 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pf7z\" (UniqueName: \"kubernetes.io/projected/f726af4d-0e38-4afa-bf72-effd72efa5a6-kube-api-access-9pf7z\") on node \"crc\" DevicePath \"\"" Jan 31 08:04:34 crc kubenswrapper[4826]: I0131 08:04:34.642705 4826 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f726af4d-0e38-4afa-bf72-effd72efa5a6-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 31 08:04:34 crc kubenswrapper[4826]: I0131 08:04:34.642720 4826 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f726af4d-0e38-4afa-bf72-effd72efa5a6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 08:04:34 crc kubenswrapper[4826]: I0131 08:04:34.899111 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-sbw6s" event={"ID":"f726af4d-0e38-4afa-bf72-effd72efa5a6","Type":"ContainerDied","Data":"cadf99d3df64f8576d67eda956dc09a220a3e6c3f18ca88fce819a3eba8118c2"} Jan 31 08:04:34 crc kubenswrapper[4826]: I0131 08:04:34.899176 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cadf99d3df64f8576d67eda956dc09a220a3e6c3f18ca88fce819a3eba8118c2" Jan 31 08:04:34 crc kubenswrapper[4826]: I0131 08:04:34.899313 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-sbw6s" Jan 31 08:04:34 crc kubenswrapper[4826]: I0131 08:04:34.984936 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-fppv7"] Jan 31 08:04:34 crc kubenswrapper[4826]: E0131 08:04:34.985430 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f726af4d-0e38-4afa-bf72-effd72efa5a6" containerName="ssh-known-hosts-edpm-deployment" Jan 31 08:04:34 crc kubenswrapper[4826]: I0131 08:04:34.985453 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f726af4d-0e38-4afa-bf72-effd72efa5a6" containerName="ssh-known-hosts-edpm-deployment" Jan 31 08:04:34 crc kubenswrapper[4826]: I0131 08:04:34.985665 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="f726af4d-0e38-4afa-bf72-effd72efa5a6" containerName="ssh-known-hosts-edpm-deployment" Jan 31 08:04:34 crc kubenswrapper[4826]: I0131 08:04:34.986375 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fppv7" Jan 31 08:04:34 crc kubenswrapper[4826]: I0131 08:04:34.989116 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 08:04:34 crc kubenswrapper[4826]: I0131 08:04:34.989455 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 08:04:34 crc kubenswrapper[4826]: I0131 08:04:34.989740 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 08:04:34 crc kubenswrapper[4826]: I0131 08:04:34.989742 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hbmks" Jan 31 08:04:35 crc kubenswrapper[4826]: I0131 08:04:35.006061 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-fppv7"] Jan 31 08:04:35 crc kubenswrapper[4826]: I0131 08:04:35.153444 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wmrs\" (UniqueName: \"kubernetes.io/projected/ad5a3527-00c3-4ad0-bc80-884479558924-kube-api-access-6wmrs\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fppv7\" (UID: \"ad5a3527-00c3-4ad0-bc80-884479558924\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fppv7" Jan 31 08:04:35 crc kubenswrapper[4826]: I0131 08:04:35.153638 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad5a3527-00c3-4ad0-bc80-884479558924-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fppv7\" (UID: \"ad5a3527-00c3-4ad0-bc80-884479558924\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fppv7" Jan 31 08:04:35 crc kubenswrapper[4826]: I0131 08:04:35.153832 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ad5a3527-00c3-4ad0-bc80-884479558924-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fppv7\" (UID: \"ad5a3527-00c3-4ad0-bc80-884479558924\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fppv7" Jan 31 08:04:35 crc kubenswrapper[4826]: I0131 08:04:35.255686 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad5a3527-00c3-4ad0-bc80-884479558924-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fppv7\" (UID: \"ad5a3527-00c3-4ad0-bc80-884479558924\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fppv7" Jan 31 08:04:35 crc kubenswrapper[4826]: I0131 08:04:35.255798 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ad5a3527-00c3-4ad0-bc80-884479558924-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fppv7\" (UID: \"ad5a3527-00c3-4ad0-bc80-884479558924\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fppv7" Jan 31 08:04:35 crc kubenswrapper[4826]: I0131 08:04:35.255990 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wmrs\" (UniqueName: \"kubernetes.io/projected/ad5a3527-00c3-4ad0-bc80-884479558924-kube-api-access-6wmrs\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-fppv7\" (UID: \"ad5a3527-00c3-4ad0-bc80-884479558924\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fppv7" Jan 31 08:04:35 crc kubenswrapper[4826]: I0131 08:04:35.261816 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ad5a3527-00c3-4ad0-bc80-884479558924-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fppv7\" (UID: \"ad5a3527-00c3-4ad0-bc80-884479558924\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fppv7" Jan 31 08:04:35 crc kubenswrapper[4826]: I0131 08:04:35.262001 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad5a3527-00c3-4ad0-bc80-884479558924-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fppv7\" (UID: \"ad5a3527-00c3-4ad0-bc80-884479558924\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fppv7" Jan 31 08:04:35 crc kubenswrapper[4826]: I0131 08:04:35.274892 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wmrs\" (UniqueName: \"kubernetes.io/projected/ad5a3527-00c3-4ad0-bc80-884479558924-kube-api-access-6wmrs\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fppv7\" (UID: \"ad5a3527-00c3-4ad0-bc80-884479558924\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fppv7" Jan 31 08:04:35 crc kubenswrapper[4826]: I0131 08:04:35.302867 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fppv7" Jan 31 08:04:35 crc kubenswrapper[4826]: I0131 08:04:35.809602 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-fppv7"] Jan 31 08:04:35 crc kubenswrapper[4826]: I0131 08:04:35.909138 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fppv7" event={"ID":"ad5a3527-00c3-4ad0-bc80-884479558924","Type":"ContainerStarted","Data":"e2df2645373c346a758e20859bc1afe72579f5fbba34f79ec8a7f6b9d5c4e31b"} Jan 31 08:04:36 crc kubenswrapper[4826]: I0131 08:04:36.919181 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fppv7" event={"ID":"ad5a3527-00c3-4ad0-bc80-884479558924","Type":"ContainerStarted","Data":"22dc4b0a039b5d80b0c1dda03e56a49c3df336b5d03402f6beeec466ee119e69"} Jan 31 08:04:36 crc kubenswrapper[4826]: I0131 08:04:36.947141 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fppv7" podStartSLOduration=2.565131175 podStartE2EDuration="2.947123733s" podCreationTimestamp="2026-01-31 08:04:34 +0000 UTC" firstStartedPulling="2026-01-31 08:04:35.817436544 +0000 UTC m=+1707.671322903" lastFinishedPulling="2026-01-31 08:04:36.199429102 +0000 UTC m=+1708.053315461" observedRunningTime="2026-01-31 08:04:36.935956496 +0000 UTC m=+1708.789842875" watchObservedRunningTime="2026-01-31 08:04:36.947123733 +0000 UTC m=+1708.801010092" Jan 31 08:04:40 crc kubenswrapper[4826]: I0131 08:04:40.034281 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-k75dt"] Jan 31 08:04:40 crc kubenswrapper[4826]: I0131 08:04:40.048716 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-k75dt"] Jan 31 08:04:40 crc kubenswrapper[4826]: I0131 08:04:40.820885 4826 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3626f21a-c324-4d3f-9aad-3248a07896da" path="/var/lib/kubelet/pods/3626f21a-c324-4d3f-9aad-3248a07896da/volumes" Jan 31 08:04:41 crc kubenswrapper[4826]: I0131 08:04:41.029907 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-dl84z"] Jan 31 08:04:41 crc kubenswrapper[4826]: I0131 08:04:41.043154 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-5029-account-create-update-h2wlb"] Jan 31 08:04:41 crc kubenswrapper[4826]: I0131 08:04:41.051748 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-42b8-account-create-update-tm77r"] Jan 31 08:04:41 crc kubenswrapper[4826]: I0131 08:04:41.059767 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-c85b-account-create-update-7kdb6"] Jan 31 08:04:41 crc kubenswrapper[4826]: I0131 08:04:41.066824 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-bcjr8"] Jan 31 08:04:41 crc kubenswrapper[4826]: I0131 08:04:41.073797 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-5029-account-create-update-h2wlb"] Jan 31 08:04:41 crc kubenswrapper[4826]: I0131 08:04:41.080645 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-42b8-account-create-update-tm77r"] Jan 31 08:04:41 crc kubenswrapper[4826]: I0131 08:04:41.087860 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-dl84z"] Jan 31 08:04:41 crc kubenswrapper[4826]: I0131 08:04:41.096231 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-c85b-account-create-update-7kdb6"] Jan 31 08:04:41 crc kubenswrapper[4826]: I0131 08:04:41.103124 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-bcjr8"] Jan 31 08:04:41 crc kubenswrapper[4826]: I0131 08:04:41.809680 4826 scope.go:117] "RemoveContainer" containerID="bf69e7db4a04328e91a880deded774211ce3a37179119b4a3e4db7929cefe4ea" Jan 31 08:04:41 crc kubenswrapper[4826]: E0131 08:04:41.810471 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:04:42 crc kubenswrapper[4826]: I0131 08:04:42.821580 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24c647ed-ef58-46b5-a994-0cff67c161cc" path="/var/lib/kubelet/pods/24c647ed-ef58-46b5-a994-0cff67c161cc/volumes" Jan 31 08:04:42 crc kubenswrapper[4826]: I0131 08:04:42.823354 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2deb97a0-5661-4b3f-b0b1-84162e452fa2" path="/var/lib/kubelet/pods/2deb97a0-5661-4b3f-b0b1-84162e452fa2/volumes" Jan 31 08:04:42 crc kubenswrapper[4826]: I0131 08:04:42.824141 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65b55564-5b4c-48e4-958c-33a815964af3" path="/var/lib/kubelet/pods/65b55564-5b4c-48e4-958c-33a815964af3/volumes" Jan 31 08:04:42 crc kubenswrapper[4826]: I0131 08:04:42.824851 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd771d42-12ea-494a-805a-90133b43e0c3" 
path="/var/lib/kubelet/pods/cd771d42-12ea-494a-805a-90133b43e0c3/volumes" Jan 31 08:04:42 crc kubenswrapper[4826]: I0131 08:04:42.826305 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f81eab18-a2c9-4435-aaad-98f5c7666fb2" path="/var/lib/kubelet/pods/f81eab18-a2c9-4435-aaad-98f5c7666fb2/volumes" Jan 31 08:04:45 crc kubenswrapper[4826]: I0131 08:04:45.001521 4826 generic.go:334] "Generic (PLEG): container finished" podID="ad5a3527-00c3-4ad0-bc80-884479558924" containerID="22dc4b0a039b5d80b0c1dda03e56a49c3df336b5d03402f6beeec466ee119e69" exitCode=0 Jan 31 08:04:45 crc kubenswrapper[4826]: I0131 08:04:45.001648 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fppv7" event={"ID":"ad5a3527-00c3-4ad0-bc80-884479558924","Type":"ContainerDied","Data":"22dc4b0a039b5d80b0c1dda03e56a49c3df336b5d03402f6beeec466ee119e69"} Jan 31 08:04:46 crc kubenswrapper[4826]: I0131 08:04:46.403239 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fppv7" Jan 31 08:04:46 crc kubenswrapper[4826]: I0131 08:04:46.576821 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ad5a3527-00c3-4ad0-bc80-884479558924-ssh-key-openstack-edpm-ipam\") pod \"ad5a3527-00c3-4ad0-bc80-884479558924\" (UID: \"ad5a3527-00c3-4ad0-bc80-884479558924\") " Jan 31 08:04:46 crc kubenswrapper[4826]: I0131 08:04:46.577102 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad5a3527-00c3-4ad0-bc80-884479558924-inventory\") pod \"ad5a3527-00c3-4ad0-bc80-884479558924\" (UID: \"ad5a3527-00c3-4ad0-bc80-884479558924\") " Jan 31 08:04:46 crc kubenswrapper[4826]: I0131 08:04:46.577215 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wmrs\" (UniqueName: \"kubernetes.io/projected/ad5a3527-00c3-4ad0-bc80-884479558924-kube-api-access-6wmrs\") pod \"ad5a3527-00c3-4ad0-bc80-884479558924\" (UID: \"ad5a3527-00c3-4ad0-bc80-884479558924\") " Jan 31 08:04:46 crc kubenswrapper[4826]: I0131 08:04:46.586246 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad5a3527-00c3-4ad0-bc80-884479558924-kube-api-access-6wmrs" (OuterVolumeSpecName: "kube-api-access-6wmrs") pod "ad5a3527-00c3-4ad0-bc80-884479558924" (UID: "ad5a3527-00c3-4ad0-bc80-884479558924"). InnerVolumeSpecName "kube-api-access-6wmrs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:04:46 crc kubenswrapper[4826]: I0131 08:04:46.629388 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad5a3527-00c3-4ad0-bc80-884479558924-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ad5a3527-00c3-4ad0-bc80-884479558924" (UID: "ad5a3527-00c3-4ad0-bc80-884479558924"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:04:46 crc kubenswrapper[4826]: I0131 08:04:46.629794 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad5a3527-00c3-4ad0-bc80-884479558924-inventory" (OuterVolumeSpecName: "inventory") pod "ad5a3527-00c3-4ad0-bc80-884479558924" (UID: "ad5a3527-00c3-4ad0-bc80-884479558924"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:04:46 crc kubenswrapper[4826]: I0131 08:04:46.679073 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wmrs\" (UniqueName: \"kubernetes.io/projected/ad5a3527-00c3-4ad0-bc80-884479558924-kube-api-access-6wmrs\") on node \"crc\" DevicePath \"\"" Jan 31 08:04:46 crc kubenswrapper[4826]: I0131 08:04:46.679111 4826 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ad5a3527-00c3-4ad0-bc80-884479558924-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 08:04:46 crc kubenswrapper[4826]: I0131 08:04:46.679124 4826 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad5a3527-00c3-4ad0-bc80-884479558924-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 08:04:47 crc kubenswrapper[4826]: I0131 08:04:47.017805 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fppv7" event={"ID":"ad5a3527-00c3-4ad0-bc80-884479558924","Type":"ContainerDied","Data":"e2df2645373c346a758e20859bc1afe72579f5fbba34f79ec8a7f6b9d5c4e31b"} Jan 31 08:04:47 crc kubenswrapper[4826]: I0131 08:04:47.018158 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2df2645373c346a758e20859bc1afe72579f5fbba34f79ec8a7f6b9d5c4e31b" Jan 31 08:04:47 crc kubenswrapper[4826]: I0131 08:04:47.017839 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fppv7" Jan 31 08:04:47 crc kubenswrapper[4826]: I0131 08:04:47.082869 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-26n2s"] Jan 31 08:04:47 crc kubenswrapper[4826]: E0131 08:04:47.083333 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad5a3527-00c3-4ad0-bc80-884479558924" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 31 08:04:47 crc kubenswrapper[4826]: I0131 08:04:47.083358 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad5a3527-00c3-4ad0-bc80-884479558924" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 31 08:04:47 crc kubenswrapper[4826]: I0131 08:04:47.083574 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad5a3527-00c3-4ad0-bc80-884479558924" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 31 08:04:47 crc kubenswrapper[4826]: I0131 08:04:47.084342 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-26n2s" Jan 31 08:04:47 crc kubenswrapper[4826]: I0131 08:04:47.086653 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 08:04:47 crc kubenswrapper[4826]: I0131 08:04:47.086695 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hbmks" Jan 31 08:04:47 crc kubenswrapper[4826]: I0131 08:04:47.088196 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 08:04:47 crc kubenswrapper[4826]: I0131 08:04:47.088534 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 08:04:47 crc kubenswrapper[4826]: I0131 08:04:47.101397 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-26n2s"] Jan 31 08:04:47 crc kubenswrapper[4826]: I0131 08:04:47.187146 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/455fb360-2d9b-4501-a640-2014364869d8-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-26n2s\" (UID: \"455fb360-2d9b-4501-a640-2014364869d8\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-26n2s" Jan 31 08:04:47 crc kubenswrapper[4826]: I0131 08:04:47.187184 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxrws\" (UniqueName: \"kubernetes.io/projected/455fb360-2d9b-4501-a640-2014364869d8-kube-api-access-mxrws\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-26n2s\" (UID: \"455fb360-2d9b-4501-a640-2014364869d8\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-26n2s" Jan 31 08:04:47 crc kubenswrapper[4826]: I0131 08:04:47.187206 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/455fb360-2d9b-4501-a640-2014364869d8-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-26n2s\" (UID: \"455fb360-2d9b-4501-a640-2014364869d8\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-26n2s" Jan 31 08:04:47 crc kubenswrapper[4826]: I0131 08:04:47.289726 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/455fb360-2d9b-4501-a640-2014364869d8-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-26n2s\" (UID: \"455fb360-2d9b-4501-a640-2014364869d8\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-26n2s" Jan 31 08:04:47 crc kubenswrapper[4826]: I0131 08:04:47.289780 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxrws\" (UniqueName: \"kubernetes.io/projected/455fb360-2d9b-4501-a640-2014364869d8-kube-api-access-mxrws\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-26n2s\" (UID: \"455fb360-2d9b-4501-a640-2014364869d8\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-26n2s" Jan 31 08:04:47 crc kubenswrapper[4826]: I0131 08:04:47.289813 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/455fb360-2d9b-4501-a640-2014364869d8-ssh-key-openstack-edpm-ipam\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-26n2s\" (UID: \"455fb360-2d9b-4501-a640-2014364869d8\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-26n2s" Jan 31 08:04:47 crc kubenswrapper[4826]: I0131 08:04:47.294740 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/455fb360-2d9b-4501-a640-2014364869d8-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-26n2s\" (UID: \"455fb360-2d9b-4501-a640-2014364869d8\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-26n2s" Jan 31 08:04:47 crc kubenswrapper[4826]: I0131 08:04:47.302579 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/455fb360-2d9b-4501-a640-2014364869d8-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-26n2s\" (UID: \"455fb360-2d9b-4501-a640-2014364869d8\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-26n2s" Jan 31 08:04:47 crc kubenswrapper[4826]: I0131 08:04:47.305867 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxrws\" (UniqueName: \"kubernetes.io/projected/455fb360-2d9b-4501-a640-2014364869d8-kube-api-access-mxrws\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-26n2s\" (UID: \"455fb360-2d9b-4501-a640-2014364869d8\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-26n2s" Jan 31 08:04:47 crc kubenswrapper[4826]: I0131 08:04:47.412888 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-26n2s" Jan 31 08:04:47 crc kubenswrapper[4826]: I0131 08:04:47.923610 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-26n2s"] Jan 31 08:04:48 crc kubenswrapper[4826]: I0131 08:04:48.027304 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-26n2s" event={"ID":"455fb360-2d9b-4501-a640-2014364869d8","Type":"ContainerStarted","Data":"ecdbe31bdd60fd66c8ddae2d331744a95b91106d15773589408ebc0b05284ecf"} Jan 31 08:04:52 crc kubenswrapper[4826]: I0131 08:04:52.809889 4826 scope.go:117] "RemoveContainer" containerID="bf69e7db4a04328e91a880deded774211ce3a37179119b4a3e4db7929cefe4ea" Jan 31 08:04:52 crc kubenswrapper[4826]: E0131 08:04:52.812394 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:04:55 crc kubenswrapper[4826]: I0131 08:04:55.099996 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-26n2s" event={"ID":"455fb360-2d9b-4501-a640-2014364869d8","Type":"ContainerStarted","Data":"1387c5bcb998ec51467039c29f417d51e53d5911d8d3536271a006b9a9f6f810"} Jan 31 08:04:55 crc kubenswrapper[4826]: I0131 08:04:55.127663 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-26n2s" podStartSLOduration=1.7058899589999998 podStartE2EDuration="8.12764458s" podCreationTimestamp="2026-01-31 08:04:47 +0000 UTC" 
firstStartedPulling="2026-01-31 08:04:47.935303847 +0000 UTC m=+1719.789190206" lastFinishedPulling="2026-01-31 08:04:54.357058428 +0000 UTC m=+1726.210944827" observedRunningTime="2026-01-31 08:04:55.117629835 +0000 UTC m=+1726.971516194" watchObservedRunningTime="2026-01-31 08:04:55.12764458 +0000 UTC m=+1726.981530949" Jan 31 08:05:04 crc kubenswrapper[4826]: I0131 08:05:04.197906 4826 generic.go:334] "Generic (PLEG): container finished" podID="455fb360-2d9b-4501-a640-2014364869d8" containerID="1387c5bcb998ec51467039c29f417d51e53d5911d8d3536271a006b9a9f6f810" exitCode=0 Jan 31 08:05:04 crc kubenswrapper[4826]: I0131 08:05:04.198070 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-26n2s" event={"ID":"455fb360-2d9b-4501-a640-2014364869d8","Type":"ContainerDied","Data":"1387c5bcb998ec51467039c29f417d51e53d5911d8d3536271a006b9a9f6f810"} Jan 31 08:05:05 crc kubenswrapper[4826]: I0131 08:05:05.583090 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-26n2s" Jan 31 08:05:05 crc kubenswrapper[4826]: I0131 08:05:05.760427 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/455fb360-2d9b-4501-a640-2014364869d8-inventory\") pod \"455fb360-2d9b-4501-a640-2014364869d8\" (UID: \"455fb360-2d9b-4501-a640-2014364869d8\") " Jan 31 08:05:05 crc kubenswrapper[4826]: I0131 08:05:05.760598 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxrws\" (UniqueName: \"kubernetes.io/projected/455fb360-2d9b-4501-a640-2014364869d8-kube-api-access-mxrws\") pod \"455fb360-2d9b-4501-a640-2014364869d8\" (UID: \"455fb360-2d9b-4501-a640-2014364869d8\") " Jan 31 08:05:05 crc kubenswrapper[4826]: I0131 08:05:05.760645 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/455fb360-2d9b-4501-a640-2014364869d8-ssh-key-openstack-edpm-ipam\") pod \"455fb360-2d9b-4501-a640-2014364869d8\" (UID: \"455fb360-2d9b-4501-a640-2014364869d8\") " Jan 31 08:05:05 crc kubenswrapper[4826]: I0131 08:05:05.766864 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/455fb360-2d9b-4501-a640-2014364869d8-kube-api-access-mxrws" (OuterVolumeSpecName: "kube-api-access-mxrws") pod "455fb360-2d9b-4501-a640-2014364869d8" (UID: "455fb360-2d9b-4501-a640-2014364869d8"). InnerVolumeSpecName "kube-api-access-mxrws". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:05:05 crc kubenswrapper[4826]: I0131 08:05:05.786937 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/455fb360-2d9b-4501-a640-2014364869d8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "455fb360-2d9b-4501-a640-2014364869d8" (UID: "455fb360-2d9b-4501-a640-2014364869d8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:05:05 crc kubenswrapper[4826]: I0131 08:05:05.789200 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/455fb360-2d9b-4501-a640-2014364869d8-inventory" (OuterVolumeSpecName: "inventory") pod "455fb360-2d9b-4501-a640-2014364869d8" (UID: "455fb360-2d9b-4501-a640-2014364869d8"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:05:05 crc kubenswrapper[4826]: I0131 08:05:05.809076 4826 scope.go:117] "RemoveContainer" containerID="bf69e7db4a04328e91a880deded774211ce3a37179119b4a3e4db7929cefe4ea" Jan 31 08:05:05 crc kubenswrapper[4826]: E0131 08:05:05.809368 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:05:05 crc kubenswrapper[4826]: I0131 08:05:05.864275 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxrws\" (UniqueName: \"kubernetes.io/projected/455fb360-2d9b-4501-a640-2014364869d8-kube-api-access-mxrws\") on node \"crc\" DevicePath \"\"" Jan 31 08:05:05 crc kubenswrapper[4826]: I0131 08:05:05.864304 4826 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/455fb360-2d9b-4501-a640-2014364869d8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 08:05:05 crc kubenswrapper[4826]: I0131 08:05:05.864315 4826 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/455fb360-2d9b-4501-a640-2014364869d8-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 08:05:06 crc kubenswrapper[4826]: I0131 08:05:06.216676 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-26n2s" event={"ID":"455fb360-2d9b-4501-a640-2014364869d8","Type":"ContainerDied","Data":"ecdbe31bdd60fd66c8ddae2d331744a95b91106d15773589408ebc0b05284ecf"} Jan 31 08:05:06 crc kubenswrapper[4826]: I0131 08:05:06.216740 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecdbe31bdd60fd66c8ddae2d331744a95b91106d15773589408ebc0b05284ecf" Jan 31 08:05:06 crc kubenswrapper[4826]: I0131 08:05:06.216750 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-26n2s" Jan 31 08:05:11 crc kubenswrapper[4826]: I0131 08:05:11.038175 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-l4j5z"] Jan 31 08:05:11 crc kubenswrapper[4826]: I0131 08:05:11.045783 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-l4j5z"] Jan 31 08:05:12 crc kubenswrapper[4826]: I0131 08:05:12.842982 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8baa3438-0c11-4a8c-b397-85247a6252c1" path="/var/lib/kubelet/pods/8baa3438-0c11-4a8c-b397-85247a6252c1/volumes" Jan 31 08:05:16 crc kubenswrapper[4826]: I0131 08:05:16.809407 4826 scope.go:117] "RemoveContainer" containerID="bf69e7db4a04328e91a880deded774211ce3a37179119b4a3e4db7929cefe4ea" Jan 31 08:05:16 crc kubenswrapper[4826]: E0131 08:05:16.810317 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:05:27 crc kubenswrapper[4826]: I0131 08:05:27.814805 4826 scope.go:117] "RemoveContainer" containerID="bf69e7db4a04328e91a880deded774211ce3a37179119b4a3e4db7929cefe4ea" Jan 31 08:05:27 crc kubenswrapper[4826]: E0131 08:05:27.816331 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:05:31 crc kubenswrapper[4826]: I0131 08:05:31.504018 4826 scope.go:117] "RemoveContainer" containerID="e375b0197ffa51a3a7a69f241c55205daa768004278c6e241098c50922fa0923" Jan 31 08:05:31 crc kubenswrapper[4826]: I0131 08:05:31.535306 4826 scope.go:117] "RemoveContainer" containerID="18f0b800ce8addb3708e02e5fe72bfa09726ffc4b63bb36cdb5098bd2d0e3390" Jan 31 08:05:31 crc kubenswrapper[4826]: I0131 08:05:31.573101 4826 scope.go:117] "RemoveContainer" containerID="ff4d1fbd5b1b5701a4d74c3fb7fe2d270fa59e6988caf318f4b6fcd175da6b0d" Jan 31 08:05:31 crc kubenswrapper[4826]: I0131 08:05:31.607294 4826 scope.go:117] "RemoveContainer" containerID="142e6fc54b1d556e8615b29228030913bfa9bf6e5e794e62616c9c16b9179162" Jan 31 08:05:31 crc kubenswrapper[4826]: I0131 08:05:31.664347 4826 scope.go:117] "RemoveContainer" containerID="b65790feb94c49780bfc5da71cf55a52ddc65d6e856154463be13e8c65640638" Jan 31 08:05:31 crc kubenswrapper[4826]: I0131 08:05:31.699182 4826 scope.go:117] "RemoveContainer" containerID="1b281edfb48cfd659fc2422d95860172600ff946e274e48b862976d33755c91e" Jan 31 08:05:31 crc kubenswrapper[4826]: I0131 08:05:31.730742 4826 scope.go:117] "RemoveContainer" containerID="bbc42394c4dd588b491417a8215cd6b56107c6f164776f0828370c82e7beaea1" Jan 31 08:05:35 crc kubenswrapper[4826]: I0131 08:05:35.040460 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-vlfm5"] Jan 31 08:05:35 crc kubenswrapper[4826]: I0131 08:05:35.048090 4826 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/nova-cell0-cell-mapping-vlfm5"] Jan 31 08:05:36 crc kubenswrapper[4826]: I0131 08:05:36.822068 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="883150ac-2f32-44c0-af19-2d5b94f385eb" path="/var/lib/kubelet/pods/883150ac-2f32-44c0-af19-2d5b94f385eb/volumes" Jan 31 08:05:41 crc kubenswrapper[4826]: I0131 08:05:41.061615 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-llhxq"] Jan 31 08:05:41 crc kubenswrapper[4826]: I0131 08:05:41.069224 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-llhxq"] Jan 31 08:05:42 crc kubenswrapper[4826]: I0131 08:05:42.809393 4826 scope.go:117] "RemoveContainer" containerID="bf69e7db4a04328e91a880deded774211ce3a37179119b4a3e4db7929cefe4ea" Jan 31 08:05:42 crc kubenswrapper[4826]: E0131 08:05:42.810450 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:05:42 crc kubenswrapper[4826]: I0131 08:05:42.819881 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a5eb55e-27a4-4e01-b087-590ba6ff5421" path="/var/lib/kubelet/pods/2a5eb55e-27a4-4e01-b087-590ba6ff5421/volumes" Jan 31 08:05:54 crc kubenswrapper[4826]: I0131 08:05:54.808494 4826 scope.go:117] "RemoveContainer" containerID="bf69e7db4a04328e91a880deded774211ce3a37179119b4a3e4db7929cefe4ea" Jan 31 08:05:54 crc kubenswrapper[4826]: E0131 08:05:54.809406 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:06:05 crc kubenswrapper[4826]: I0131 08:06:05.809556 4826 scope.go:117] "RemoveContainer" containerID="bf69e7db4a04328e91a880deded774211ce3a37179119b4a3e4db7929cefe4ea" Jan 31 08:06:05 crc kubenswrapper[4826]: E0131 08:06:05.810420 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:06:19 crc kubenswrapper[4826]: I0131 08:06:19.809799 4826 scope.go:117] "RemoveContainer" containerID="bf69e7db4a04328e91a880deded774211ce3a37179119b4a3e4db7929cefe4ea" Jan 31 08:06:19 crc kubenswrapper[4826]: E0131 08:06:19.810999 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" 
podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:06:20 crc kubenswrapper[4826]: I0131 08:06:20.042145 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-lvhwh"] Jan 31 08:06:20 crc kubenswrapper[4826]: I0131 08:06:20.051258 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-lvhwh"] Jan 31 08:06:20 crc kubenswrapper[4826]: I0131 08:06:20.821265 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c688d561-57f0-42dd-9559-ca31e0086d13" path="/var/lib/kubelet/pods/c688d561-57f0-42dd-9559-ca31e0086d13/volumes" Jan 31 08:06:30 crc kubenswrapper[4826]: I0131 08:06:30.809333 4826 scope.go:117] "RemoveContainer" containerID="bf69e7db4a04328e91a880deded774211ce3a37179119b4a3e4db7929cefe4ea" Jan 31 08:06:30 crc kubenswrapper[4826]: E0131 08:06:30.810222 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:06:31 crc kubenswrapper[4826]: I0131 08:06:31.871387 4826 scope.go:117] "RemoveContainer" containerID="55a3ca03a956bc935cc2a85cc654217e356a65297f73ce3f8e6d29ed6b83bb20" Jan 31 08:06:31 crc kubenswrapper[4826]: I0131 08:06:31.915919 4826 scope.go:117] "RemoveContainer" containerID="17247f34afb957b4a609780bce8bb7b71620028e3bf0838bc8f96e5b0ae52c94" Jan 31 08:06:31 crc kubenswrapper[4826]: I0131 08:06:31.961712 4826 scope.go:117] "RemoveContainer" containerID="0b4199b6aa9ce1136e187c461a738802c4ef1857f1e254384144d7272aa059f7" Jan 31 08:06:45 crc kubenswrapper[4826]: I0131 08:06:45.809560 4826 scope.go:117] "RemoveContainer" containerID="bf69e7db4a04328e91a880deded774211ce3a37179119b4a3e4db7929cefe4ea" Jan 31 08:06:45 crc kubenswrapper[4826]: E0131 08:06:45.810525 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:06:59 crc kubenswrapper[4826]: I0131 08:06:59.810065 4826 scope.go:117] "RemoveContainer" containerID="bf69e7db4a04328e91a880deded774211ce3a37179119b4a3e4db7929cefe4ea" Jan 31 08:06:59 crc kubenswrapper[4826]: E0131 08:06:59.812754 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:07:14 crc kubenswrapper[4826]: I0131 08:07:14.810203 4826 scope.go:117] "RemoveContainer" containerID="bf69e7db4a04328e91a880deded774211ce3a37179119b4a3e4db7929cefe4ea" Jan 31 08:07:14 crc kubenswrapper[4826]: E0131 08:07:14.810849 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:07:27 crc kubenswrapper[4826]: I0131 08:07:27.809109 4826 scope.go:117] "RemoveContainer" containerID="bf69e7db4a04328e91a880deded774211ce3a37179119b4a3e4db7929cefe4ea" Jan 31 08:07:27 crc kubenswrapper[4826]: E0131 08:07:27.809912 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:07:39 crc kubenswrapper[4826]: I0131 08:07:39.808710 4826 scope.go:117] "RemoveContainer" containerID="bf69e7db4a04328e91a880deded774211ce3a37179119b4a3e4db7929cefe4ea" Jan 31 08:07:39 crc kubenswrapper[4826]: E0131 08:07:39.810086 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:07:43 crc kubenswrapper[4826]: I0131 08:07:43.625763 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5czjt"] Jan 31 08:07:43 crc kubenswrapper[4826]: E0131 08:07:43.626557 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="455fb360-2d9b-4501-a640-2014364869d8" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 31 08:07:43 crc kubenswrapper[4826]: I0131 08:07:43.626569 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="455fb360-2d9b-4501-a640-2014364869d8" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 31 08:07:43 crc kubenswrapper[4826]: I0131 08:07:43.626736 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="455fb360-2d9b-4501-a640-2014364869d8" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 31 08:07:43 crc kubenswrapper[4826]: I0131 08:07:43.627841 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5czjt" Jan 31 08:07:43 crc kubenswrapper[4826]: I0131 08:07:43.642368 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5czjt"] Jan 31 08:07:43 crc kubenswrapper[4826]: I0131 08:07:43.741553 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19f335dc-ea2b-4f73-8e00-44eea16bac71-catalog-content\") pod \"redhat-marketplace-5czjt\" (UID: \"19f335dc-ea2b-4f73-8e00-44eea16bac71\") " pod="openshift-marketplace/redhat-marketplace-5czjt" Jan 31 08:07:43 crc kubenswrapper[4826]: I0131 08:07:43.741657 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19f335dc-ea2b-4f73-8e00-44eea16bac71-utilities\") pod \"redhat-marketplace-5czjt\" (UID: \"19f335dc-ea2b-4f73-8e00-44eea16bac71\") " pod="openshift-marketplace/redhat-marketplace-5czjt" Jan 31 08:07:43 crc kubenswrapper[4826]: I0131 08:07:43.741692 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88vf5\" (UniqueName: \"kubernetes.io/projected/19f335dc-ea2b-4f73-8e00-44eea16bac71-kube-api-access-88vf5\") pod \"redhat-marketplace-5czjt\" (UID: \"19f335dc-ea2b-4f73-8e00-44eea16bac71\") " pod="openshift-marketplace/redhat-marketplace-5czjt" Jan 31 08:07:43 crc kubenswrapper[4826]: I0131 08:07:43.822376 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hkdqf"] Jan 31 08:07:43 crc kubenswrapper[4826]: I0131 08:07:43.824277 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hkdqf" Jan 31 08:07:43 crc kubenswrapper[4826]: I0131 08:07:43.836475 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hkdqf"] Jan 31 08:07:43 crc kubenswrapper[4826]: I0131 08:07:43.843823 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19f335dc-ea2b-4f73-8e00-44eea16bac71-catalog-content\") pod \"redhat-marketplace-5czjt\" (UID: \"19f335dc-ea2b-4f73-8e00-44eea16bac71\") " pod="openshift-marketplace/redhat-marketplace-5czjt" Jan 31 08:07:43 crc kubenswrapper[4826]: I0131 08:07:43.843909 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19f335dc-ea2b-4f73-8e00-44eea16bac71-utilities\") pod \"redhat-marketplace-5czjt\" (UID: \"19f335dc-ea2b-4f73-8e00-44eea16bac71\") " pod="openshift-marketplace/redhat-marketplace-5czjt" Jan 31 08:07:43 crc kubenswrapper[4826]: I0131 08:07:43.843955 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88vf5\" (UniqueName: \"kubernetes.io/projected/19f335dc-ea2b-4f73-8e00-44eea16bac71-kube-api-access-88vf5\") pod \"redhat-marketplace-5czjt\" (UID: \"19f335dc-ea2b-4f73-8e00-44eea16bac71\") " pod="openshift-marketplace/redhat-marketplace-5czjt" Jan 31 08:07:43 crc kubenswrapper[4826]: I0131 08:07:43.844381 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19f335dc-ea2b-4f73-8e00-44eea16bac71-catalog-content\") pod \"redhat-marketplace-5czjt\" (UID: \"19f335dc-ea2b-4f73-8e00-44eea16bac71\") " 
pod="openshift-marketplace/redhat-marketplace-5czjt" Jan 31 08:07:43 crc kubenswrapper[4826]: I0131 08:07:43.844491 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19f335dc-ea2b-4f73-8e00-44eea16bac71-utilities\") pod \"redhat-marketplace-5czjt\" (UID: \"19f335dc-ea2b-4f73-8e00-44eea16bac71\") " pod="openshift-marketplace/redhat-marketplace-5czjt" Jan 31 08:07:43 crc kubenswrapper[4826]: I0131 08:07:43.885063 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88vf5\" (UniqueName: \"kubernetes.io/projected/19f335dc-ea2b-4f73-8e00-44eea16bac71-kube-api-access-88vf5\") pod \"redhat-marketplace-5czjt\" (UID: \"19f335dc-ea2b-4f73-8e00-44eea16bac71\") " pod="openshift-marketplace/redhat-marketplace-5czjt" Jan 31 08:07:43 crc kubenswrapper[4826]: I0131 08:07:43.945330 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eed566a6-01e0-4b0a-9469-4103b45716f4-catalog-content\") pod \"redhat-operators-hkdqf\" (UID: \"eed566a6-01e0-4b0a-9469-4103b45716f4\") " pod="openshift-marketplace/redhat-operators-hkdqf" Jan 31 08:07:43 crc kubenswrapper[4826]: I0131 08:07:43.946195 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqnq5\" (UniqueName: \"kubernetes.io/projected/eed566a6-01e0-4b0a-9469-4103b45716f4-kube-api-access-dqnq5\") pod \"redhat-operators-hkdqf\" (UID: \"eed566a6-01e0-4b0a-9469-4103b45716f4\") " pod="openshift-marketplace/redhat-operators-hkdqf" Jan 31 08:07:43 crc kubenswrapper[4826]: I0131 08:07:43.946334 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eed566a6-01e0-4b0a-9469-4103b45716f4-utilities\") pod \"redhat-operators-hkdqf\" (UID: \"eed566a6-01e0-4b0a-9469-4103b45716f4\") " pod="openshift-marketplace/redhat-operators-hkdqf" Jan 31 08:07:43 crc kubenswrapper[4826]: I0131 08:07:43.954396 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5czjt" Jan 31 08:07:44 crc kubenswrapper[4826]: I0131 08:07:44.050721 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eed566a6-01e0-4b0a-9469-4103b45716f4-catalog-content\") pod \"redhat-operators-hkdqf\" (UID: \"eed566a6-01e0-4b0a-9469-4103b45716f4\") " pod="openshift-marketplace/redhat-operators-hkdqf" Jan 31 08:07:44 crc kubenswrapper[4826]: I0131 08:07:44.051119 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqnq5\" (UniqueName: \"kubernetes.io/projected/eed566a6-01e0-4b0a-9469-4103b45716f4-kube-api-access-dqnq5\") pod \"redhat-operators-hkdqf\" (UID: \"eed566a6-01e0-4b0a-9469-4103b45716f4\") " pod="openshift-marketplace/redhat-operators-hkdqf" Jan 31 08:07:44 crc kubenswrapper[4826]: I0131 08:07:44.051198 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eed566a6-01e0-4b0a-9469-4103b45716f4-utilities\") pod \"redhat-operators-hkdqf\" (UID: \"eed566a6-01e0-4b0a-9469-4103b45716f4\") " pod="openshift-marketplace/redhat-operators-hkdqf" Jan 31 08:07:44 crc kubenswrapper[4826]: I0131 08:07:44.051470 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eed566a6-01e0-4b0a-9469-4103b45716f4-catalog-content\") pod \"redhat-operators-hkdqf\" (UID: \"eed566a6-01e0-4b0a-9469-4103b45716f4\") " pod="openshift-marketplace/redhat-operators-hkdqf" Jan 31 08:07:44 crc kubenswrapper[4826]: I0131 08:07:44.051631 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eed566a6-01e0-4b0a-9469-4103b45716f4-utilities\") pod \"redhat-operators-hkdqf\" (UID: \"eed566a6-01e0-4b0a-9469-4103b45716f4\") " pod="openshift-marketplace/redhat-operators-hkdqf" Jan 31 08:07:44 crc kubenswrapper[4826]: I0131 08:07:44.079229 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqnq5\" (UniqueName: \"kubernetes.io/projected/eed566a6-01e0-4b0a-9469-4103b45716f4-kube-api-access-dqnq5\") pod \"redhat-operators-hkdqf\" (UID: \"eed566a6-01e0-4b0a-9469-4103b45716f4\") " pod="openshift-marketplace/redhat-operators-hkdqf" Jan 31 08:07:44 crc kubenswrapper[4826]: I0131 08:07:44.139414 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hkdqf" Jan 31 08:07:44 crc kubenswrapper[4826]: I0131 08:07:44.421270 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5czjt"] Jan 31 08:07:44 crc kubenswrapper[4826]: W0131 08:07:44.604902 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeed566a6_01e0_4b0a_9469_4103b45716f4.slice/crio-74b0bb1cfed7fd3bccd39858daa61f0c957dabf5f52f583a28824101f86b3987 WatchSource:0}: Error finding container 74b0bb1cfed7fd3bccd39858daa61f0c957dabf5f52f583a28824101f86b3987: Status 404 returned error can't find the container with id 74b0bb1cfed7fd3bccd39858daa61f0c957dabf5f52f583a28824101f86b3987 Jan 31 08:07:44 crc kubenswrapper[4826]: I0131 08:07:44.608050 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hkdqf"] Jan 31 08:07:44 crc kubenswrapper[4826]: I0131 08:07:44.685816 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hkdqf" event={"ID":"eed566a6-01e0-4b0a-9469-4103b45716f4","Type":"ContainerStarted","Data":"74b0bb1cfed7fd3bccd39858daa61f0c957dabf5f52f583a28824101f86b3987"} Jan 31 08:07:44 crc kubenswrapper[4826]: I0131 08:07:44.688263 4826 generic.go:334] "Generic (PLEG): container finished" podID="19f335dc-ea2b-4f73-8e00-44eea16bac71" containerID="7cb0ce0fed71e477f145381a92e688f0509e6197352b346001707c707dffe081" exitCode=0 Jan 31 08:07:44 crc kubenswrapper[4826]: I0131 08:07:44.688315 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5czjt" event={"ID":"19f335dc-ea2b-4f73-8e00-44eea16bac71","Type":"ContainerDied","Data":"7cb0ce0fed71e477f145381a92e688f0509e6197352b346001707c707dffe081"} Jan 31 08:07:44 crc kubenswrapper[4826]: I0131 08:07:44.688347 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5czjt" event={"ID":"19f335dc-ea2b-4f73-8e00-44eea16bac71","Type":"ContainerStarted","Data":"26503a257bf230ca5b0590ea15cc37f56c438bd0972886769c6751b06b00f4d8"} Jan 31 08:07:44 crc kubenswrapper[4826]: I0131 08:07:44.690704 4826 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 08:07:45 crc kubenswrapper[4826]: I0131 08:07:45.700092 4826 generic.go:334] "Generic (PLEG): container finished" podID="eed566a6-01e0-4b0a-9469-4103b45716f4" containerID="91c2d55e96b41a9ace0450bca22375caca4efb48326090a7f725bbff3a407b6d" exitCode=0 Jan 31 08:07:45 crc kubenswrapper[4826]: I0131 08:07:45.700202 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hkdqf" event={"ID":"eed566a6-01e0-4b0a-9469-4103b45716f4","Type":"ContainerDied","Data":"91c2d55e96b41a9ace0450bca22375caca4efb48326090a7f725bbff3a407b6d"} Jan 31 08:07:46 crc kubenswrapper[4826]: I0131 08:07:46.829062 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4f8sc"] Jan 31 08:07:46 crc kubenswrapper[4826]: I0131 08:07:46.832926 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4f8sc" Jan 31 08:07:46 crc kubenswrapper[4826]: I0131 08:07:46.840293 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4f8sc"] Jan 31 08:07:46 crc kubenswrapper[4826]: I0131 08:07:46.915930 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1cad763-f535-4f78-a67a-b1b3d5ab3383-utilities\") pod \"certified-operators-4f8sc\" (UID: \"b1cad763-f535-4f78-a67a-b1b3d5ab3383\") " pod="openshift-marketplace/certified-operators-4f8sc" Jan 31 08:07:46 crc kubenswrapper[4826]: I0131 08:07:46.916042 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1cad763-f535-4f78-a67a-b1b3d5ab3383-catalog-content\") pod \"certified-operators-4f8sc\" (UID: \"b1cad763-f535-4f78-a67a-b1b3d5ab3383\") " pod="openshift-marketplace/certified-operators-4f8sc" Jan 31 08:07:46 crc kubenswrapper[4826]: I0131 08:07:46.916161 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf5gp\" (UniqueName: \"kubernetes.io/projected/b1cad763-f535-4f78-a67a-b1b3d5ab3383-kube-api-access-lf5gp\") pod \"certified-operators-4f8sc\" (UID: \"b1cad763-f535-4f78-a67a-b1b3d5ab3383\") " pod="openshift-marketplace/certified-operators-4f8sc" Jan 31 08:07:47 crc kubenswrapper[4826]: I0131 08:07:47.017913 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1cad763-f535-4f78-a67a-b1b3d5ab3383-utilities\") pod \"certified-operators-4f8sc\" (UID: \"b1cad763-f535-4f78-a67a-b1b3d5ab3383\") " pod="openshift-marketplace/certified-operators-4f8sc" Jan 31 08:07:47 crc kubenswrapper[4826]: I0131 08:07:47.017985 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1cad763-f535-4f78-a67a-b1b3d5ab3383-catalog-content\") pod \"certified-operators-4f8sc\" (UID: \"b1cad763-f535-4f78-a67a-b1b3d5ab3383\") " pod="openshift-marketplace/certified-operators-4f8sc" Jan 31 08:07:47 crc kubenswrapper[4826]: I0131 08:07:47.018034 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lf5gp\" (UniqueName: \"kubernetes.io/projected/b1cad763-f535-4f78-a67a-b1b3d5ab3383-kube-api-access-lf5gp\") pod \"certified-operators-4f8sc\" (UID: \"b1cad763-f535-4f78-a67a-b1b3d5ab3383\") " pod="openshift-marketplace/certified-operators-4f8sc" Jan 31 08:07:47 crc kubenswrapper[4826]: I0131 08:07:47.018595 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1cad763-f535-4f78-a67a-b1b3d5ab3383-utilities\") pod \"certified-operators-4f8sc\" (UID: \"b1cad763-f535-4f78-a67a-b1b3d5ab3383\") " pod="openshift-marketplace/certified-operators-4f8sc" Jan 31 08:07:47 crc kubenswrapper[4826]: I0131 08:07:47.018780 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1cad763-f535-4f78-a67a-b1b3d5ab3383-catalog-content\") pod \"certified-operators-4f8sc\" (UID: \"b1cad763-f535-4f78-a67a-b1b3d5ab3383\") " pod="openshift-marketplace/certified-operators-4f8sc" Jan 31 08:07:47 crc kubenswrapper[4826]: I0131 08:07:47.041987 4826 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-lf5gp\" (UniqueName: \"kubernetes.io/projected/b1cad763-f535-4f78-a67a-b1b3d5ab3383-kube-api-access-lf5gp\") pod \"certified-operators-4f8sc\" (UID: \"b1cad763-f535-4f78-a67a-b1b3d5ab3383\") " pod="openshift-marketplace/certified-operators-4f8sc" Jan 31 08:07:47 crc kubenswrapper[4826]: I0131 08:07:47.174231 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4f8sc" Jan 31 08:07:47 crc kubenswrapper[4826]: I0131 08:07:47.721608 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hkdqf" event={"ID":"eed566a6-01e0-4b0a-9469-4103b45716f4","Type":"ContainerStarted","Data":"ab8d3c9d422f0a4d36f195b9f98e672a4c03406c83c4a15864bede64aadcd4ce"} Jan 31 08:07:47 crc kubenswrapper[4826]: I0131 08:07:47.723863 4826 generic.go:334] "Generic (PLEG): container finished" podID="19f335dc-ea2b-4f73-8e00-44eea16bac71" containerID="bb44de3c82c70ec41f4504f4611c0994f04495f14ec29d293a8a67b9dad3d5c3" exitCode=0 Jan 31 08:07:47 crc kubenswrapper[4826]: I0131 08:07:47.723902 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5czjt" event={"ID":"19f335dc-ea2b-4f73-8e00-44eea16bac71","Type":"ContainerDied","Data":"bb44de3c82c70ec41f4504f4611c0994f04495f14ec29d293a8a67b9dad3d5c3"} Jan 31 08:07:47 crc kubenswrapper[4826]: I0131 08:07:47.740291 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4f8sc"] Jan 31 08:07:48 crc kubenswrapper[4826]: I0131 08:07:48.733166 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4f8sc" event={"ID":"b1cad763-f535-4f78-a67a-b1b3d5ab3383","Type":"ContainerStarted","Data":"8b9dc039f99152f5606f9a5b9be319d69c47f5955d654c3302c5956d17f0fdfe"} Jan 31 08:07:53 crc kubenswrapper[4826]: I0131 08:07:53.809440 4826 scope.go:117] "RemoveContainer" containerID="bf69e7db4a04328e91a880deded774211ce3a37179119b4a3e4db7929cefe4ea" Jan 31 08:07:53 crc kubenswrapper[4826]: E0131 08:07:53.810031 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:07:54 crc kubenswrapper[4826]: I0131 08:07:54.782499 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4f8sc" event={"ID":"b1cad763-f535-4f78-a67a-b1b3d5ab3383","Type":"ContainerStarted","Data":"3868fbdc5e0cb45583951d21a3f4e9ab39bc8d604aa341ed609d4d83574162db"} Jan 31 08:07:54 crc kubenswrapper[4826]: I0131 08:07:54.785116 4826 generic.go:334] "Generic (PLEG): container finished" podID="eed566a6-01e0-4b0a-9469-4103b45716f4" containerID="ab8d3c9d422f0a4d36f195b9f98e672a4c03406c83c4a15864bede64aadcd4ce" exitCode=0 Jan 31 08:07:54 crc kubenswrapper[4826]: I0131 08:07:54.785166 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hkdqf" event={"ID":"eed566a6-01e0-4b0a-9469-4103b45716f4","Type":"ContainerDied","Data":"ab8d3c9d422f0a4d36f195b9f98e672a4c03406c83c4a15864bede64aadcd4ce"} Jan 31 08:07:55 crc kubenswrapper[4826]: I0131 08:07:55.795369 4826 generic.go:334] "Generic 
(PLEG): container finished" podID="b1cad763-f535-4f78-a67a-b1b3d5ab3383" containerID="3868fbdc5e0cb45583951d21a3f4e9ab39bc8d604aa341ed609d4d83574162db" exitCode=0 Jan 31 08:07:55 crc kubenswrapper[4826]: I0131 08:07:55.795421 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4f8sc" event={"ID":"b1cad763-f535-4f78-a67a-b1b3d5ab3383","Type":"ContainerDied","Data":"3868fbdc5e0cb45583951d21a3f4e9ab39bc8d604aa341ed609d4d83574162db"} Jan 31 08:07:59 crc kubenswrapper[4826]: I0131 08:07:59.839957 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5czjt" event={"ID":"19f335dc-ea2b-4f73-8e00-44eea16bac71","Type":"ContainerStarted","Data":"31d30a0d4287d1253f7c878329a4acecc0cd87d29934400bde1f6547c589c05b"} Jan 31 08:07:59 crc kubenswrapper[4826]: I0131 08:07:59.863105 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5czjt" podStartSLOduration=3.117957034 podStartE2EDuration="16.863087402s" podCreationTimestamp="2026-01-31 08:07:43 +0000 UTC" firstStartedPulling="2026-01-31 08:07:44.690465232 +0000 UTC m=+1896.544351591" lastFinishedPulling="2026-01-31 08:07:58.4355956 +0000 UTC m=+1910.289481959" observedRunningTime="2026-01-31 08:07:59.855588298 +0000 UTC m=+1911.709474677" watchObservedRunningTime="2026-01-31 08:07:59.863087402 +0000 UTC m=+1911.716973781" Jan 31 08:08:01 crc kubenswrapper[4826]: I0131 08:08:01.860648 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hkdqf" event={"ID":"eed566a6-01e0-4b0a-9469-4103b45716f4","Type":"ContainerStarted","Data":"53932173f158d50cfc17762a3b881cc529a2683661acf1690ecf4179a700fa6a"} Jan 31 08:08:01 crc kubenswrapper[4826]: I0131 08:08:01.887304 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hkdqf" podStartSLOduration=3.637346368 podStartE2EDuration="18.887287473s" podCreationTimestamp="2026-01-31 08:07:43 +0000 UTC" firstStartedPulling="2026-01-31 08:07:45.703707596 +0000 UTC m=+1897.557593955" lastFinishedPulling="2026-01-31 08:08:00.953648691 +0000 UTC m=+1912.807535060" observedRunningTime="2026-01-31 08:08:01.880403327 +0000 UTC m=+1913.734289696" watchObservedRunningTime="2026-01-31 08:08:01.887287473 +0000 UTC m=+1913.741173832" Jan 31 08:08:02 crc kubenswrapper[4826]: I0131 08:08:02.872170 4826 generic.go:334] "Generic (PLEG): container finished" podID="b1cad763-f535-4f78-a67a-b1b3d5ab3383" containerID="eb6d634034942e36c75bf90f90bd646ea407e407f8286b0c621fba74a1f98b69" exitCode=0 Jan 31 08:08:02 crc kubenswrapper[4826]: I0131 08:08:02.872282 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4f8sc" event={"ID":"b1cad763-f535-4f78-a67a-b1b3d5ab3383","Type":"ContainerDied","Data":"eb6d634034942e36c75bf90f90bd646ea407e407f8286b0c621fba74a1f98b69"} Jan 31 08:08:03 crc kubenswrapper[4826]: I0131 08:08:03.954870 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5czjt" Jan 31 08:08:03 crc kubenswrapper[4826]: I0131 08:08:03.955861 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5czjt" Jan 31 08:08:04 crc kubenswrapper[4826]: I0131 08:08:04.001533 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5czjt" Jan 31 08:08:04 
crc kubenswrapper[4826]: I0131 08:08:04.140141 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hkdqf" Jan 31 08:08:04 crc kubenswrapper[4826]: I0131 08:08:04.140213 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hkdqf" Jan 31 08:08:04 crc kubenswrapper[4826]: I0131 08:08:04.895066 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4f8sc" event={"ID":"b1cad763-f535-4f78-a67a-b1b3d5ab3383","Type":"ContainerStarted","Data":"d9d9297909726573ba613cadbb00a943b4a0e58f2cc087812e66860accc700ae"} Jan 31 08:08:04 crc kubenswrapper[4826]: I0131 08:08:04.950100 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5czjt" Jan 31 08:08:05 crc kubenswrapper[4826]: I0131 08:08:05.009575 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5czjt"] Jan 31 08:08:05 crc kubenswrapper[4826]: I0131 08:08:05.186590 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hkdqf" podUID="eed566a6-01e0-4b0a-9469-4103b45716f4" containerName="registry-server" probeResult="failure" output=< Jan 31 08:08:05 crc kubenswrapper[4826]: timeout: failed to connect service ":50051" within 1s Jan 31 08:08:05 crc kubenswrapper[4826]: > Jan 31 08:08:05 crc kubenswrapper[4826]: I0131 08:08:05.931466 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4f8sc" podStartSLOduration=13.54380224 podStartE2EDuration="19.931447685s" podCreationTimestamp="2026-01-31 08:07:46 +0000 UTC" firstStartedPulling="2026-01-31 08:07:57.814681462 +0000 UTC m=+1909.668567821" lastFinishedPulling="2026-01-31 08:08:04.202326907 +0000 UTC m=+1916.056213266" observedRunningTime="2026-01-31 08:08:05.922715247 +0000 UTC m=+1917.776601626" watchObservedRunningTime="2026-01-31 08:08:05.931447685 +0000 UTC m=+1917.785334034" Jan 31 08:08:06 crc kubenswrapper[4826]: I0131 08:08:06.808634 4826 scope.go:117] "RemoveContainer" containerID="bf69e7db4a04328e91a880deded774211ce3a37179119b4a3e4db7929cefe4ea" Jan 31 08:08:06 crc kubenswrapper[4826]: I0131 08:08:06.907842 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5czjt" podUID="19f335dc-ea2b-4f73-8e00-44eea16bac71" containerName="registry-server" containerID="cri-o://31d30a0d4287d1253f7c878329a4acecc0cd87d29934400bde1f6547c589c05b" gracePeriod=2 Jan 31 08:08:07 crc kubenswrapper[4826]: I0131 08:08:07.175152 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4f8sc" Jan 31 08:08:07 crc kubenswrapper[4826]: I0131 08:08:07.175872 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4f8sc" Jan 31 08:08:07 crc kubenswrapper[4826]: I0131 08:08:07.221526 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4f8sc" Jan 31 08:08:07 crc kubenswrapper[4826]: I0131 08:08:07.972257 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" event={"ID":"ed10f53b-565a-4d14-a1d8-feabc15f08ea","Type":"ContainerStarted","Data":"ab691b619de71e82ee7b4f8aac2ac93883f05b3da2dd0cdc5dc1c271473c6187"} 
Jan 31 08:08:08 crc kubenswrapper[4826]: I0131 08:08:08.016641 4826 generic.go:334] "Generic (PLEG): container finished" podID="19f335dc-ea2b-4f73-8e00-44eea16bac71" containerID="31d30a0d4287d1253f7c878329a4acecc0cd87d29934400bde1f6547c589c05b" exitCode=0 Jan 31 08:08:08 crc kubenswrapper[4826]: I0131 08:08:08.022346 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5czjt" event={"ID":"19f335dc-ea2b-4f73-8e00-44eea16bac71","Type":"ContainerDied","Data":"31d30a0d4287d1253f7c878329a4acecc0cd87d29934400bde1f6547c589c05b"} Jan 31 08:08:08 crc kubenswrapper[4826]: I0131 08:08:08.022445 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5czjt" event={"ID":"19f335dc-ea2b-4f73-8e00-44eea16bac71","Type":"ContainerDied","Data":"26503a257bf230ca5b0590ea15cc37f56c438bd0972886769c6751b06b00f4d8"} Jan 31 08:08:08 crc kubenswrapper[4826]: I0131 08:08:08.022466 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26503a257bf230ca5b0590ea15cc37f56c438bd0972886769c6751b06b00f4d8" Jan 31 08:08:08 crc kubenswrapper[4826]: I0131 08:08:08.040584 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5czjt" Jan 31 08:08:08 crc kubenswrapper[4826]: I0131 08:08:08.058437 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19f335dc-ea2b-4f73-8e00-44eea16bac71-utilities\") pod \"19f335dc-ea2b-4f73-8e00-44eea16bac71\" (UID: \"19f335dc-ea2b-4f73-8e00-44eea16bac71\") " Jan 31 08:08:08 crc kubenswrapper[4826]: I0131 08:08:08.058504 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88vf5\" (UniqueName: \"kubernetes.io/projected/19f335dc-ea2b-4f73-8e00-44eea16bac71-kube-api-access-88vf5\") pod \"19f335dc-ea2b-4f73-8e00-44eea16bac71\" (UID: \"19f335dc-ea2b-4f73-8e00-44eea16bac71\") " Jan 31 08:08:08 crc kubenswrapper[4826]: I0131 08:08:08.058536 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19f335dc-ea2b-4f73-8e00-44eea16bac71-catalog-content\") pod \"19f335dc-ea2b-4f73-8e00-44eea16bac71\" (UID: \"19f335dc-ea2b-4f73-8e00-44eea16bac71\") " Jan 31 08:08:08 crc kubenswrapper[4826]: I0131 08:08:08.074986 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19f335dc-ea2b-4f73-8e00-44eea16bac71-utilities" (OuterVolumeSpecName: "utilities") pod "19f335dc-ea2b-4f73-8e00-44eea16bac71" (UID: "19f335dc-ea2b-4f73-8e00-44eea16bac71"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:08:08 crc kubenswrapper[4826]: I0131 08:08:08.089217 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19f335dc-ea2b-4f73-8e00-44eea16bac71-kube-api-access-88vf5" (OuterVolumeSpecName: "kube-api-access-88vf5") pod "19f335dc-ea2b-4f73-8e00-44eea16bac71" (UID: "19f335dc-ea2b-4f73-8e00-44eea16bac71"). InnerVolumeSpecName "kube-api-access-88vf5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:08:08 crc kubenswrapper[4826]: I0131 08:08:08.089383 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19f335dc-ea2b-4f73-8e00-44eea16bac71-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "19f335dc-ea2b-4f73-8e00-44eea16bac71" (UID: "19f335dc-ea2b-4f73-8e00-44eea16bac71"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:08:08 crc kubenswrapper[4826]: I0131 08:08:08.160368 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19f335dc-ea2b-4f73-8e00-44eea16bac71-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 08:08:08 crc kubenswrapper[4826]: I0131 08:08:08.160408 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19f335dc-ea2b-4f73-8e00-44eea16bac71-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 08:08:08 crc kubenswrapper[4826]: I0131 08:08:08.160418 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88vf5\" (UniqueName: \"kubernetes.io/projected/19f335dc-ea2b-4f73-8e00-44eea16bac71-kube-api-access-88vf5\") on node \"crc\" DevicePath \"\"" Jan 31 08:08:09 crc kubenswrapper[4826]: I0131 08:08:09.024652 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5czjt" Jan 31 08:08:09 crc kubenswrapper[4826]: I0131 08:08:09.052833 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5czjt"] Jan 31 08:08:09 crc kubenswrapper[4826]: I0131 08:08:09.062929 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5czjt"] Jan 31 08:08:10 crc kubenswrapper[4826]: I0131 08:08:10.821105 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19f335dc-ea2b-4f73-8e00-44eea16bac71" path="/var/lib/kubelet/pods/19f335dc-ea2b-4f73-8e00-44eea16bac71/volumes" Jan 31 08:08:15 crc kubenswrapper[4826]: I0131 08:08:15.193219 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hkdqf" podUID="eed566a6-01e0-4b0a-9469-4103b45716f4" containerName="registry-server" probeResult="failure" output=< Jan 31 08:08:15 crc kubenswrapper[4826]: timeout: failed to connect service ":50051" within 1s Jan 31 08:08:15 crc kubenswrapper[4826]: > Jan 31 08:08:17 crc kubenswrapper[4826]: I0131 08:08:17.270191 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4f8sc" Jan 31 08:08:17 crc kubenswrapper[4826]: I0131 08:08:17.320363 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4f8sc"] Jan 31 08:08:18 crc kubenswrapper[4826]: I0131 08:08:18.109541 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4f8sc" podUID="b1cad763-f535-4f78-a67a-b1b3d5ab3383" containerName="registry-server" containerID="cri-o://d9d9297909726573ba613cadbb00a943b4a0e58f2cc087812e66860accc700ae" gracePeriod=2 Jan 31 08:08:18 crc kubenswrapper[4826]: I0131 08:08:18.638312 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4f8sc" Jan 31 08:08:18 crc kubenswrapper[4826]: I0131 08:08:18.698400 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1cad763-f535-4f78-a67a-b1b3d5ab3383-utilities\") pod \"b1cad763-f535-4f78-a67a-b1b3d5ab3383\" (UID: \"b1cad763-f535-4f78-a67a-b1b3d5ab3383\") " Jan 31 08:08:18 crc kubenswrapper[4826]: I0131 08:08:18.698585 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1cad763-f535-4f78-a67a-b1b3d5ab3383-catalog-content\") pod \"b1cad763-f535-4f78-a67a-b1b3d5ab3383\" (UID: \"b1cad763-f535-4f78-a67a-b1b3d5ab3383\") " Jan 31 08:08:18 crc kubenswrapper[4826]: I0131 08:08:18.698839 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lf5gp\" (UniqueName: \"kubernetes.io/projected/b1cad763-f535-4f78-a67a-b1b3d5ab3383-kube-api-access-lf5gp\") pod \"b1cad763-f535-4f78-a67a-b1b3d5ab3383\" (UID: \"b1cad763-f535-4f78-a67a-b1b3d5ab3383\") " Jan 31 08:08:18 crc kubenswrapper[4826]: I0131 08:08:18.699513 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1cad763-f535-4f78-a67a-b1b3d5ab3383-utilities" (OuterVolumeSpecName: "utilities") pod "b1cad763-f535-4f78-a67a-b1b3d5ab3383" (UID: "b1cad763-f535-4f78-a67a-b1b3d5ab3383"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:08:18 crc kubenswrapper[4826]: I0131 08:08:18.707252 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1cad763-f535-4f78-a67a-b1b3d5ab3383-kube-api-access-lf5gp" (OuterVolumeSpecName: "kube-api-access-lf5gp") pod "b1cad763-f535-4f78-a67a-b1b3d5ab3383" (UID: "b1cad763-f535-4f78-a67a-b1b3d5ab3383"). InnerVolumeSpecName "kube-api-access-lf5gp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:08:18 crc kubenswrapper[4826]: I0131 08:08:18.753453 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1cad763-f535-4f78-a67a-b1b3d5ab3383-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b1cad763-f535-4f78-a67a-b1b3d5ab3383" (UID: "b1cad763-f535-4f78-a67a-b1b3d5ab3383"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:08:18 crc kubenswrapper[4826]: I0131 08:08:18.800906 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lf5gp\" (UniqueName: \"kubernetes.io/projected/b1cad763-f535-4f78-a67a-b1b3d5ab3383-kube-api-access-lf5gp\") on node \"crc\" DevicePath \"\"" Jan 31 08:08:18 crc kubenswrapper[4826]: I0131 08:08:18.800961 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1cad763-f535-4f78-a67a-b1b3d5ab3383-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 08:08:18 crc kubenswrapper[4826]: I0131 08:08:18.800997 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1cad763-f535-4f78-a67a-b1b3d5ab3383-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 08:08:19 crc kubenswrapper[4826]: I0131 08:08:19.120913 4826 generic.go:334] "Generic (PLEG): container finished" podID="b1cad763-f535-4f78-a67a-b1b3d5ab3383" containerID="d9d9297909726573ba613cadbb00a943b4a0e58f2cc087812e66860accc700ae" exitCode=0 Jan 31 08:08:19 crc kubenswrapper[4826]: I0131 08:08:19.121001 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4f8sc" event={"ID":"b1cad763-f535-4f78-a67a-b1b3d5ab3383","Type":"ContainerDied","Data":"d9d9297909726573ba613cadbb00a943b4a0e58f2cc087812e66860accc700ae"} Jan 31 08:08:19 crc kubenswrapper[4826]: I0131 08:08:19.121043 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4f8sc" event={"ID":"b1cad763-f535-4f78-a67a-b1b3d5ab3383","Type":"ContainerDied","Data":"8b9dc039f99152f5606f9a5b9be319d69c47f5955d654c3302c5956d17f0fdfe"} Jan 31 08:08:19 crc kubenswrapper[4826]: I0131 08:08:19.121049 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4f8sc" Jan 31 08:08:19 crc kubenswrapper[4826]: I0131 08:08:19.121067 4826 scope.go:117] "RemoveContainer" containerID="d9d9297909726573ba613cadbb00a943b4a0e58f2cc087812e66860accc700ae" Jan 31 08:08:19 crc kubenswrapper[4826]: I0131 08:08:19.148064 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4f8sc"] Jan 31 08:08:19 crc kubenswrapper[4826]: I0131 08:08:19.156959 4826 scope.go:117] "RemoveContainer" containerID="eb6d634034942e36c75bf90f90bd646ea407e407f8286b0c621fba74a1f98b69" Jan 31 08:08:19 crc kubenswrapper[4826]: I0131 08:08:19.157436 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4f8sc"] Jan 31 08:08:19 crc kubenswrapper[4826]: I0131 08:08:19.185404 4826 scope.go:117] "RemoveContainer" containerID="3868fbdc5e0cb45583951d21a3f4e9ab39bc8d604aa341ed609d4d83574162db" Jan 31 08:08:19 crc kubenswrapper[4826]: I0131 08:08:19.227513 4826 scope.go:117] "RemoveContainer" containerID="d9d9297909726573ba613cadbb00a943b4a0e58f2cc087812e66860accc700ae" Jan 31 08:08:19 crc kubenswrapper[4826]: E0131 08:08:19.228125 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9d9297909726573ba613cadbb00a943b4a0e58f2cc087812e66860accc700ae\": container with ID starting with d9d9297909726573ba613cadbb00a943b4a0e58f2cc087812e66860accc700ae not found: ID does not exist" containerID="d9d9297909726573ba613cadbb00a943b4a0e58f2cc087812e66860accc700ae" Jan 31 08:08:19 crc kubenswrapper[4826]: I0131 08:08:19.228225 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9d9297909726573ba613cadbb00a943b4a0e58f2cc087812e66860accc700ae"} err="failed to get container status \"d9d9297909726573ba613cadbb00a943b4a0e58f2cc087812e66860accc700ae\": rpc error: code = NotFound desc = could not find container \"d9d9297909726573ba613cadbb00a943b4a0e58f2cc087812e66860accc700ae\": container with ID starting with d9d9297909726573ba613cadbb00a943b4a0e58f2cc087812e66860accc700ae not found: ID does not exist" Jan 31 08:08:19 crc kubenswrapper[4826]: I0131 08:08:19.229795 4826 scope.go:117] "RemoveContainer" containerID="eb6d634034942e36c75bf90f90bd646ea407e407f8286b0c621fba74a1f98b69" Jan 31 08:08:19 crc kubenswrapper[4826]: E0131 08:08:19.230667 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb6d634034942e36c75bf90f90bd646ea407e407f8286b0c621fba74a1f98b69\": container with ID starting with eb6d634034942e36c75bf90f90bd646ea407e407f8286b0c621fba74a1f98b69 not found: ID does not exist" containerID="eb6d634034942e36c75bf90f90bd646ea407e407f8286b0c621fba74a1f98b69" Jan 31 08:08:19 crc kubenswrapper[4826]: I0131 08:08:19.230714 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb6d634034942e36c75bf90f90bd646ea407e407f8286b0c621fba74a1f98b69"} err="failed to get container status \"eb6d634034942e36c75bf90f90bd646ea407e407f8286b0c621fba74a1f98b69\": rpc error: code = NotFound desc = could not find container \"eb6d634034942e36c75bf90f90bd646ea407e407f8286b0c621fba74a1f98b69\": container with ID starting with eb6d634034942e36c75bf90f90bd646ea407e407f8286b0c621fba74a1f98b69 not found: ID does not exist" Jan 31 08:08:19 crc kubenswrapper[4826]: I0131 08:08:19.230750 4826 scope.go:117] "RemoveContainer" 
containerID="3868fbdc5e0cb45583951d21a3f4e9ab39bc8d604aa341ed609d4d83574162db" Jan 31 08:08:19 crc kubenswrapper[4826]: E0131 08:08:19.231442 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3868fbdc5e0cb45583951d21a3f4e9ab39bc8d604aa341ed609d4d83574162db\": container with ID starting with 3868fbdc5e0cb45583951d21a3f4e9ab39bc8d604aa341ed609d4d83574162db not found: ID does not exist" containerID="3868fbdc5e0cb45583951d21a3f4e9ab39bc8d604aa341ed609d4d83574162db" Jan 31 08:08:19 crc kubenswrapper[4826]: I0131 08:08:19.231537 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3868fbdc5e0cb45583951d21a3f4e9ab39bc8d604aa341ed609d4d83574162db"} err="failed to get container status \"3868fbdc5e0cb45583951d21a3f4e9ab39bc8d604aa341ed609d4d83574162db\": rpc error: code = NotFound desc = could not find container \"3868fbdc5e0cb45583951d21a3f4e9ab39bc8d604aa341ed609d4d83574162db\": container with ID starting with 3868fbdc5e0cb45583951d21a3f4e9ab39bc8d604aa341ed609d4d83574162db not found: ID does not exist" Jan 31 08:08:20 crc kubenswrapper[4826]: I0131 08:08:20.829190 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1cad763-f535-4f78-a67a-b1b3d5ab3383" path="/var/lib/kubelet/pods/b1cad763-f535-4f78-a67a-b1b3d5ab3383/volumes" Jan 31 08:08:25 crc kubenswrapper[4826]: I0131 08:08:25.189502 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hkdqf" podUID="eed566a6-01e0-4b0a-9469-4103b45716f4" containerName="registry-server" probeResult="failure" output=< Jan 31 08:08:25 crc kubenswrapper[4826]: timeout: failed to connect service ":50051" within 1s Jan 31 08:08:25 crc kubenswrapper[4826]: > Jan 31 08:08:35 crc kubenswrapper[4826]: I0131 08:08:35.182872 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hkdqf" podUID="eed566a6-01e0-4b0a-9469-4103b45716f4" containerName="registry-server" probeResult="failure" output=< Jan 31 08:08:35 crc kubenswrapper[4826]: timeout: failed to connect service ":50051" within 1s Jan 31 08:08:35 crc kubenswrapper[4826]: > Jan 31 08:08:45 crc kubenswrapper[4826]: I0131 08:08:45.187235 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hkdqf" podUID="eed566a6-01e0-4b0a-9469-4103b45716f4" containerName="registry-server" probeResult="failure" output=< Jan 31 08:08:45 crc kubenswrapper[4826]: timeout: failed to connect service ":50051" within 1s Jan 31 08:08:45 crc kubenswrapper[4826]: > Jan 31 08:08:54 crc kubenswrapper[4826]: I0131 08:08:54.196159 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hkdqf" Jan 31 08:08:54 crc kubenswrapper[4826]: I0131 08:08:54.246180 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hkdqf" Jan 31 08:08:54 crc kubenswrapper[4826]: I0131 08:08:54.434403 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hkdqf"] Jan 31 08:08:55 crc kubenswrapper[4826]: I0131 08:08:55.400689 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hkdqf" podUID="eed566a6-01e0-4b0a-9469-4103b45716f4" containerName="registry-server" containerID="cri-o://53932173f158d50cfc17762a3b881cc529a2683661acf1690ecf4179a700fa6a" gracePeriod=2 Jan 
31 08:08:56 crc kubenswrapper[4826]: I0131 08:08:56.411579 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hkdqf" Jan 31 08:08:56 crc kubenswrapper[4826]: I0131 08:08:56.413948 4826 generic.go:334] "Generic (PLEG): container finished" podID="eed566a6-01e0-4b0a-9469-4103b45716f4" containerID="53932173f158d50cfc17762a3b881cc529a2683661acf1690ecf4179a700fa6a" exitCode=0 Jan 31 08:08:56 crc kubenswrapper[4826]: I0131 08:08:56.414019 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hkdqf" event={"ID":"eed566a6-01e0-4b0a-9469-4103b45716f4","Type":"ContainerDied","Data":"53932173f158d50cfc17762a3b881cc529a2683661acf1690ecf4179a700fa6a"} Jan 31 08:08:56 crc kubenswrapper[4826]: I0131 08:08:56.414057 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hkdqf" event={"ID":"eed566a6-01e0-4b0a-9469-4103b45716f4","Type":"ContainerDied","Data":"74b0bb1cfed7fd3bccd39858daa61f0c957dabf5f52f583a28824101f86b3987"} Jan 31 08:08:56 crc kubenswrapper[4826]: I0131 08:08:56.414078 4826 scope.go:117] "RemoveContainer" containerID="53932173f158d50cfc17762a3b881cc529a2683661acf1690ecf4179a700fa6a" Jan 31 08:08:56 crc kubenswrapper[4826]: I0131 08:08:56.459507 4826 scope.go:117] "RemoveContainer" containerID="ab8d3c9d422f0a4d36f195b9f98e672a4c03406c83c4a15864bede64aadcd4ce" Jan 31 08:08:56 crc kubenswrapper[4826]: I0131 08:08:56.485121 4826 scope.go:117] "RemoveContainer" containerID="91c2d55e96b41a9ace0450bca22375caca4efb48326090a7f725bbff3a407b6d" Jan 31 08:08:56 crc kubenswrapper[4826]: I0131 08:08:56.513340 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqnq5\" (UniqueName: \"kubernetes.io/projected/eed566a6-01e0-4b0a-9469-4103b45716f4-kube-api-access-dqnq5\") pod \"eed566a6-01e0-4b0a-9469-4103b45716f4\" (UID: \"eed566a6-01e0-4b0a-9469-4103b45716f4\") " Jan 31 08:08:56 crc kubenswrapper[4826]: I0131 08:08:56.513422 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eed566a6-01e0-4b0a-9469-4103b45716f4-utilities\") pod \"eed566a6-01e0-4b0a-9469-4103b45716f4\" (UID: \"eed566a6-01e0-4b0a-9469-4103b45716f4\") " Jan 31 08:08:56 crc kubenswrapper[4826]: I0131 08:08:56.513541 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eed566a6-01e0-4b0a-9469-4103b45716f4-catalog-content\") pod \"eed566a6-01e0-4b0a-9469-4103b45716f4\" (UID: \"eed566a6-01e0-4b0a-9469-4103b45716f4\") " Jan 31 08:08:56 crc kubenswrapper[4826]: I0131 08:08:56.514157 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eed566a6-01e0-4b0a-9469-4103b45716f4-utilities" (OuterVolumeSpecName: "utilities") pod "eed566a6-01e0-4b0a-9469-4103b45716f4" (UID: "eed566a6-01e0-4b0a-9469-4103b45716f4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:08:56 crc kubenswrapper[4826]: I0131 08:08:56.514260 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eed566a6-01e0-4b0a-9469-4103b45716f4-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 08:08:56 crc kubenswrapper[4826]: I0131 08:08:56.522896 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eed566a6-01e0-4b0a-9469-4103b45716f4-kube-api-access-dqnq5" (OuterVolumeSpecName: "kube-api-access-dqnq5") pod "eed566a6-01e0-4b0a-9469-4103b45716f4" (UID: "eed566a6-01e0-4b0a-9469-4103b45716f4"). InnerVolumeSpecName "kube-api-access-dqnq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:08:56 crc kubenswrapper[4826]: I0131 08:08:56.535935 4826 scope.go:117] "RemoveContainer" containerID="53932173f158d50cfc17762a3b881cc529a2683661acf1690ecf4179a700fa6a" Jan 31 08:08:56 crc kubenswrapper[4826]: E0131 08:08:56.544517 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53932173f158d50cfc17762a3b881cc529a2683661acf1690ecf4179a700fa6a\": container with ID starting with 53932173f158d50cfc17762a3b881cc529a2683661acf1690ecf4179a700fa6a not found: ID does not exist" containerID="53932173f158d50cfc17762a3b881cc529a2683661acf1690ecf4179a700fa6a" Jan 31 08:08:56 crc kubenswrapper[4826]: I0131 08:08:56.544589 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53932173f158d50cfc17762a3b881cc529a2683661acf1690ecf4179a700fa6a"} err="failed to get container status \"53932173f158d50cfc17762a3b881cc529a2683661acf1690ecf4179a700fa6a\": rpc error: code = NotFound desc = could not find container \"53932173f158d50cfc17762a3b881cc529a2683661acf1690ecf4179a700fa6a\": container with ID starting with 53932173f158d50cfc17762a3b881cc529a2683661acf1690ecf4179a700fa6a not found: ID does not exist" Jan 31 08:08:56 crc kubenswrapper[4826]: I0131 08:08:56.544623 4826 scope.go:117] "RemoveContainer" containerID="ab8d3c9d422f0a4d36f195b9f98e672a4c03406c83c4a15864bede64aadcd4ce" Jan 31 08:08:56 crc kubenswrapper[4826]: E0131 08:08:56.545038 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab8d3c9d422f0a4d36f195b9f98e672a4c03406c83c4a15864bede64aadcd4ce\": container with ID starting with ab8d3c9d422f0a4d36f195b9f98e672a4c03406c83c4a15864bede64aadcd4ce not found: ID does not exist" containerID="ab8d3c9d422f0a4d36f195b9f98e672a4c03406c83c4a15864bede64aadcd4ce" Jan 31 08:08:56 crc kubenswrapper[4826]: I0131 08:08:56.545069 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab8d3c9d422f0a4d36f195b9f98e672a4c03406c83c4a15864bede64aadcd4ce"} err="failed to get container status \"ab8d3c9d422f0a4d36f195b9f98e672a4c03406c83c4a15864bede64aadcd4ce\": rpc error: code = NotFound desc = could not find container \"ab8d3c9d422f0a4d36f195b9f98e672a4c03406c83c4a15864bede64aadcd4ce\": container with ID starting with ab8d3c9d422f0a4d36f195b9f98e672a4c03406c83c4a15864bede64aadcd4ce not found: ID does not exist" Jan 31 08:08:56 crc kubenswrapper[4826]: I0131 08:08:56.545091 4826 scope.go:117] "RemoveContainer" containerID="91c2d55e96b41a9ace0450bca22375caca4efb48326090a7f725bbff3a407b6d" Jan 31 08:08:56 crc kubenswrapper[4826]: E0131 08:08:56.545298 4826 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"91c2d55e96b41a9ace0450bca22375caca4efb48326090a7f725bbff3a407b6d\": container with ID starting with 91c2d55e96b41a9ace0450bca22375caca4efb48326090a7f725bbff3a407b6d not found: ID does not exist" containerID="91c2d55e96b41a9ace0450bca22375caca4efb48326090a7f725bbff3a407b6d" Jan 31 08:08:56 crc kubenswrapper[4826]: I0131 08:08:56.545326 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91c2d55e96b41a9ace0450bca22375caca4efb48326090a7f725bbff3a407b6d"} err="failed to get container status \"91c2d55e96b41a9ace0450bca22375caca4efb48326090a7f725bbff3a407b6d\": rpc error: code = NotFound desc = could not find container \"91c2d55e96b41a9ace0450bca22375caca4efb48326090a7f725bbff3a407b6d\": container with ID starting with 91c2d55e96b41a9ace0450bca22375caca4efb48326090a7f725bbff3a407b6d not found: ID does not exist" Jan 31 08:08:56 crc kubenswrapper[4826]: I0131 08:08:56.615445 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqnq5\" (UniqueName: \"kubernetes.io/projected/eed566a6-01e0-4b0a-9469-4103b45716f4-kube-api-access-dqnq5\") on node \"crc\" DevicePath \"\"" Jan 31 08:08:56 crc kubenswrapper[4826]: I0131 08:08:56.658814 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eed566a6-01e0-4b0a-9469-4103b45716f4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eed566a6-01e0-4b0a-9469-4103b45716f4" (UID: "eed566a6-01e0-4b0a-9469-4103b45716f4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:08:56 crc kubenswrapper[4826]: I0131 08:08:56.716894 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eed566a6-01e0-4b0a-9469-4103b45716f4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 08:08:57 crc kubenswrapper[4826]: I0131 08:08:57.425630 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hkdqf" Jan 31 08:08:57 crc kubenswrapper[4826]: I0131 08:08:57.461750 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hkdqf"] Jan 31 08:08:57 crc kubenswrapper[4826]: I0131 08:08:57.469390 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hkdqf"] Jan 31 08:08:58 crc kubenswrapper[4826]: I0131 08:08:58.818569 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eed566a6-01e0-4b0a-9469-4103b45716f4" path="/var/lib/kubelet/pods/eed566a6-01e0-4b0a-9469-4103b45716f4/volumes" Jan 31 08:09:36 crc kubenswrapper[4826]: I0131 08:09:36.872858 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-77snk"] Jan 31 08:09:36 crc kubenswrapper[4826]: I0131 08:09:36.881386 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mmgpq"] Jan 31 08:09:36 crc kubenswrapper[4826]: I0131 08:09:36.888348 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blb8r"] Jan 31 08:09:36 crc kubenswrapper[4826]: I0131 08:09:36.895287 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-xnlm8"] Jan 31 08:09:36 crc kubenswrapper[4826]: I0131 08:09:36.901903 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-77snk"] Jan 31 08:09:36 crc kubenswrapper[4826]: I0131 08:09:36.908816 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clq8t"] Jan 31 08:09:36 crc kubenswrapper[4826]: I0131 08:09:36.915165 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blb8r"] Jan 31 08:09:36 crc kubenswrapper[4826]: I0131 08:09:36.922089 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mmgpq"] Jan 31 08:09:36 crc kubenswrapper[4826]: I0131 08:09:36.928925 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-xnlm8"] Jan 31 08:09:36 crc kubenswrapper[4826]: I0131 08:09:36.935845 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-26n2s"] Jan 31 08:09:36 crc kubenswrapper[4826]: I0131 08:09:36.943324 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9q8l"] Jan 31 08:09:36 crc kubenswrapper[4826]: I0131 08:09:36.949451 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-fppv7"] Jan 31 08:09:36 crc kubenswrapper[4826]: I0131 08:09:36.955639 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-sbw6s"] Jan 31 08:09:36 crc kubenswrapper[4826]: I0131 08:09:36.962950 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clq8t"] Jan 31 08:09:36 crc kubenswrapper[4826]: I0131 08:09:36.970089 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9q8l"] Jan 31 08:09:36 crc kubenswrapper[4826]: I0131 08:09:36.976592 4826 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-26n2s"] Jan 31 08:09:36 crc kubenswrapper[4826]: I0131 08:09:36.982816 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-sbw6s"] Jan 31 08:09:36 crc kubenswrapper[4826]: I0131 08:09:36.988959 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-fppv7"] Jan 31 08:09:36 crc kubenswrapper[4826]: I0131 08:09:36.995379 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dvtj7"] Jan 31 08:09:37 crc kubenswrapper[4826]: I0131 08:09:37.001312 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dvtj7"] Jan 31 08:09:38 crc kubenswrapper[4826]: I0131 08:09:38.818890 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21086bdf-1c9c-408d-9b67-95b95ccd493d" path="/var/lib/kubelet/pods/21086bdf-1c9c-408d-9b67-95b95ccd493d/volumes" Jan 31 08:09:38 crc kubenswrapper[4826]: I0131 08:09:38.819823 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34e8ee4d-b6b1-40b8-8012-e91226161f75" path="/var/lib/kubelet/pods/34e8ee4d-b6b1-40b8-8012-e91226161f75/volumes" Jan 31 08:09:38 crc kubenswrapper[4826]: I0131 08:09:38.820473 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4404c1e7-39fa-4591-ae22-a8f0c26a4452" path="/var/lib/kubelet/pods/4404c1e7-39fa-4591-ae22-a8f0c26a4452/volumes" Jan 31 08:09:38 crc kubenswrapper[4826]: I0131 08:09:38.821008 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="455fb360-2d9b-4501-a640-2014364869d8" path="/var/lib/kubelet/pods/455fb360-2d9b-4501-a640-2014364869d8/volumes" Jan 31 08:09:38 crc kubenswrapper[4826]: I0131 08:09:38.822106 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a84a288f-097a-4f5b-acee-09d5c7d34abf" path="/var/lib/kubelet/pods/a84a288f-097a-4f5b-acee-09d5c7d34abf/volumes" Jan 31 08:09:38 crc kubenswrapper[4826]: I0131 08:09:38.822618 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad5a3527-00c3-4ad0-bc80-884479558924" path="/var/lib/kubelet/pods/ad5a3527-00c3-4ad0-bc80-884479558924/volumes" Jan 31 08:09:38 crc kubenswrapper[4826]: I0131 08:09:38.823145 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6bd14ce-ddb1-478c-93d2-e69f2d21972e" path="/var/lib/kubelet/pods/b6bd14ce-ddb1-478c-93d2-e69f2d21972e/volumes" Jan 31 08:09:38 crc kubenswrapper[4826]: I0131 08:09:38.824077 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d233a435-0ea9-4b37-9293-9fa79cf36cf4" path="/var/lib/kubelet/pods/d233a435-0ea9-4b37-9293-9fa79cf36cf4/volumes" Jan 31 08:09:38 crc kubenswrapper[4826]: I0131 08:09:38.824641 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e23a5f6a-bae5-4579-a340-9a45b0706ba5" path="/var/lib/kubelet/pods/e23a5f6a-bae5-4579-a340-9a45b0706ba5/volumes" Jan 31 08:09:38 crc kubenswrapper[4826]: I0131 08:09:38.825224 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f726af4d-0e38-4afa-bf72-effd72efa5a6" path="/var/lib/kubelet/pods/f726af4d-0e38-4afa-bf72-effd72efa5a6/volumes" Jan 31 08:09:42 crc kubenswrapper[4826]: I0131 08:09:42.388932 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f5kls"] Jan 
31 08:09:42 crc kubenswrapper[4826]: E0131 08:09:42.389714 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19f335dc-ea2b-4f73-8e00-44eea16bac71" containerName="extract-content" Jan 31 08:09:42 crc kubenswrapper[4826]: I0131 08:09:42.389733 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="19f335dc-ea2b-4f73-8e00-44eea16bac71" containerName="extract-content" Jan 31 08:09:42 crc kubenswrapper[4826]: E0131 08:09:42.389751 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19f335dc-ea2b-4f73-8e00-44eea16bac71" containerName="registry-server" Jan 31 08:09:42 crc kubenswrapper[4826]: I0131 08:09:42.389759 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="19f335dc-ea2b-4f73-8e00-44eea16bac71" containerName="registry-server" Jan 31 08:09:42 crc kubenswrapper[4826]: E0131 08:09:42.389775 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1cad763-f535-4f78-a67a-b1b3d5ab3383" containerName="extract-utilities" Jan 31 08:09:42 crc kubenswrapper[4826]: I0131 08:09:42.389784 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1cad763-f535-4f78-a67a-b1b3d5ab3383" containerName="extract-utilities" Jan 31 08:09:42 crc kubenswrapper[4826]: E0131 08:09:42.389798 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eed566a6-01e0-4b0a-9469-4103b45716f4" containerName="registry-server" Jan 31 08:09:42 crc kubenswrapper[4826]: I0131 08:09:42.389806 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="eed566a6-01e0-4b0a-9469-4103b45716f4" containerName="registry-server" Jan 31 08:09:42 crc kubenswrapper[4826]: E0131 08:09:42.389820 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1cad763-f535-4f78-a67a-b1b3d5ab3383" containerName="registry-server" Jan 31 08:09:42 crc kubenswrapper[4826]: I0131 08:09:42.389829 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1cad763-f535-4f78-a67a-b1b3d5ab3383" containerName="registry-server" Jan 31 08:09:42 crc kubenswrapper[4826]: E0131 08:09:42.389855 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19f335dc-ea2b-4f73-8e00-44eea16bac71" containerName="extract-utilities" Jan 31 08:09:42 crc kubenswrapper[4826]: I0131 08:09:42.389863 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="19f335dc-ea2b-4f73-8e00-44eea16bac71" containerName="extract-utilities" Jan 31 08:09:42 crc kubenswrapper[4826]: E0131 08:09:42.389873 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1cad763-f535-4f78-a67a-b1b3d5ab3383" containerName="extract-content" Jan 31 08:09:42 crc kubenswrapper[4826]: I0131 08:09:42.389879 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1cad763-f535-4f78-a67a-b1b3d5ab3383" containerName="extract-content" Jan 31 08:09:42 crc kubenswrapper[4826]: E0131 08:09:42.389892 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eed566a6-01e0-4b0a-9469-4103b45716f4" containerName="extract-utilities" Jan 31 08:09:42 crc kubenswrapper[4826]: I0131 08:09:42.389900 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="eed566a6-01e0-4b0a-9469-4103b45716f4" containerName="extract-utilities" Jan 31 08:09:42 crc kubenswrapper[4826]: E0131 08:09:42.389920 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eed566a6-01e0-4b0a-9469-4103b45716f4" containerName="extract-content" Jan 31 08:09:42 crc kubenswrapper[4826]: I0131 08:09:42.389928 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="eed566a6-01e0-4b0a-9469-4103b45716f4" 
containerName="extract-content" Jan 31 08:09:42 crc kubenswrapper[4826]: I0131 08:09:42.390155 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="eed566a6-01e0-4b0a-9469-4103b45716f4" containerName="registry-server" Jan 31 08:09:42 crc kubenswrapper[4826]: I0131 08:09:42.390173 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1cad763-f535-4f78-a67a-b1b3d5ab3383" containerName="registry-server" Jan 31 08:09:42 crc kubenswrapper[4826]: I0131 08:09:42.390188 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="19f335dc-ea2b-4f73-8e00-44eea16bac71" containerName="registry-server" Jan 31 08:09:42 crc kubenswrapper[4826]: I0131 08:09:42.390890 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f5kls" Jan 31 08:09:42 crc kubenswrapper[4826]: I0131 08:09:42.397451 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 08:09:42 crc kubenswrapper[4826]: I0131 08:09:42.397661 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 08:09:42 crc kubenswrapper[4826]: I0131 08:09:42.397780 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 08:09:42 crc kubenswrapper[4826]: I0131 08:09:42.397999 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hbmks" Jan 31 08:09:42 crc kubenswrapper[4826]: I0131 08:09:42.398311 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 08:09:42 crc kubenswrapper[4826]: I0131 08:09:42.406203 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f5kls"] Jan 31 08:09:42 crc kubenswrapper[4826]: I0131 08:09:42.508850 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-f5kls\" (UID: \"9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f5kls" Jan 31 08:09:42 crc kubenswrapper[4826]: I0131 08:09:42.509224 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj8lh\" (UniqueName: \"kubernetes.io/projected/9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347-kube-api-access-qj8lh\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-f5kls\" (UID: \"9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f5kls" Jan 31 08:09:42 crc kubenswrapper[4826]: I0131 08:09:42.509283 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-f5kls\" (UID: \"9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f5kls" Jan 31 08:09:42 crc kubenswrapper[4826]: I0131 08:09:42.509395 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-f5kls\" (UID: \"9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f5kls" Jan 31 08:09:42 crc kubenswrapper[4826]: I0131 08:09:42.509442 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-f5kls\" (UID: \"9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f5kls" Jan 31 08:09:42 crc kubenswrapper[4826]: I0131 08:09:42.611559 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-f5kls\" (UID: \"9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f5kls" Jan 31 08:09:42 crc kubenswrapper[4826]: I0131 08:09:42.611775 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qj8lh\" (UniqueName: \"kubernetes.io/projected/9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347-kube-api-access-qj8lh\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-f5kls\" (UID: \"9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f5kls" Jan 31 08:09:42 crc kubenswrapper[4826]: I0131 08:09:42.611814 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-f5kls\" (UID: \"9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f5kls" Jan 31 08:09:42 crc kubenswrapper[4826]: I0131 08:09:42.611870 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-f5kls\" (UID: \"9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f5kls" Jan 31 08:09:42 crc kubenswrapper[4826]: I0131 08:09:42.611916 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-f5kls\" (UID: \"9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f5kls" Jan 31 08:09:42 crc kubenswrapper[4826]: I0131 08:09:42.618105 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-f5kls\" (UID: \"9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f5kls" Jan 31 08:09:42 crc kubenswrapper[4826]: I0131 08:09:42.618705 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-f5kls\" (UID: \"9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f5kls" Jan 31 08:09:42 crc kubenswrapper[4826]: I0131 08:09:42.618750 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-f5kls\" (UID: \"9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f5kls" Jan 31 08:09:42 crc kubenswrapper[4826]: I0131 08:09:42.619512 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-f5kls\" (UID: \"9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f5kls" Jan 31 08:09:42 crc kubenswrapper[4826]: I0131 08:09:42.629033 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qj8lh\" (UniqueName: \"kubernetes.io/projected/9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347-kube-api-access-qj8lh\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-f5kls\" (UID: \"9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f5kls" Jan 31 08:09:42 crc kubenswrapper[4826]: I0131 08:09:42.726500 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f5kls" Jan 31 08:09:43 crc kubenswrapper[4826]: I0131 08:09:43.268168 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f5kls"] Jan 31 08:09:43 crc kubenswrapper[4826]: I0131 08:09:43.885985 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f5kls" event={"ID":"9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347","Type":"ContainerStarted","Data":"e6d7d502173256de148ab1a18b67f25176d257859b9baba82a4596e17c75313c"} Jan 31 08:09:45 crc kubenswrapper[4826]: I0131 08:09:45.906931 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f5kls" event={"ID":"9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347","Type":"ContainerStarted","Data":"66dad0acade95a2568db2da4f28633c967aa5e553fba55780a3ac8e6ea824740"} Jan 31 08:09:59 crc kubenswrapper[4826]: I0131 08:09:59.117910 4826 generic.go:334] "Generic (PLEG): container finished" podID="9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347" containerID="66dad0acade95a2568db2da4f28633c967aa5e553fba55780a3ac8e6ea824740" exitCode=0 Jan 31 08:09:59 crc kubenswrapper[4826]: I0131 08:09:59.118043 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f5kls" event={"ID":"9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347","Type":"ContainerDied","Data":"66dad0acade95a2568db2da4f28633c967aa5e553fba55780a3ac8e6ea824740"} Jan 31 08:10:00 crc kubenswrapper[4826]: I0131 08:10:00.544066 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f5kls" Jan 31 08:10:00 crc kubenswrapper[4826]: I0131 08:10:00.694769 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347-ssh-key-openstack-edpm-ipam\") pod \"9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347\" (UID: \"9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347\") " Jan 31 08:10:00 crc kubenswrapper[4826]: I0131 08:10:00.695048 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qj8lh\" (UniqueName: \"kubernetes.io/projected/9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347-kube-api-access-qj8lh\") pod \"9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347\" (UID: \"9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347\") " Jan 31 08:10:00 crc kubenswrapper[4826]: I0131 08:10:00.695174 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347-ceph\") pod \"9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347\" (UID: \"9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347\") " Jan 31 08:10:00 crc kubenswrapper[4826]: I0131 08:10:00.695277 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347-repo-setup-combined-ca-bundle\") pod \"9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347\" (UID: \"9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347\") " Jan 31 08:10:00 crc kubenswrapper[4826]: I0131 08:10:00.695418 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347-inventory\") pod \"9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347\" (UID: \"9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347\") " Jan 31 08:10:00 crc kubenswrapper[4826]: I0131 08:10:00.701326 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347" (UID: "9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:10:00 crc kubenswrapper[4826]: I0131 08:10:00.701813 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347-ceph" (OuterVolumeSpecName: "ceph") pod "9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347" (UID: "9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:10:00 crc kubenswrapper[4826]: I0131 08:10:00.705700 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347-kube-api-access-qj8lh" (OuterVolumeSpecName: "kube-api-access-qj8lh") pod "9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347" (UID: "9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347"). InnerVolumeSpecName "kube-api-access-qj8lh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:10:00 crc kubenswrapper[4826]: I0131 08:10:00.722110 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347" (UID: "9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:10:00 crc kubenswrapper[4826]: I0131 08:10:00.723641 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347-inventory" (OuterVolumeSpecName: "inventory") pod "9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347" (UID: "9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:10:00 crc kubenswrapper[4826]: I0131 08:10:00.797075 4826 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 08:10:00 crc kubenswrapper[4826]: I0131 08:10:00.797111 4826 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 08:10:00 crc kubenswrapper[4826]: I0131 08:10:00.797124 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qj8lh\" (UniqueName: \"kubernetes.io/projected/9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347-kube-api-access-qj8lh\") on node \"crc\" DevicePath \"\"" Jan 31 08:10:00 crc kubenswrapper[4826]: I0131 08:10:00.797132 4826 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 08:10:00 crc kubenswrapper[4826]: I0131 08:10:00.797141 4826 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 08:10:01 crc kubenswrapper[4826]: I0131 08:10:01.158789 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f5kls" Jan 31 08:10:01 crc kubenswrapper[4826]: I0131 08:10:01.158841 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f5kls" event={"ID":"9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347","Type":"ContainerDied","Data":"e6d7d502173256de148ab1a18b67f25176d257859b9baba82a4596e17c75313c"} Jan 31 08:10:01 crc kubenswrapper[4826]: I0131 08:10:01.159176 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6d7d502173256de148ab1a18b67f25176d257859b9baba82a4596e17c75313c" Jan 31 08:10:01 crc kubenswrapper[4826]: I0131 08:10:01.223066 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-95kt6"] Jan 31 08:10:01 crc kubenswrapper[4826]: E0131 08:10:01.223523 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 31 08:10:01 crc kubenswrapper[4826]: I0131 08:10:01.223556 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 31 08:10:01 crc kubenswrapper[4826]: I0131 08:10:01.223768 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 31 08:10:01 crc kubenswrapper[4826]: I0131 08:10:01.224603 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-95kt6" Jan 31 08:10:01 crc kubenswrapper[4826]: I0131 08:10:01.226801 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hbmks" Jan 31 08:10:01 crc kubenswrapper[4826]: I0131 08:10:01.227624 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 08:10:01 crc kubenswrapper[4826]: I0131 08:10:01.227934 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 08:10:01 crc kubenswrapper[4826]: I0131 08:10:01.228706 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 08:10:01 crc kubenswrapper[4826]: I0131 08:10:01.229692 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 08:10:01 crc kubenswrapper[4826]: I0131 08:10:01.246894 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-95kt6"] Jan 31 08:10:01 crc kubenswrapper[4826]: I0131 08:10:01.306758 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50c05a80-be37-4c98-964a-7503a3a430a2-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-95kt6\" (UID: \"50c05a80-be37-4c98-964a-7503a3a430a2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-95kt6" Jan 31 08:10:01 crc kubenswrapper[4826]: I0131 08:10:01.306859 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgh7c\" (UniqueName: 
\"kubernetes.io/projected/50c05a80-be37-4c98-964a-7503a3a430a2-kube-api-access-cgh7c\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-95kt6\" (UID: \"50c05a80-be37-4c98-964a-7503a3a430a2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-95kt6" Jan 31 08:10:01 crc kubenswrapper[4826]: I0131 08:10:01.307128 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/50c05a80-be37-4c98-964a-7503a3a430a2-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-95kt6\" (UID: \"50c05a80-be37-4c98-964a-7503a3a430a2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-95kt6" Jan 31 08:10:01 crc kubenswrapper[4826]: I0131 08:10:01.307178 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/50c05a80-be37-4c98-964a-7503a3a430a2-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-95kt6\" (UID: \"50c05a80-be37-4c98-964a-7503a3a430a2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-95kt6" Jan 31 08:10:01 crc kubenswrapper[4826]: I0131 08:10:01.307475 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50c05a80-be37-4c98-964a-7503a3a430a2-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-95kt6\" (UID: \"50c05a80-be37-4c98-964a-7503a3a430a2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-95kt6" Jan 31 08:10:01 crc kubenswrapper[4826]: I0131 08:10:01.409094 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/50c05a80-be37-4c98-964a-7503a3a430a2-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-95kt6\" (UID: \"50c05a80-be37-4c98-964a-7503a3a430a2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-95kt6" Jan 31 08:10:01 crc kubenswrapper[4826]: I0131 08:10:01.409393 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/50c05a80-be37-4c98-964a-7503a3a430a2-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-95kt6\" (UID: \"50c05a80-be37-4c98-964a-7503a3a430a2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-95kt6" Jan 31 08:10:01 crc kubenswrapper[4826]: I0131 08:10:01.409536 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50c05a80-be37-4c98-964a-7503a3a430a2-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-95kt6\" (UID: \"50c05a80-be37-4c98-964a-7503a3a430a2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-95kt6" Jan 31 08:10:01 crc kubenswrapper[4826]: I0131 08:10:01.409628 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50c05a80-be37-4c98-964a-7503a3a430a2-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-95kt6\" (UID: \"50c05a80-be37-4c98-964a-7503a3a430a2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-95kt6" Jan 31 08:10:01 crc kubenswrapper[4826]: I0131 08:10:01.409772 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgh7c\" (UniqueName: 
\"kubernetes.io/projected/50c05a80-be37-4c98-964a-7503a3a430a2-kube-api-access-cgh7c\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-95kt6\" (UID: \"50c05a80-be37-4c98-964a-7503a3a430a2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-95kt6" Jan 31 08:10:01 crc kubenswrapper[4826]: I0131 08:10:01.413849 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/50c05a80-be37-4c98-964a-7503a3a430a2-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-95kt6\" (UID: \"50c05a80-be37-4c98-964a-7503a3a430a2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-95kt6" Jan 31 08:10:01 crc kubenswrapper[4826]: I0131 08:10:01.414184 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/50c05a80-be37-4c98-964a-7503a3a430a2-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-95kt6\" (UID: \"50c05a80-be37-4c98-964a-7503a3a430a2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-95kt6" Jan 31 08:10:01 crc kubenswrapper[4826]: I0131 08:10:01.414399 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50c05a80-be37-4c98-964a-7503a3a430a2-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-95kt6\" (UID: \"50c05a80-be37-4c98-964a-7503a3a430a2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-95kt6" Jan 31 08:10:01 crc kubenswrapper[4826]: I0131 08:10:01.417202 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50c05a80-be37-4c98-964a-7503a3a430a2-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-95kt6\" (UID: \"50c05a80-be37-4c98-964a-7503a3a430a2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-95kt6" Jan 31 08:10:01 crc kubenswrapper[4826]: I0131 08:10:01.433079 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgh7c\" (UniqueName: \"kubernetes.io/projected/50c05a80-be37-4c98-964a-7503a3a430a2-kube-api-access-cgh7c\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-95kt6\" (UID: \"50c05a80-be37-4c98-964a-7503a3a430a2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-95kt6" Jan 31 08:10:01 crc kubenswrapper[4826]: I0131 08:10:01.546236 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-95kt6" Jan 31 08:10:02 crc kubenswrapper[4826]: I0131 08:10:02.064259 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-95kt6"] Jan 31 08:10:02 crc kubenswrapper[4826]: I0131 08:10:02.213069 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-95kt6" event={"ID":"50c05a80-be37-4c98-964a-7503a3a430a2","Type":"ContainerStarted","Data":"a8c9a27c451050897af7b014ed3b27d56806804fe2a54a94a8cf94f270f73d72"} Jan 31 08:10:04 crc kubenswrapper[4826]: I0131 08:10:04.230880 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-95kt6" event={"ID":"50c05a80-be37-4c98-964a-7503a3a430a2","Type":"ContainerStarted","Data":"0f2d371316b2afd40637efc987d7cdb18b38fe247b57de3aea732154dc623758"} Jan 31 08:10:04 crc kubenswrapper[4826]: I0131 08:10:04.254615 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-95kt6" podStartSLOduration=1.990590309 podStartE2EDuration="3.254591355s" podCreationTimestamp="2026-01-31 08:10:01 +0000 UTC" firstStartedPulling="2026-01-31 08:10:02.070733109 +0000 UTC m=+2033.924619468" lastFinishedPulling="2026-01-31 08:10:03.334734155 +0000 UTC m=+2035.188620514" observedRunningTime="2026-01-31 08:10:04.244721395 +0000 UTC m=+2036.098607754" watchObservedRunningTime="2026-01-31 08:10:04.254591355 +0000 UTC m=+2036.108477724" Jan 31 08:10:27 crc kubenswrapper[4826]: I0131 08:10:27.376690 4826 patch_prober.go:28] interesting pod/machine-config-daemon-8v6ng container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 08:10:27 crc kubenswrapper[4826]: I0131 08:10:27.377382 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 08:10:32 crc kubenswrapper[4826]: I0131 08:10:32.164066 4826 scope.go:117] "RemoveContainer" containerID="8628e9b3110df6637b6985bd41ad3ce56819623b33b6b19ccb2e80a0c881a17d" Jan 31 08:10:32 crc kubenswrapper[4826]: I0131 08:10:32.204915 4826 scope.go:117] "RemoveContainer" containerID="4c8f80de0e9e5fa9047112eccf97c396914ccd01d33bc1fb5e37033a991354d0" Jan 31 08:10:32 crc kubenswrapper[4826]: I0131 08:10:32.263753 4826 scope.go:117] "RemoveContainer" containerID="c54294f595ed189f6604c48a8da10bd9e5d7f1ceac06d74e94ec2eb4abfdbe88" Jan 31 08:10:32 crc kubenswrapper[4826]: I0131 08:10:32.297330 4826 scope.go:117] "RemoveContainer" containerID="8799070135d17a153e99dc422f02ba177446522201f6bdbf732206c38742e6c9" Jan 31 08:10:32 crc kubenswrapper[4826]: I0131 08:10:32.354416 4826 scope.go:117] "RemoveContainer" containerID="a36090a5d834978450624d390a7ccf2a4cea44d603c5861188f9cd0542202b6f" Jan 31 08:10:32 crc kubenswrapper[4826]: I0131 08:10:32.417947 4826 scope.go:117] "RemoveContainer" containerID="2255a62fa3b70bc6a6821bd641820ee21eea584b083ec1359b78f50a96460c0c" Jan 31 08:10:32 crc kubenswrapper[4826]: I0131 08:10:32.461875 4826 scope.go:117] "RemoveContainer" 
containerID="42715357a5b8db10145739808c5fbc2fc66b2d2907039f17b25b35aa498a7219" Jan 31 08:10:32 crc kubenswrapper[4826]: I0131 08:10:32.492953 4826 scope.go:117] "RemoveContainer" containerID="6e374ed0ed726fbc91a9634b665617c78b69f988f2b9f65687cbea96bcc86dfd" Jan 31 08:10:57 crc kubenswrapper[4826]: I0131 08:10:57.377353 4826 patch_prober.go:28] interesting pod/machine-config-daemon-8v6ng container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 08:10:57 crc kubenswrapper[4826]: I0131 08:10:57.378163 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 08:11:27 crc kubenswrapper[4826]: I0131 08:11:27.377230 4826 patch_prober.go:28] interesting pod/machine-config-daemon-8v6ng container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 08:11:27 crc kubenswrapper[4826]: I0131 08:11:27.377888 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 08:11:27 crc kubenswrapper[4826]: I0131 08:11:27.377945 4826 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" Jan 31 08:11:27 crc kubenswrapper[4826]: I0131 08:11:27.378782 4826 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ab691b619de71e82ee7b4f8aac2ac93883f05b3da2dd0cdc5dc1c271473c6187"} pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 08:11:27 crc kubenswrapper[4826]: I0131 08:11:27.378842 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" containerID="cri-o://ab691b619de71e82ee7b4f8aac2ac93883f05b3da2dd0cdc5dc1c271473c6187" gracePeriod=600 Jan 31 08:11:28 crc kubenswrapper[4826]: I0131 08:11:28.030209 4826 generic.go:334] "Generic (PLEG): container finished" podID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerID="ab691b619de71e82ee7b4f8aac2ac93883f05b3da2dd0cdc5dc1c271473c6187" exitCode=0 Jan 31 08:11:28 crc kubenswrapper[4826]: I0131 08:11:28.030252 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" event={"ID":"ed10f53b-565a-4d14-a1d8-feabc15f08ea","Type":"ContainerDied","Data":"ab691b619de71e82ee7b4f8aac2ac93883f05b3da2dd0cdc5dc1c271473c6187"} Jan 31 08:11:28 crc kubenswrapper[4826]: I0131 08:11:28.030284 4826 scope.go:117] "RemoveContainer" 
containerID="bf69e7db4a04328e91a880deded774211ce3a37179119b4a3e4db7929cefe4ea" Jan 31 08:11:29 crc kubenswrapper[4826]: I0131 08:11:29.040382 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" event={"ID":"ed10f53b-565a-4d14-a1d8-feabc15f08ea","Type":"ContainerStarted","Data":"4429222bf3d4f569e9c41884c938deefe5eb279c0a3297984f8955236857e1a8"} Jan 31 08:11:32 crc kubenswrapper[4826]: I0131 08:11:32.654317 4826 scope.go:117] "RemoveContainer" containerID="22dc4b0a039b5d80b0c1dda03e56a49c3df336b5d03402f6beeec466ee119e69" Jan 31 08:11:32 crc kubenswrapper[4826]: I0131 08:11:32.687465 4826 scope.go:117] "RemoveContainer" containerID="1387c5bcb998ec51467039c29f417d51e53d5911d8d3536271a006b9a9f6f810" Jan 31 08:11:46 crc kubenswrapper[4826]: I0131 08:11:46.602296 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cwtbq"] Jan 31 08:11:46 crc kubenswrapper[4826]: I0131 08:11:46.605281 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cwtbq" Jan 31 08:11:46 crc kubenswrapper[4826]: I0131 08:11:46.618494 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cwtbq"] Jan 31 08:11:46 crc kubenswrapper[4826]: I0131 08:11:46.623297 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/346332d7-b385-472c-965f-77838096d48f-utilities\") pod \"community-operators-cwtbq\" (UID: \"346332d7-b385-472c-965f-77838096d48f\") " pod="openshift-marketplace/community-operators-cwtbq" Jan 31 08:11:46 crc kubenswrapper[4826]: I0131 08:11:46.623506 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/346332d7-b385-472c-965f-77838096d48f-catalog-content\") pod \"community-operators-cwtbq\" (UID: \"346332d7-b385-472c-965f-77838096d48f\") " pod="openshift-marketplace/community-operators-cwtbq" Jan 31 08:11:46 crc kubenswrapper[4826]: I0131 08:11:46.623536 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srchp\" (UniqueName: \"kubernetes.io/projected/346332d7-b385-472c-965f-77838096d48f-kube-api-access-srchp\") pod \"community-operators-cwtbq\" (UID: \"346332d7-b385-472c-965f-77838096d48f\") " pod="openshift-marketplace/community-operators-cwtbq" Jan 31 08:11:46 crc kubenswrapper[4826]: I0131 08:11:46.725337 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/346332d7-b385-472c-965f-77838096d48f-catalog-content\") pod \"community-operators-cwtbq\" (UID: \"346332d7-b385-472c-965f-77838096d48f\") " pod="openshift-marketplace/community-operators-cwtbq" Jan 31 08:11:46 crc kubenswrapper[4826]: I0131 08:11:46.725393 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srchp\" (UniqueName: \"kubernetes.io/projected/346332d7-b385-472c-965f-77838096d48f-kube-api-access-srchp\") pod \"community-operators-cwtbq\" (UID: \"346332d7-b385-472c-965f-77838096d48f\") " pod="openshift-marketplace/community-operators-cwtbq" Jan 31 08:11:46 crc kubenswrapper[4826]: I0131 08:11:46.725433 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/346332d7-b385-472c-965f-77838096d48f-utilities\") pod \"community-operators-cwtbq\" (UID: \"346332d7-b385-472c-965f-77838096d48f\") " pod="openshift-marketplace/community-operators-cwtbq" Jan 31 08:11:46 crc kubenswrapper[4826]: I0131 08:11:46.726135 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/346332d7-b385-472c-965f-77838096d48f-catalog-content\") pod \"community-operators-cwtbq\" (UID: \"346332d7-b385-472c-965f-77838096d48f\") " pod="openshift-marketplace/community-operators-cwtbq" Jan 31 08:11:46 crc kubenswrapper[4826]: I0131 08:11:46.726194 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/346332d7-b385-472c-965f-77838096d48f-utilities\") pod \"community-operators-cwtbq\" (UID: \"346332d7-b385-472c-965f-77838096d48f\") " pod="openshift-marketplace/community-operators-cwtbq" Jan 31 08:11:46 crc kubenswrapper[4826]: I0131 08:11:46.751232 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srchp\" (UniqueName: \"kubernetes.io/projected/346332d7-b385-472c-965f-77838096d48f-kube-api-access-srchp\") pod \"community-operators-cwtbq\" (UID: \"346332d7-b385-472c-965f-77838096d48f\") " pod="openshift-marketplace/community-operators-cwtbq" Jan 31 08:11:46 crc kubenswrapper[4826]: I0131 08:11:46.935823 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cwtbq" Jan 31 08:11:47 crc kubenswrapper[4826]: I0131 08:11:47.248470 4826 generic.go:334] "Generic (PLEG): container finished" podID="50c05a80-be37-4c98-964a-7503a3a430a2" containerID="0f2d371316b2afd40637efc987d7cdb18b38fe247b57de3aea732154dc623758" exitCode=0 Jan 31 08:11:47 crc kubenswrapper[4826]: I0131 08:11:47.248533 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-95kt6" event={"ID":"50c05a80-be37-4c98-964a-7503a3a430a2","Type":"ContainerDied","Data":"0f2d371316b2afd40637efc987d7cdb18b38fe247b57de3aea732154dc623758"} Jan 31 08:11:47 crc kubenswrapper[4826]: I0131 08:11:47.462601 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cwtbq"] Jan 31 08:11:48 crc kubenswrapper[4826]: I0131 08:11:48.258519 4826 generic.go:334] "Generic (PLEG): container finished" podID="346332d7-b385-472c-965f-77838096d48f" containerID="221498c69198b2158082addeb49164af21329439ec1b7044ae28ab6d15c0acb1" exitCode=0 Jan 31 08:11:48 crc kubenswrapper[4826]: I0131 08:11:48.258585 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cwtbq" event={"ID":"346332d7-b385-472c-965f-77838096d48f","Type":"ContainerDied","Data":"221498c69198b2158082addeb49164af21329439ec1b7044ae28ab6d15c0acb1"} Jan 31 08:11:48 crc kubenswrapper[4826]: I0131 08:11:48.258887 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cwtbq" event={"ID":"346332d7-b385-472c-965f-77838096d48f","Type":"ContainerStarted","Data":"36fcf7962065e26836623ba3ae59f2410e689bcb983c104de5c68408959c92eb"} Jan 31 08:11:48 crc kubenswrapper[4826]: I0131 08:11:48.711297 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-95kt6" Jan 31 08:11:48 crc kubenswrapper[4826]: I0131 08:11:48.865856 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50c05a80-be37-4c98-964a-7503a3a430a2-inventory\") pod \"50c05a80-be37-4c98-964a-7503a3a430a2\" (UID: \"50c05a80-be37-4c98-964a-7503a3a430a2\") " Jan 31 08:11:48 crc kubenswrapper[4826]: I0131 08:11:48.866000 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/50c05a80-be37-4c98-964a-7503a3a430a2-ssh-key-openstack-edpm-ipam\") pod \"50c05a80-be37-4c98-964a-7503a3a430a2\" (UID: \"50c05a80-be37-4c98-964a-7503a3a430a2\") " Jan 31 08:11:48 crc kubenswrapper[4826]: I0131 08:11:48.866027 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgh7c\" (UniqueName: \"kubernetes.io/projected/50c05a80-be37-4c98-964a-7503a3a430a2-kube-api-access-cgh7c\") pod \"50c05a80-be37-4c98-964a-7503a3a430a2\" (UID: \"50c05a80-be37-4c98-964a-7503a3a430a2\") " Jan 31 08:11:48 crc kubenswrapper[4826]: I0131 08:11:48.866116 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/50c05a80-be37-4c98-964a-7503a3a430a2-ceph\") pod \"50c05a80-be37-4c98-964a-7503a3a430a2\" (UID: \"50c05a80-be37-4c98-964a-7503a3a430a2\") " Jan 31 08:11:48 crc kubenswrapper[4826]: I0131 08:11:48.866162 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50c05a80-be37-4c98-964a-7503a3a430a2-bootstrap-combined-ca-bundle\") pod \"50c05a80-be37-4c98-964a-7503a3a430a2\" (UID: \"50c05a80-be37-4c98-964a-7503a3a430a2\") " Jan 31 08:11:48 crc kubenswrapper[4826]: I0131 08:11:48.873376 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50c05a80-be37-4c98-964a-7503a3a430a2-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "50c05a80-be37-4c98-964a-7503a3a430a2" (UID: "50c05a80-be37-4c98-964a-7503a3a430a2"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:11:48 crc kubenswrapper[4826]: I0131 08:11:48.873572 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50c05a80-be37-4c98-964a-7503a3a430a2-ceph" (OuterVolumeSpecName: "ceph") pod "50c05a80-be37-4c98-964a-7503a3a430a2" (UID: "50c05a80-be37-4c98-964a-7503a3a430a2"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:11:48 crc kubenswrapper[4826]: I0131 08:11:48.879253 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50c05a80-be37-4c98-964a-7503a3a430a2-kube-api-access-cgh7c" (OuterVolumeSpecName: "kube-api-access-cgh7c") pod "50c05a80-be37-4c98-964a-7503a3a430a2" (UID: "50c05a80-be37-4c98-964a-7503a3a430a2"). InnerVolumeSpecName "kube-api-access-cgh7c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:11:48 crc kubenswrapper[4826]: I0131 08:11:48.892509 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50c05a80-be37-4c98-964a-7503a3a430a2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "50c05a80-be37-4c98-964a-7503a3a430a2" (UID: "50c05a80-be37-4c98-964a-7503a3a430a2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:11:48 crc kubenswrapper[4826]: I0131 08:11:48.893875 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50c05a80-be37-4c98-964a-7503a3a430a2-inventory" (OuterVolumeSpecName: "inventory") pod "50c05a80-be37-4c98-964a-7503a3a430a2" (UID: "50c05a80-be37-4c98-964a-7503a3a430a2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:11:48 crc kubenswrapper[4826]: I0131 08:11:48.970660 4826 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/50c05a80-be37-4c98-964a-7503a3a430a2-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 08:11:48 crc kubenswrapper[4826]: I0131 08:11:48.970701 4826 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50c05a80-be37-4c98-964a-7503a3a430a2-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 08:11:48 crc kubenswrapper[4826]: I0131 08:11:48.970714 4826 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50c05a80-be37-4c98-964a-7503a3a430a2-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 08:11:48 crc kubenswrapper[4826]: I0131 08:11:48.970724 4826 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/50c05a80-be37-4c98-964a-7503a3a430a2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 08:11:48 crc kubenswrapper[4826]: I0131 08:11:48.970734 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgh7c\" (UniqueName: \"kubernetes.io/projected/50c05a80-be37-4c98-964a-7503a3a430a2-kube-api-access-cgh7c\") on node \"crc\" DevicePath \"\"" Jan 31 08:11:49 crc kubenswrapper[4826]: I0131 08:11:49.268676 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-95kt6" event={"ID":"50c05a80-be37-4c98-964a-7503a3a430a2","Type":"ContainerDied","Data":"a8c9a27c451050897af7b014ed3b27d56806804fe2a54a94a8cf94f270f73d72"} Jan 31 08:11:49 crc kubenswrapper[4826]: I0131 08:11:49.269021 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8c9a27c451050897af7b014ed3b27d56806804fe2a54a94a8cf94f270f73d72" Jan 31 08:11:49 crc kubenswrapper[4826]: I0131 08:11:49.268738 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-95kt6" Jan 31 08:11:49 crc kubenswrapper[4826]: I0131 08:11:49.357258 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n6vc5"] Jan 31 08:11:49 crc kubenswrapper[4826]: E0131 08:11:49.357742 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50c05a80-be37-4c98-964a-7503a3a430a2" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 31 08:11:49 crc kubenswrapper[4826]: I0131 08:11:49.357771 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="50c05a80-be37-4c98-964a-7503a3a430a2" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 31 08:11:49 crc kubenswrapper[4826]: I0131 08:11:49.358487 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="50c05a80-be37-4c98-964a-7503a3a430a2" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 31 08:11:49 crc kubenswrapper[4826]: I0131 08:11:49.359654 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n6vc5" Jan 31 08:11:49 crc kubenswrapper[4826]: I0131 08:11:49.363121 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 08:11:49 crc kubenswrapper[4826]: I0131 08:11:49.363139 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 08:11:49 crc kubenswrapper[4826]: I0131 08:11:49.364389 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hbmks" Jan 31 08:11:49 crc kubenswrapper[4826]: I0131 08:11:49.364461 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 08:11:49 crc kubenswrapper[4826]: I0131 08:11:49.364501 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 08:11:49 crc kubenswrapper[4826]: I0131 08:11:49.368768 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n6vc5"] Jan 31 08:11:49 crc kubenswrapper[4826]: I0131 08:11:49.509347 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3cb4eff5-cd38-4f4f-8450-6a8483f52276-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-n6vc5\" (UID: \"3cb4eff5-cd38-4f4f-8450-6a8483f52276\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n6vc5" Jan 31 08:11:49 crc kubenswrapper[4826]: I0131 08:11:49.509507 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g4mt\" (UniqueName: \"kubernetes.io/projected/3cb4eff5-cd38-4f4f-8450-6a8483f52276-kube-api-access-6g4mt\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-n6vc5\" (UID: \"3cb4eff5-cd38-4f4f-8450-6a8483f52276\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n6vc5" Jan 31 08:11:49 crc kubenswrapper[4826]: I0131 08:11:49.509556 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3cb4eff5-cd38-4f4f-8450-6a8483f52276-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-n6vc5\" (UID: 
\"3cb4eff5-cd38-4f4f-8450-6a8483f52276\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n6vc5" Jan 31 08:11:49 crc kubenswrapper[4826]: I0131 08:11:49.509591 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3cb4eff5-cd38-4f4f-8450-6a8483f52276-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-n6vc5\" (UID: \"3cb4eff5-cd38-4f4f-8450-6a8483f52276\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n6vc5" Jan 31 08:11:49 crc kubenswrapper[4826]: I0131 08:11:49.611552 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6g4mt\" (UniqueName: \"kubernetes.io/projected/3cb4eff5-cd38-4f4f-8450-6a8483f52276-kube-api-access-6g4mt\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-n6vc5\" (UID: \"3cb4eff5-cd38-4f4f-8450-6a8483f52276\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n6vc5" Jan 31 08:11:49 crc kubenswrapper[4826]: I0131 08:11:49.611644 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3cb4eff5-cd38-4f4f-8450-6a8483f52276-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-n6vc5\" (UID: \"3cb4eff5-cd38-4f4f-8450-6a8483f52276\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n6vc5" Jan 31 08:11:49 crc kubenswrapper[4826]: I0131 08:11:49.611769 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3cb4eff5-cd38-4f4f-8450-6a8483f52276-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-n6vc5\" (UID: \"3cb4eff5-cd38-4f4f-8450-6a8483f52276\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n6vc5" Jan 31 08:11:49 crc kubenswrapper[4826]: I0131 08:11:49.612342 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3cb4eff5-cd38-4f4f-8450-6a8483f52276-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-n6vc5\" (UID: \"3cb4eff5-cd38-4f4f-8450-6a8483f52276\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n6vc5" Jan 31 08:11:49 crc kubenswrapper[4826]: I0131 08:11:49.618745 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3cb4eff5-cd38-4f4f-8450-6a8483f52276-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-n6vc5\" (UID: \"3cb4eff5-cd38-4f4f-8450-6a8483f52276\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n6vc5" Jan 31 08:11:49 crc kubenswrapper[4826]: I0131 08:11:49.618782 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3cb4eff5-cd38-4f4f-8450-6a8483f52276-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-n6vc5\" (UID: \"3cb4eff5-cd38-4f4f-8450-6a8483f52276\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n6vc5" Jan 31 08:11:49 crc kubenswrapper[4826]: I0131 08:11:49.624846 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3cb4eff5-cd38-4f4f-8450-6a8483f52276-ceph\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-n6vc5\" (UID: \"3cb4eff5-cd38-4f4f-8450-6a8483f52276\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n6vc5" Jan 31 08:11:49 crc kubenswrapper[4826]: I0131 08:11:49.628895 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6g4mt\" (UniqueName: \"kubernetes.io/projected/3cb4eff5-cd38-4f4f-8450-6a8483f52276-kube-api-access-6g4mt\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-n6vc5\" (UID: \"3cb4eff5-cd38-4f4f-8450-6a8483f52276\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n6vc5" Jan 31 08:11:49 crc kubenswrapper[4826]: I0131 08:11:49.721579 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n6vc5" Jan 31 08:11:50 crc kubenswrapper[4826]: I0131 08:11:50.270057 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n6vc5"] Jan 31 08:11:50 crc kubenswrapper[4826]: W0131 08:11:50.279193 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3cb4eff5_cd38_4f4f_8450_6a8483f52276.slice/crio-65b0a9be4bce327cb855b464fb699e75effb17ebc46ae8828d1ff01aeeda4bf8 WatchSource:0}: Error finding container 65b0a9be4bce327cb855b464fb699e75effb17ebc46ae8828d1ff01aeeda4bf8: Status 404 returned error can't find the container with id 65b0a9be4bce327cb855b464fb699e75effb17ebc46ae8828d1ff01aeeda4bf8 Jan 31 08:11:50 crc kubenswrapper[4826]: I0131 08:11:50.284298 4826 generic.go:334] "Generic (PLEG): container finished" podID="346332d7-b385-472c-965f-77838096d48f" containerID="d0343b8644dd84447a1662bdcf11324af1ca4e0e34c4d04ea93ca52d39065784" exitCode=0 Jan 31 08:11:50 crc kubenswrapper[4826]: I0131 08:11:50.284433 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cwtbq" event={"ID":"346332d7-b385-472c-965f-77838096d48f","Type":"ContainerDied","Data":"d0343b8644dd84447a1662bdcf11324af1ca4e0e34c4d04ea93ca52d39065784"} Jan 31 08:11:51 crc kubenswrapper[4826]: I0131 08:11:51.294934 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n6vc5" event={"ID":"3cb4eff5-cd38-4f4f-8450-6a8483f52276","Type":"ContainerStarted","Data":"b5de17649106ba422b5073361a812e7cb5e619c8d88952e532df8c6647099369"} Jan 31 08:11:51 crc kubenswrapper[4826]: I0131 08:11:51.295280 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n6vc5" event={"ID":"3cb4eff5-cd38-4f4f-8450-6a8483f52276","Type":"ContainerStarted","Data":"65b0a9be4bce327cb855b464fb699e75effb17ebc46ae8828d1ff01aeeda4bf8"} Jan 31 08:11:51 crc kubenswrapper[4826]: I0131 08:11:51.297770 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cwtbq" event={"ID":"346332d7-b385-472c-965f-77838096d48f","Type":"ContainerStarted","Data":"e4fd51232caefabf7a336a7562849dec2128d1558cf3f55fcc42d9439a044470"} Jan 31 08:11:51 crc kubenswrapper[4826]: I0131 08:11:51.350754 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cwtbq" podStartSLOduration=2.943727176 podStartE2EDuration="5.350735632s" podCreationTimestamp="2026-01-31 08:11:46 +0000 UTC" firstStartedPulling="2026-01-31 08:11:48.26945148 
+0000 UTC m=+2140.123337839" lastFinishedPulling="2026-01-31 08:11:50.676459936 +0000 UTC m=+2142.530346295" observedRunningTime="2026-01-31 08:11:51.342417886 +0000 UTC m=+2143.196304265" watchObservedRunningTime="2026-01-31 08:11:51.350735632 +0000 UTC m=+2143.204621991" Jan 31 08:11:51 crc kubenswrapper[4826]: I0131 08:11:51.351181 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n6vc5" podStartSLOduration=1.882201343 podStartE2EDuration="2.351174225s" podCreationTimestamp="2026-01-31 08:11:49 +0000 UTC" firstStartedPulling="2026-01-31 08:11:50.281345312 +0000 UTC m=+2142.135231661" lastFinishedPulling="2026-01-31 08:11:50.750318184 +0000 UTC m=+2142.604204543" observedRunningTime="2026-01-31 08:11:51.323185969 +0000 UTC m=+2143.177072348" watchObservedRunningTime="2026-01-31 08:11:51.351174225 +0000 UTC m=+2143.205060584" Jan 31 08:11:56 crc kubenswrapper[4826]: I0131 08:11:56.936429 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cwtbq" Jan 31 08:11:56 crc kubenswrapper[4826]: I0131 08:11:56.937105 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cwtbq" Jan 31 08:11:56 crc kubenswrapper[4826]: I0131 08:11:56.979534 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cwtbq" Jan 31 08:11:57 crc kubenswrapper[4826]: I0131 08:11:57.394648 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cwtbq" Jan 31 08:11:57 crc kubenswrapper[4826]: I0131 08:11:57.458618 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cwtbq"] Jan 31 08:11:59 crc kubenswrapper[4826]: I0131 08:11:59.362995 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cwtbq" podUID="346332d7-b385-472c-965f-77838096d48f" containerName="registry-server" containerID="cri-o://e4fd51232caefabf7a336a7562849dec2128d1558cf3f55fcc42d9439a044470" gracePeriod=2 Jan 31 08:12:00 crc kubenswrapper[4826]: I0131 08:12:00.355351 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cwtbq" Jan 31 08:12:00 crc kubenswrapper[4826]: I0131 08:12:00.371336 4826 generic.go:334] "Generic (PLEG): container finished" podID="346332d7-b385-472c-965f-77838096d48f" containerID="e4fd51232caefabf7a336a7562849dec2128d1558cf3f55fcc42d9439a044470" exitCode=0 Jan 31 08:12:00 crc kubenswrapper[4826]: I0131 08:12:00.371385 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cwtbq" event={"ID":"346332d7-b385-472c-965f-77838096d48f","Type":"ContainerDied","Data":"e4fd51232caefabf7a336a7562849dec2128d1558cf3f55fcc42d9439a044470"} Jan 31 08:12:00 crc kubenswrapper[4826]: I0131 08:12:00.371414 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cwtbq" event={"ID":"346332d7-b385-472c-965f-77838096d48f","Type":"ContainerDied","Data":"36fcf7962065e26836623ba3ae59f2410e689bcb983c104de5c68408959c92eb"} Jan 31 08:12:00 crc kubenswrapper[4826]: I0131 08:12:00.371432 4826 scope.go:117] "RemoveContainer" containerID="e4fd51232caefabf7a336a7562849dec2128d1558cf3f55fcc42d9439a044470" Jan 31 08:12:00 crc kubenswrapper[4826]: I0131 08:12:00.371433 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cwtbq" Jan 31 08:12:00 crc kubenswrapper[4826]: I0131 08:12:00.413910 4826 scope.go:117] "RemoveContainer" containerID="d0343b8644dd84447a1662bdcf11324af1ca4e0e34c4d04ea93ca52d39065784" Jan 31 08:12:00 crc kubenswrapper[4826]: I0131 08:12:00.424585 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/346332d7-b385-472c-965f-77838096d48f-utilities\") pod \"346332d7-b385-472c-965f-77838096d48f\" (UID: \"346332d7-b385-472c-965f-77838096d48f\") " Jan 31 08:12:00 crc kubenswrapper[4826]: I0131 08:12:00.424694 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srchp\" (UniqueName: \"kubernetes.io/projected/346332d7-b385-472c-965f-77838096d48f-kube-api-access-srchp\") pod \"346332d7-b385-472c-965f-77838096d48f\" (UID: \"346332d7-b385-472c-965f-77838096d48f\") " Jan 31 08:12:00 crc kubenswrapper[4826]: I0131 08:12:00.424821 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/346332d7-b385-472c-965f-77838096d48f-catalog-content\") pod \"346332d7-b385-472c-965f-77838096d48f\" (UID: \"346332d7-b385-472c-965f-77838096d48f\") " Jan 31 08:12:00 crc kubenswrapper[4826]: I0131 08:12:00.425723 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/346332d7-b385-472c-965f-77838096d48f-utilities" (OuterVolumeSpecName: "utilities") pod "346332d7-b385-472c-965f-77838096d48f" (UID: "346332d7-b385-472c-965f-77838096d48f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:12:00 crc kubenswrapper[4826]: I0131 08:12:00.445248 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/346332d7-b385-472c-965f-77838096d48f-kube-api-access-srchp" (OuterVolumeSpecName: "kube-api-access-srchp") pod "346332d7-b385-472c-965f-77838096d48f" (UID: "346332d7-b385-472c-965f-77838096d48f"). InnerVolumeSpecName "kube-api-access-srchp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:12:00 crc kubenswrapper[4826]: I0131 08:12:00.502126 4826 scope.go:117] "RemoveContainer" containerID="221498c69198b2158082addeb49164af21329439ec1b7044ae28ab6d15c0acb1" Jan 31 08:12:00 crc kubenswrapper[4826]: I0131 08:12:00.529302 4826 scope.go:117] "RemoveContainer" containerID="e4fd51232caefabf7a336a7562849dec2128d1558cf3f55fcc42d9439a044470" Jan 31 08:12:00 crc kubenswrapper[4826]: E0131 08:12:00.530525 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4fd51232caefabf7a336a7562849dec2128d1558cf3f55fcc42d9439a044470\": container with ID starting with e4fd51232caefabf7a336a7562849dec2128d1558cf3f55fcc42d9439a044470 not found: ID does not exist" containerID="e4fd51232caefabf7a336a7562849dec2128d1558cf3f55fcc42d9439a044470" Jan 31 08:12:00 crc kubenswrapper[4826]: I0131 08:12:00.530596 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4fd51232caefabf7a336a7562849dec2128d1558cf3f55fcc42d9439a044470"} err="failed to get container status \"e4fd51232caefabf7a336a7562849dec2128d1558cf3f55fcc42d9439a044470\": rpc error: code = NotFound desc = could not find container \"e4fd51232caefabf7a336a7562849dec2128d1558cf3f55fcc42d9439a044470\": container with ID starting with e4fd51232caefabf7a336a7562849dec2128d1558cf3f55fcc42d9439a044470 not found: ID does not exist" Jan 31 08:12:00 crc kubenswrapper[4826]: I0131 08:12:00.530623 4826 scope.go:117] "RemoveContainer" containerID="d0343b8644dd84447a1662bdcf11324af1ca4e0e34c4d04ea93ca52d39065784" Jan 31 08:12:00 crc kubenswrapper[4826]: E0131 08:12:00.530869 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0343b8644dd84447a1662bdcf11324af1ca4e0e34c4d04ea93ca52d39065784\": container with ID starting with d0343b8644dd84447a1662bdcf11324af1ca4e0e34c4d04ea93ca52d39065784 not found: ID does not exist" containerID="d0343b8644dd84447a1662bdcf11324af1ca4e0e34c4d04ea93ca52d39065784" Jan 31 08:12:00 crc kubenswrapper[4826]: I0131 08:12:00.530894 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0343b8644dd84447a1662bdcf11324af1ca4e0e34c4d04ea93ca52d39065784"} err="failed to get container status \"d0343b8644dd84447a1662bdcf11324af1ca4e0e34c4d04ea93ca52d39065784\": rpc error: code = NotFound desc = could not find container \"d0343b8644dd84447a1662bdcf11324af1ca4e0e34c4d04ea93ca52d39065784\": container with ID starting with d0343b8644dd84447a1662bdcf11324af1ca4e0e34c4d04ea93ca52d39065784 not found: ID does not exist" Jan 31 08:12:00 crc kubenswrapper[4826]: I0131 08:12:00.530911 4826 scope.go:117] "RemoveContainer" containerID="221498c69198b2158082addeb49164af21329439ec1b7044ae28ab6d15c0acb1" Jan 31 08:12:00 crc kubenswrapper[4826]: E0131 08:12:00.531133 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"221498c69198b2158082addeb49164af21329439ec1b7044ae28ab6d15c0acb1\": container with ID starting with 221498c69198b2158082addeb49164af21329439ec1b7044ae28ab6d15c0acb1 not found: ID does not exist" containerID="221498c69198b2158082addeb49164af21329439ec1b7044ae28ab6d15c0acb1" Jan 31 08:12:00 crc kubenswrapper[4826]: I0131 08:12:00.531163 4826 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"221498c69198b2158082addeb49164af21329439ec1b7044ae28ab6d15c0acb1"} err="failed to get container status \"221498c69198b2158082addeb49164af21329439ec1b7044ae28ab6d15c0acb1\": rpc error: code = NotFound desc = could not find container \"221498c69198b2158082addeb49164af21329439ec1b7044ae28ab6d15c0acb1\": container with ID starting with 221498c69198b2158082addeb49164af21329439ec1b7044ae28ab6d15c0acb1 not found: ID does not exist" Jan 31 08:12:00 crc kubenswrapper[4826]: I0131 08:12:00.531641 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/346332d7-b385-472c-965f-77838096d48f-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 08:12:00 crc kubenswrapper[4826]: I0131 08:12:00.531669 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srchp\" (UniqueName: \"kubernetes.io/projected/346332d7-b385-472c-965f-77838096d48f-kube-api-access-srchp\") on node \"crc\" DevicePath \"\"" Jan 31 08:12:01 crc kubenswrapper[4826]: I0131 08:12:01.598364 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/346332d7-b385-472c-965f-77838096d48f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "346332d7-b385-472c-965f-77838096d48f" (UID: "346332d7-b385-472c-965f-77838096d48f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:12:01 crc kubenswrapper[4826]: I0131 08:12:01.653888 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/346332d7-b385-472c-965f-77838096d48f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 08:12:01 crc kubenswrapper[4826]: I0131 08:12:01.915450 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cwtbq"] Jan 31 08:12:01 crc kubenswrapper[4826]: I0131 08:12:01.926472 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cwtbq"] Jan 31 08:12:02 crc kubenswrapper[4826]: I0131 08:12:02.820301 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="346332d7-b385-472c-965f-77838096d48f" path="/var/lib/kubelet/pods/346332d7-b385-472c-965f-77838096d48f/volumes" Jan 31 08:12:15 crc kubenswrapper[4826]: I0131 08:12:15.526248 4826 generic.go:334] "Generic (PLEG): container finished" podID="3cb4eff5-cd38-4f4f-8450-6a8483f52276" containerID="b5de17649106ba422b5073361a812e7cb5e619c8d88952e532df8c6647099369" exitCode=0 Jan 31 08:12:15 crc kubenswrapper[4826]: I0131 08:12:15.526353 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n6vc5" event={"ID":"3cb4eff5-cd38-4f4f-8450-6a8483f52276","Type":"ContainerDied","Data":"b5de17649106ba422b5073361a812e7cb5e619c8d88952e532df8c6647099369"} Jan 31 08:12:16 crc kubenswrapper[4826]: I0131 08:12:16.983033 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n6vc5" Jan 31 08:12:17 crc kubenswrapper[4826]: I0131 08:12:17.143692 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3cb4eff5-cd38-4f4f-8450-6a8483f52276-ssh-key-openstack-edpm-ipam\") pod \"3cb4eff5-cd38-4f4f-8450-6a8483f52276\" (UID: \"3cb4eff5-cd38-4f4f-8450-6a8483f52276\") " Jan 31 08:12:17 crc kubenswrapper[4826]: I0131 08:12:17.143897 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3cb4eff5-cd38-4f4f-8450-6a8483f52276-ceph\") pod \"3cb4eff5-cd38-4f4f-8450-6a8483f52276\" (UID: \"3cb4eff5-cd38-4f4f-8450-6a8483f52276\") " Jan 31 08:12:17 crc kubenswrapper[4826]: I0131 08:12:17.144064 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4mt\" (UniqueName: \"kubernetes.io/projected/3cb4eff5-cd38-4f4f-8450-6a8483f52276-kube-api-access-6g4mt\") pod \"3cb4eff5-cd38-4f4f-8450-6a8483f52276\" (UID: \"3cb4eff5-cd38-4f4f-8450-6a8483f52276\") " Jan 31 08:12:17 crc kubenswrapper[4826]: I0131 08:12:17.144874 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3cb4eff5-cd38-4f4f-8450-6a8483f52276-inventory\") pod \"3cb4eff5-cd38-4f4f-8450-6a8483f52276\" (UID: \"3cb4eff5-cd38-4f4f-8450-6a8483f52276\") " Jan 31 08:12:17 crc kubenswrapper[4826]: I0131 08:12:17.151015 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb4eff5-cd38-4f4f-8450-6a8483f52276-kube-api-access-6g4mt" (OuterVolumeSpecName: "kube-api-access-6g4mt") pod "3cb4eff5-cd38-4f4f-8450-6a8483f52276" (UID: "3cb4eff5-cd38-4f4f-8450-6a8483f52276"). InnerVolumeSpecName "kube-api-access-6g4mt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:12:17 crc kubenswrapper[4826]: I0131 08:12:17.153191 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cb4eff5-cd38-4f4f-8450-6a8483f52276-ceph" (OuterVolumeSpecName: "ceph") pod "3cb4eff5-cd38-4f4f-8450-6a8483f52276" (UID: "3cb4eff5-cd38-4f4f-8450-6a8483f52276"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:12:17 crc kubenswrapper[4826]: I0131 08:12:17.172949 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cb4eff5-cd38-4f4f-8450-6a8483f52276-inventory" (OuterVolumeSpecName: "inventory") pod "3cb4eff5-cd38-4f4f-8450-6a8483f52276" (UID: "3cb4eff5-cd38-4f4f-8450-6a8483f52276"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:12:17 crc kubenswrapper[4826]: I0131 08:12:17.173234 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cb4eff5-cd38-4f4f-8450-6a8483f52276-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3cb4eff5-cd38-4f4f-8450-6a8483f52276" (UID: "3cb4eff5-cd38-4f4f-8450-6a8483f52276"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:12:17 crc kubenswrapper[4826]: I0131 08:12:17.247509 4826 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3cb4eff5-cd38-4f4f-8450-6a8483f52276-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 08:12:17 crc kubenswrapper[4826]: I0131 08:12:17.247544 4826 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3cb4eff5-cd38-4f4f-8450-6a8483f52276-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 08:12:17 crc kubenswrapper[4826]: I0131 08:12:17.247555 4826 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3cb4eff5-cd38-4f4f-8450-6a8483f52276-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 08:12:17 crc kubenswrapper[4826]: I0131 08:12:17.247567 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g4mt\" (UniqueName: \"kubernetes.io/projected/3cb4eff5-cd38-4f4f-8450-6a8483f52276-kube-api-access-6g4mt\") on node \"crc\" DevicePath \"\"" Jan 31 08:12:17 crc kubenswrapper[4826]: I0131 08:12:17.552193 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n6vc5" event={"ID":"3cb4eff5-cd38-4f4f-8450-6a8483f52276","Type":"ContainerDied","Data":"65b0a9be4bce327cb855b464fb699e75effb17ebc46ae8828d1ff01aeeda4bf8"} Jan 31 08:12:17 crc kubenswrapper[4826]: I0131 08:12:17.552495 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65b0a9be4bce327cb855b464fb699e75effb17ebc46ae8828d1ff01aeeda4bf8" Jan 31 08:12:17 crc kubenswrapper[4826]: I0131 08:12:17.552269 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n6vc5" Jan 31 08:12:17 crc kubenswrapper[4826]: I0131 08:12:17.625868 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-prgwg"] Jan 31 08:12:17 crc kubenswrapper[4826]: E0131 08:12:17.626277 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="346332d7-b385-472c-965f-77838096d48f" containerName="extract-content" Jan 31 08:12:17 crc kubenswrapper[4826]: I0131 08:12:17.626298 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="346332d7-b385-472c-965f-77838096d48f" containerName="extract-content" Jan 31 08:12:17 crc kubenswrapper[4826]: E0131 08:12:17.626328 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="346332d7-b385-472c-965f-77838096d48f" containerName="extract-utilities" Jan 31 08:12:17 crc kubenswrapper[4826]: I0131 08:12:17.626340 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="346332d7-b385-472c-965f-77838096d48f" containerName="extract-utilities" Jan 31 08:12:17 crc kubenswrapper[4826]: E0131 08:12:17.626362 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cb4eff5-cd38-4f4f-8450-6a8483f52276" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 31 08:12:17 crc kubenswrapper[4826]: I0131 08:12:17.626376 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cb4eff5-cd38-4f4f-8450-6a8483f52276" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 31 08:12:17 crc kubenswrapper[4826]: E0131 08:12:17.626399 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="346332d7-b385-472c-965f-77838096d48f" containerName="registry-server" Jan 31 08:12:17 crc kubenswrapper[4826]: I0131 08:12:17.626409 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="346332d7-b385-472c-965f-77838096d48f" containerName="registry-server" Jan 31 08:12:17 crc kubenswrapper[4826]: I0131 08:12:17.626600 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cb4eff5-cd38-4f4f-8450-6a8483f52276" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 31 08:12:17 crc kubenswrapper[4826]: I0131 08:12:17.626649 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="346332d7-b385-472c-965f-77838096d48f" containerName="registry-server" Jan 31 08:12:17 crc kubenswrapper[4826]: I0131 08:12:17.627260 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-prgwg" Jan 31 08:12:17 crc kubenswrapper[4826]: I0131 08:12:17.629889 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 08:12:17 crc kubenswrapper[4826]: I0131 08:12:17.630170 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hbmks" Jan 31 08:12:17 crc kubenswrapper[4826]: I0131 08:12:17.630730 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 08:12:17 crc kubenswrapper[4826]: I0131 08:12:17.631139 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 08:12:17 crc kubenswrapper[4826]: I0131 08:12:17.636176 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 08:12:17 crc kubenswrapper[4826]: I0131 08:12:17.641223 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-prgwg"] Jan 31 08:12:17 crc kubenswrapper[4826]: I0131 08:12:17.657306 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8gkd\" (UniqueName: \"kubernetes.io/projected/e2555a90-e246-4467-a217-abd3841e3441-kube-api-access-p8gkd\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-prgwg\" (UID: \"e2555a90-e246-4467-a217-abd3841e3441\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-prgwg" Jan 31 08:12:17 crc kubenswrapper[4826]: I0131 08:12:17.657385 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e2555a90-e246-4467-a217-abd3841e3441-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-prgwg\" (UID: \"e2555a90-e246-4467-a217-abd3841e3441\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-prgwg" Jan 31 08:12:17 crc kubenswrapper[4826]: I0131 08:12:17.657516 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e2555a90-e246-4467-a217-abd3841e3441-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-prgwg\" (UID: \"e2555a90-e246-4467-a217-abd3841e3441\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-prgwg" Jan 31 08:12:17 crc kubenswrapper[4826]: I0131 08:12:17.657592 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e2555a90-e246-4467-a217-abd3841e3441-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-prgwg\" (UID: \"e2555a90-e246-4467-a217-abd3841e3441\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-prgwg" Jan 31 08:12:17 crc kubenswrapper[4826]: I0131 08:12:17.759717 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8gkd\" (UniqueName: \"kubernetes.io/projected/e2555a90-e246-4467-a217-abd3841e3441-kube-api-access-p8gkd\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-prgwg\" (UID: \"e2555a90-e246-4467-a217-abd3841e3441\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-prgwg" Jan 31 08:12:17 crc kubenswrapper[4826]: 
I0131 08:12:17.759774 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e2555a90-e246-4467-a217-abd3841e3441-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-prgwg\" (UID: \"e2555a90-e246-4467-a217-abd3841e3441\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-prgwg" Jan 31 08:12:17 crc kubenswrapper[4826]: I0131 08:12:17.759832 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e2555a90-e246-4467-a217-abd3841e3441-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-prgwg\" (UID: \"e2555a90-e246-4467-a217-abd3841e3441\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-prgwg" Jan 31 08:12:17 crc kubenswrapper[4826]: I0131 08:12:17.759886 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e2555a90-e246-4467-a217-abd3841e3441-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-prgwg\" (UID: \"e2555a90-e246-4467-a217-abd3841e3441\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-prgwg" Jan 31 08:12:17 crc kubenswrapper[4826]: I0131 08:12:17.764722 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e2555a90-e246-4467-a217-abd3841e3441-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-prgwg\" (UID: \"e2555a90-e246-4467-a217-abd3841e3441\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-prgwg" Jan 31 08:12:17 crc kubenswrapper[4826]: I0131 08:12:17.769379 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e2555a90-e246-4467-a217-abd3841e3441-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-prgwg\" (UID: \"e2555a90-e246-4467-a217-abd3841e3441\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-prgwg" Jan 31 08:12:17 crc kubenswrapper[4826]: I0131 08:12:17.770639 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e2555a90-e246-4467-a217-abd3841e3441-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-prgwg\" (UID: \"e2555a90-e246-4467-a217-abd3841e3441\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-prgwg" Jan 31 08:12:17 crc kubenswrapper[4826]: I0131 08:12:17.778356 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8gkd\" (UniqueName: \"kubernetes.io/projected/e2555a90-e246-4467-a217-abd3841e3441-kube-api-access-p8gkd\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-prgwg\" (UID: \"e2555a90-e246-4467-a217-abd3841e3441\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-prgwg" Jan 31 08:12:17 crc kubenswrapper[4826]: I0131 08:12:17.950884 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-prgwg" Jan 31 08:12:18 crc kubenswrapper[4826]: I0131 08:12:18.485655 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-prgwg"] Jan 31 08:12:18 crc kubenswrapper[4826]: I0131 08:12:18.561321 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-prgwg" event={"ID":"e2555a90-e246-4467-a217-abd3841e3441","Type":"ContainerStarted","Data":"3a219e28951620cde35e2301bad1550ebc3d61db2751bb35f6ccac057a6ab256"} Jan 31 08:12:19 crc kubenswrapper[4826]: I0131 08:12:19.570753 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-prgwg" event={"ID":"e2555a90-e246-4467-a217-abd3841e3441","Type":"ContainerStarted","Data":"5265220b899bc638d82df7fc122d265e94a90492a19e5431416a2b6a23a96064"} Jan 31 08:12:19 crc kubenswrapper[4826]: I0131 08:12:19.590926 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-prgwg" podStartSLOduration=1.981283516 podStartE2EDuration="2.590906769s" podCreationTimestamp="2026-01-31 08:12:17 +0000 UTC" firstStartedPulling="2026-01-31 08:12:18.495293008 +0000 UTC m=+2170.349179367" lastFinishedPulling="2026-01-31 08:12:19.104916261 +0000 UTC m=+2170.958802620" observedRunningTime="2026-01-31 08:12:19.587940464 +0000 UTC m=+2171.441826833" watchObservedRunningTime="2026-01-31 08:12:19.590906769 +0000 UTC m=+2171.444793128" Jan 31 08:12:24 crc kubenswrapper[4826]: I0131 08:12:24.618772 4826 generic.go:334] "Generic (PLEG): container finished" podID="e2555a90-e246-4467-a217-abd3841e3441" containerID="5265220b899bc638d82df7fc122d265e94a90492a19e5431416a2b6a23a96064" exitCode=0 Jan 31 08:12:24 crc kubenswrapper[4826]: I0131 08:12:24.618860 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-prgwg" event={"ID":"e2555a90-e246-4467-a217-abd3841e3441","Type":"ContainerDied","Data":"5265220b899bc638d82df7fc122d265e94a90492a19e5431416a2b6a23a96064"} Jan 31 08:12:26 crc kubenswrapper[4826]: I0131 08:12:26.059681 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-prgwg" Jan 31 08:12:26 crc kubenswrapper[4826]: I0131 08:12:26.130590 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e2555a90-e246-4467-a217-abd3841e3441-ssh-key-openstack-edpm-ipam\") pod \"e2555a90-e246-4467-a217-abd3841e3441\" (UID: \"e2555a90-e246-4467-a217-abd3841e3441\") " Jan 31 08:12:26 crc kubenswrapper[4826]: I0131 08:12:26.130672 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e2555a90-e246-4467-a217-abd3841e3441-inventory\") pod \"e2555a90-e246-4467-a217-abd3841e3441\" (UID: \"e2555a90-e246-4467-a217-abd3841e3441\") " Jan 31 08:12:26 crc kubenswrapper[4826]: I0131 08:12:26.130769 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8gkd\" (UniqueName: \"kubernetes.io/projected/e2555a90-e246-4467-a217-abd3841e3441-kube-api-access-p8gkd\") pod \"e2555a90-e246-4467-a217-abd3841e3441\" (UID: \"e2555a90-e246-4467-a217-abd3841e3441\") " Jan 31 08:12:26 crc kubenswrapper[4826]: I0131 08:12:26.130804 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e2555a90-e246-4467-a217-abd3841e3441-ceph\") pod \"e2555a90-e246-4467-a217-abd3841e3441\" (UID: \"e2555a90-e246-4467-a217-abd3841e3441\") " Jan 31 08:12:26 crc kubenswrapper[4826]: I0131 08:12:26.143143 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2555a90-e246-4467-a217-abd3841e3441-ceph" (OuterVolumeSpecName: "ceph") pod "e2555a90-e246-4467-a217-abd3841e3441" (UID: "e2555a90-e246-4467-a217-abd3841e3441"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:12:26 crc kubenswrapper[4826]: I0131 08:12:26.155181 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2555a90-e246-4467-a217-abd3841e3441-kube-api-access-p8gkd" (OuterVolumeSpecName: "kube-api-access-p8gkd") pod "e2555a90-e246-4467-a217-abd3841e3441" (UID: "e2555a90-e246-4467-a217-abd3841e3441"). InnerVolumeSpecName "kube-api-access-p8gkd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:12:26 crc kubenswrapper[4826]: I0131 08:12:26.215852 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2555a90-e246-4467-a217-abd3841e3441-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e2555a90-e246-4467-a217-abd3841e3441" (UID: "e2555a90-e246-4467-a217-abd3841e3441"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:12:26 crc kubenswrapper[4826]: I0131 08:12:26.218341 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2555a90-e246-4467-a217-abd3841e3441-inventory" (OuterVolumeSpecName: "inventory") pod "e2555a90-e246-4467-a217-abd3841e3441" (UID: "e2555a90-e246-4467-a217-abd3841e3441"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:12:26 crc kubenswrapper[4826]: I0131 08:12:26.233796 4826 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e2555a90-e246-4467-a217-abd3841e3441-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 08:12:26 crc kubenswrapper[4826]: I0131 08:12:26.233841 4826 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e2555a90-e246-4467-a217-abd3841e3441-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 08:12:26 crc kubenswrapper[4826]: I0131 08:12:26.233854 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8gkd\" (UniqueName: \"kubernetes.io/projected/e2555a90-e246-4467-a217-abd3841e3441-kube-api-access-p8gkd\") on node \"crc\" DevicePath \"\"" Jan 31 08:12:26 crc kubenswrapper[4826]: I0131 08:12:26.233865 4826 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e2555a90-e246-4467-a217-abd3841e3441-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 08:12:26 crc kubenswrapper[4826]: I0131 08:12:26.634713 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-prgwg" event={"ID":"e2555a90-e246-4467-a217-abd3841e3441","Type":"ContainerDied","Data":"3a219e28951620cde35e2301bad1550ebc3d61db2751bb35f6ccac057a6ab256"} Jan 31 08:12:26 crc kubenswrapper[4826]: I0131 08:12:26.634755 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a219e28951620cde35e2301bad1550ebc3d61db2751bb35f6ccac057a6ab256" Jan 31 08:12:26 crc kubenswrapper[4826]: I0131 08:12:26.634766 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-prgwg" Jan 31 08:12:26 crc kubenswrapper[4826]: I0131 08:12:26.712285 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-9zll8"] Jan 31 08:12:26 crc kubenswrapper[4826]: E0131 08:12:26.712739 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2555a90-e246-4467-a217-abd3841e3441" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 31 08:12:26 crc kubenswrapper[4826]: I0131 08:12:26.712761 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2555a90-e246-4467-a217-abd3841e3441" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 31 08:12:26 crc kubenswrapper[4826]: I0131 08:12:26.712942 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2555a90-e246-4467-a217-abd3841e3441" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 31 08:12:26 crc kubenswrapper[4826]: I0131 08:12:26.713542 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9zll8" Jan 31 08:12:26 crc kubenswrapper[4826]: I0131 08:12:26.715652 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 08:12:26 crc kubenswrapper[4826]: I0131 08:12:26.716471 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 08:12:26 crc kubenswrapper[4826]: I0131 08:12:26.716775 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 08:12:26 crc kubenswrapper[4826]: I0131 08:12:26.717309 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hbmks" Jan 31 08:12:26 crc kubenswrapper[4826]: I0131 08:12:26.717432 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 08:12:26 crc kubenswrapper[4826]: I0131 08:12:26.729383 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-9zll8"] Jan 31 08:12:26 crc kubenswrapper[4826]: I0131 08:12:26.741912 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/517832e1-875d-49f4-8e81-7aa6c6b9a7f9-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-9zll8\" (UID: \"517832e1-875d-49f4-8e81-7aa6c6b9a7f9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9zll8" Jan 31 08:12:26 crc kubenswrapper[4826]: I0131 08:12:26.742058 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x5z6\" (UniqueName: \"kubernetes.io/projected/517832e1-875d-49f4-8e81-7aa6c6b9a7f9-kube-api-access-8x5z6\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-9zll8\" (UID: \"517832e1-875d-49f4-8e81-7aa6c6b9a7f9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9zll8" Jan 31 08:12:26 crc kubenswrapper[4826]: I0131 08:12:26.742149 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/517832e1-875d-49f4-8e81-7aa6c6b9a7f9-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-9zll8\" (UID: \"517832e1-875d-49f4-8e81-7aa6c6b9a7f9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9zll8" Jan 31 08:12:26 crc kubenswrapper[4826]: I0131 08:12:26.742488 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/517832e1-875d-49f4-8e81-7aa6c6b9a7f9-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-9zll8\" (UID: \"517832e1-875d-49f4-8e81-7aa6c6b9a7f9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9zll8" Jan 31 08:12:26 crc kubenswrapper[4826]: I0131 08:12:26.844004 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/517832e1-875d-49f4-8e81-7aa6c6b9a7f9-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-9zll8\" (UID: \"517832e1-875d-49f4-8e81-7aa6c6b9a7f9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9zll8" Jan 31 08:12:26 crc kubenswrapper[4826]: I0131 08:12:26.844125 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-8x5z6\" (UniqueName: \"kubernetes.io/projected/517832e1-875d-49f4-8e81-7aa6c6b9a7f9-kube-api-access-8x5z6\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-9zll8\" (UID: \"517832e1-875d-49f4-8e81-7aa6c6b9a7f9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9zll8" Jan 31 08:12:26 crc kubenswrapper[4826]: I0131 08:12:26.844185 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/517832e1-875d-49f4-8e81-7aa6c6b9a7f9-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-9zll8\" (UID: \"517832e1-875d-49f4-8e81-7aa6c6b9a7f9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9zll8" Jan 31 08:12:26 crc kubenswrapper[4826]: I0131 08:12:26.844296 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/517832e1-875d-49f4-8e81-7aa6c6b9a7f9-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-9zll8\" (UID: \"517832e1-875d-49f4-8e81-7aa6c6b9a7f9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9zll8" Jan 31 08:12:26 crc kubenswrapper[4826]: I0131 08:12:26.849345 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/517832e1-875d-49f4-8e81-7aa6c6b9a7f9-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-9zll8\" (UID: \"517832e1-875d-49f4-8e81-7aa6c6b9a7f9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9zll8" Jan 31 08:12:26 crc kubenswrapper[4826]: I0131 08:12:26.849753 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/517832e1-875d-49f4-8e81-7aa6c6b9a7f9-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-9zll8\" (UID: \"517832e1-875d-49f4-8e81-7aa6c6b9a7f9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9zll8" Jan 31 08:12:26 crc kubenswrapper[4826]: I0131 08:12:26.850763 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/517832e1-875d-49f4-8e81-7aa6c6b9a7f9-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-9zll8\" (UID: \"517832e1-875d-49f4-8e81-7aa6c6b9a7f9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9zll8" Jan 31 08:12:26 crc kubenswrapper[4826]: I0131 08:12:26.863765 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x5z6\" (UniqueName: \"kubernetes.io/projected/517832e1-875d-49f4-8e81-7aa6c6b9a7f9-kube-api-access-8x5z6\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-9zll8\" (UID: \"517832e1-875d-49f4-8e81-7aa6c6b9a7f9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9zll8" Jan 31 08:12:27 crc kubenswrapper[4826]: I0131 08:12:27.030553 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9zll8" Jan 31 08:12:27 crc kubenswrapper[4826]: I0131 08:12:27.553481 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-9zll8"] Jan 31 08:12:27 crc kubenswrapper[4826]: I0131 08:12:27.648255 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9zll8" event={"ID":"517832e1-875d-49f4-8e81-7aa6c6b9a7f9","Type":"ContainerStarted","Data":"2ce0f416b3fd1a35a5b7a5b1e1a834b9e3e442b89b0d0dcea77f13adcb317b11"} Jan 31 08:12:28 crc kubenswrapper[4826]: I0131 08:12:28.657855 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9zll8" event={"ID":"517832e1-875d-49f4-8e81-7aa6c6b9a7f9","Type":"ContainerStarted","Data":"74583a58d3a45c8a3b30d67130638047cab5697dc0785b3385974cf8e7d76522"} Jan 31 08:12:28 crc kubenswrapper[4826]: I0131 08:12:28.687420 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9zll8" podStartSLOduration=2.265357292 podStartE2EDuration="2.687397112s" podCreationTimestamp="2026-01-31 08:12:26 +0000 UTC" firstStartedPulling="2026-01-31 08:12:27.560056599 +0000 UTC m=+2179.413942958" lastFinishedPulling="2026-01-31 08:12:27.982096419 +0000 UTC m=+2179.835982778" observedRunningTime="2026-01-31 08:12:28.674792044 +0000 UTC m=+2180.528678423" watchObservedRunningTime="2026-01-31 08:12:28.687397112 +0000 UTC m=+2180.541283481" Jan 31 08:13:01 crc kubenswrapper[4826]: I0131 08:13:01.943217 4826 generic.go:334] "Generic (PLEG): container finished" podID="517832e1-875d-49f4-8e81-7aa6c6b9a7f9" containerID="74583a58d3a45c8a3b30d67130638047cab5697dc0785b3385974cf8e7d76522" exitCode=0 Jan 31 08:13:01 crc kubenswrapper[4826]: I0131 08:13:01.943299 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9zll8" event={"ID":"517832e1-875d-49f4-8e81-7aa6c6b9a7f9","Type":"ContainerDied","Data":"74583a58d3a45c8a3b30d67130638047cab5697dc0785b3385974cf8e7d76522"} Jan 31 08:13:03 crc kubenswrapper[4826]: I0131 08:13:03.440018 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9zll8" Jan 31 08:13:03 crc kubenswrapper[4826]: I0131 08:13:03.569955 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8x5z6\" (UniqueName: \"kubernetes.io/projected/517832e1-875d-49f4-8e81-7aa6c6b9a7f9-kube-api-access-8x5z6\") pod \"517832e1-875d-49f4-8e81-7aa6c6b9a7f9\" (UID: \"517832e1-875d-49f4-8e81-7aa6c6b9a7f9\") " Jan 31 08:13:03 crc kubenswrapper[4826]: I0131 08:13:03.570347 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/517832e1-875d-49f4-8e81-7aa6c6b9a7f9-ceph\") pod \"517832e1-875d-49f4-8e81-7aa6c6b9a7f9\" (UID: \"517832e1-875d-49f4-8e81-7aa6c6b9a7f9\") " Jan 31 08:13:03 crc kubenswrapper[4826]: I0131 08:13:03.570428 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/517832e1-875d-49f4-8e81-7aa6c6b9a7f9-inventory\") pod \"517832e1-875d-49f4-8e81-7aa6c6b9a7f9\" (UID: \"517832e1-875d-49f4-8e81-7aa6c6b9a7f9\") " Jan 31 08:13:03 crc kubenswrapper[4826]: I0131 08:13:03.570569 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/517832e1-875d-49f4-8e81-7aa6c6b9a7f9-ssh-key-openstack-edpm-ipam\") pod \"517832e1-875d-49f4-8e81-7aa6c6b9a7f9\" (UID: \"517832e1-875d-49f4-8e81-7aa6c6b9a7f9\") " Jan 31 08:13:03 crc kubenswrapper[4826]: I0131 08:13:03.576671 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/517832e1-875d-49f4-8e81-7aa6c6b9a7f9-ceph" (OuterVolumeSpecName: "ceph") pod "517832e1-875d-49f4-8e81-7aa6c6b9a7f9" (UID: "517832e1-875d-49f4-8e81-7aa6c6b9a7f9"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:13:03 crc kubenswrapper[4826]: I0131 08:13:03.578139 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/517832e1-875d-49f4-8e81-7aa6c6b9a7f9-kube-api-access-8x5z6" (OuterVolumeSpecName: "kube-api-access-8x5z6") pod "517832e1-875d-49f4-8e81-7aa6c6b9a7f9" (UID: "517832e1-875d-49f4-8e81-7aa6c6b9a7f9"). InnerVolumeSpecName "kube-api-access-8x5z6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:13:03 crc kubenswrapper[4826]: I0131 08:13:03.598923 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/517832e1-875d-49f4-8e81-7aa6c6b9a7f9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "517832e1-875d-49f4-8e81-7aa6c6b9a7f9" (UID: "517832e1-875d-49f4-8e81-7aa6c6b9a7f9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:13:03 crc kubenswrapper[4826]: I0131 08:13:03.604649 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/517832e1-875d-49f4-8e81-7aa6c6b9a7f9-inventory" (OuterVolumeSpecName: "inventory") pod "517832e1-875d-49f4-8e81-7aa6c6b9a7f9" (UID: "517832e1-875d-49f4-8e81-7aa6c6b9a7f9"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:13:03 crc kubenswrapper[4826]: I0131 08:13:03.672221 4826 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/517832e1-875d-49f4-8e81-7aa6c6b9a7f9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 08:13:03 crc kubenswrapper[4826]: I0131 08:13:03.672255 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8x5z6\" (UniqueName: \"kubernetes.io/projected/517832e1-875d-49f4-8e81-7aa6c6b9a7f9-kube-api-access-8x5z6\") on node \"crc\" DevicePath \"\"" Jan 31 08:13:03 crc kubenswrapper[4826]: I0131 08:13:03.672265 4826 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/517832e1-875d-49f4-8e81-7aa6c6b9a7f9-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 08:13:03 crc kubenswrapper[4826]: I0131 08:13:03.672274 4826 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/517832e1-875d-49f4-8e81-7aa6c6b9a7f9-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 08:13:03 crc kubenswrapper[4826]: I0131 08:13:03.963086 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9zll8" event={"ID":"517832e1-875d-49f4-8e81-7aa6c6b9a7f9","Type":"ContainerDied","Data":"2ce0f416b3fd1a35a5b7a5b1e1a834b9e3e442b89b0d0dcea77f13adcb317b11"} Jan 31 08:13:03 crc kubenswrapper[4826]: I0131 08:13:03.963153 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ce0f416b3fd1a35a5b7a5b1e1a834b9e3e442b89b0d0dcea77f13adcb317b11" Jan 31 08:13:03 crc kubenswrapper[4826]: I0131 08:13:03.963454 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9zll8" Jan 31 08:13:04 crc kubenswrapper[4826]: I0131 08:13:04.052510 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jtzjp"] Jan 31 08:13:04 crc kubenswrapper[4826]: E0131 08:13:04.052916 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="517832e1-875d-49f4-8e81-7aa6c6b9a7f9" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 31 08:13:04 crc kubenswrapper[4826]: I0131 08:13:04.052932 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="517832e1-875d-49f4-8e81-7aa6c6b9a7f9" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 31 08:13:04 crc kubenswrapper[4826]: I0131 08:13:04.053129 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="517832e1-875d-49f4-8e81-7aa6c6b9a7f9" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 31 08:13:04 crc kubenswrapper[4826]: I0131 08:13:04.053710 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jtzjp" Jan 31 08:13:04 crc kubenswrapper[4826]: I0131 08:13:04.058484 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 08:13:04 crc kubenswrapper[4826]: I0131 08:13:04.058873 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hbmks" Jan 31 08:13:04 crc kubenswrapper[4826]: I0131 08:13:04.059042 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 08:13:04 crc kubenswrapper[4826]: I0131 08:13:04.059761 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 08:13:04 crc kubenswrapper[4826]: I0131 08:13:04.059894 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 08:13:04 crc kubenswrapper[4826]: I0131 08:13:04.061389 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jtzjp"] Jan 31 08:13:04 crc kubenswrapper[4826]: I0131 08:13:04.189285 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6n544\" (UniqueName: \"kubernetes.io/projected/ad853f3b-9633-4c89-baa0-0fa82a9498d7-kube-api-access-6n544\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jtzjp\" (UID: \"ad853f3b-9633-4c89-baa0-0fa82a9498d7\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jtzjp" Jan 31 08:13:04 crc kubenswrapper[4826]: I0131 08:13:04.189344 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ad853f3b-9633-4c89-baa0-0fa82a9498d7-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jtzjp\" (UID: \"ad853f3b-9633-4c89-baa0-0fa82a9498d7\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jtzjp" Jan 31 08:13:04 crc kubenswrapper[4826]: I0131 08:13:04.189710 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ad853f3b-9633-4c89-baa0-0fa82a9498d7-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jtzjp\" (UID: \"ad853f3b-9633-4c89-baa0-0fa82a9498d7\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jtzjp" Jan 31 08:13:04 crc kubenswrapper[4826]: I0131 08:13:04.189900 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad853f3b-9633-4c89-baa0-0fa82a9498d7-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jtzjp\" (UID: \"ad853f3b-9633-4c89-baa0-0fa82a9498d7\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jtzjp" Jan 31 08:13:04 crc kubenswrapper[4826]: I0131 08:13:04.291898 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ad853f3b-9633-4c89-baa0-0fa82a9498d7-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jtzjp\" (UID: \"ad853f3b-9633-4c89-baa0-0fa82a9498d7\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jtzjp" Jan 31 08:13:04 crc kubenswrapper[4826]: I0131 08:13:04.292021 4826 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad853f3b-9633-4c89-baa0-0fa82a9498d7-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jtzjp\" (UID: \"ad853f3b-9633-4c89-baa0-0fa82a9498d7\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jtzjp" Jan 31 08:13:04 crc kubenswrapper[4826]: I0131 08:13:04.292102 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6n544\" (UniqueName: \"kubernetes.io/projected/ad853f3b-9633-4c89-baa0-0fa82a9498d7-kube-api-access-6n544\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jtzjp\" (UID: \"ad853f3b-9633-4c89-baa0-0fa82a9498d7\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jtzjp" Jan 31 08:13:04 crc kubenswrapper[4826]: I0131 08:13:04.292136 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ad853f3b-9633-4c89-baa0-0fa82a9498d7-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jtzjp\" (UID: \"ad853f3b-9633-4c89-baa0-0fa82a9498d7\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jtzjp" Jan 31 08:13:04 crc kubenswrapper[4826]: I0131 08:13:04.295422 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ad853f3b-9633-4c89-baa0-0fa82a9498d7-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jtzjp\" (UID: \"ad853f3b-9633-4c89-baa0-0fa82a9498d7\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jtzjp" Jan 31 08:13:04 crc kubenswrapper[4826]: I0131 08:13:04.295503 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad853f3b-9633-4c89-baa0-0fa82a9498d7-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jtzjp\" (UID: \"ad853f3b-9633-4c89-baa0-0fa82a9498d7\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jtzjp" Jan 31 08:13:04 crc kubenswrapper[4826]: I0131 08:13:04.295988 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ad853f3b-9633-4c89-baa0-0fa82a9498d7-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jtzjp\" (UID: \"ad853f3b-9633-4c89-baa0-0fa82a9498d7\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jtzjp" Jan 31 08:13:04 crc kubenswrapper[4826]: I0131 08:13:04.314852 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6n544\" (UniqueName: \"kubernetes.io/projected/ad853f3b-9633-4c89-baa0-0fa82a9498d7-kube-api-access-6n544\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jtzjp\" (UID: \"ad853f3b-9633-4c89-baa0-0fa82a9498d7\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jtzjp" Jan 31 08:13:04 crc kubenswrapper[4826]: I0131 08:13:04.403458 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jtzjp" Jan 31 08:13:04 crc kubenswrapper[4826]: I0131 08:13:04.939106 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jtzjp"] Jan 31 08:13:04 crc kubenswrapper[4826]: I0131 08:13:04.969521 4826 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 08:13:04 crc kubenswrapper[4826]: I0131 08:13:04.974602 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jtzjp" event={"ID":"ad853f3b-9633-4c89-baa0-0fa82a9498d7","Type":"ContainerStarted","Data":"bed975328620243701e476df8737a67725726ccadd435154257ab8a4529dd0cb"} Jan 31 08:13:06 crc kubenswrapper[4826]: I0131 08:13:06.995724 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jtzjp" event={"ID":"ad853f3b-9633-4c89-baa0-0fa82a9498d7","Type":"ContainerStarted","Data":"8fc7dc4522520bf92429202d2c0fd0b20f33b26024a47e2c3aba61edb84211bf"} Jan 31 08:13:07 crc kubenswrapper[4826]: I0131 08:13:07.020623 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jtzjp" podStartSLOduration=1.687857373 podStartE2EDuration="3.020607956s" podCreationTimestamp="2026-01-31 08:13:04 +0000 UTC" firstStartedPulling="2026-01-31 08:13:04.969248291 +0000 UTC m=+2216.823134650" lastFinishedPulling="2026-01-31 08:13:06.301998874 +0000 UTC m=+2218.155885233" observedRunningTime="2026-01-31 08:13:07.013559435 +0000 UTC m=+2218.867445794" watchObservedRunningTime="2026-01-31 08:13:07.020607956 +0000 UTC m=+2218.874494305" Jan 31 08:13:11 crc kubenswrapper[4826]: I0131 08:13:11.029564 4826 generic.go:334] "Generic (PLEG): container finished" podID="ad853f3b-9633-4c89-baa0-0fa82a9498d7" containerID="8fc7dc4522520bf92429202d2c0fd0b20f33b26024a47e2c3aba61edb84211bf" exitCode=0 Jan 31 08:13:11 crc kubenswrapper[4826]: I0131 08:13:11.029689 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jtzjp" event={"ID":"ad853f3b-9633-4c89-baa0-0fa82a9498d7","Type":"ContainerDied","Data":"8fc7dc4522520bf92429202d2c0fd0b20f33b26024a47e2c3aba61edb84211bf"} Jan 31 08:13:12 crc kubenswrapper[4826]: I0131 08:13:12.442473 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jtzjp" Jan 31 08:13:12 crc kubenswrapper[4826]: I0131 08:13:12.543950 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ad853f3b-9633-4c89-baa0-0fa82a9498d7-ceph\") pod \"ad853f3b-9633-4c89-baa0-0fa82a9498d7\" (UID: \"ad853f3b-9633-4c89-baa0-0fa82a9498d7\") " Jan 31 08:13:12 crc kubenswrapper[4826]: I0131 08:13:12.544151 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ad853f3b-9633-4c89-baa0-0fa82a9498d7-ssh-key-openstack-edpm-ipam\") pod \"ad853f3b-9633-4c89-baa0-0fa82a9498d7\" (UID: \"ad853f3b-9633-4c89-baa0-0fa82a9498d7\") " Jan 31 08:13:12 crc kubenswrapper[4826]: I0131 08:13:12.544206 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad853f3b-9633-4c89-baa0-0fa82a9498d7-inventory\") pod \"ad853f3b-9633-4c89-baa0-0fa82a9498d7\" (UID: \"ad853f3b-9633-4c89-baa0-0fa82a9498d7\") " Jan 31 08:13:12 crc kubenswrapper[4826]: I0131 08:13:12.544250 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6n544\" (UniqueName: \"kubernetes.io/projected/ad853f3b-9633-4c89-baa0-0fa82a9498d7-kube-api-access-6n544\") pod \"ad853f3b-9633-4c89-baa0-0fa82a9498d7\" (UID: \"ad853f3b-9633-4c89-baa0-0fa82a9498d7\") " Jan 31 08:13:12 crc kubenswrapper[4826]: I0131 08:13:12.553065 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad853f3b-9633-4c89-baa0-0fa82a9498d7-ceph" (OuterVolumeSpecName: "ceph") pod "ad853f3b-9633-4c89-baa0-0fa82a9498d7" (UID: "ad853f3b-9633-4c89-baa0-0fa82a9498d7"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:13:12 crc kubenswrapper[4826]: I0131 08:13:12.556351 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad853f3b-9633-4c89-baa0-0fa82a9498d7-kube-api-access-6n544" (OuterVolumeSpecName: "kube-api-access-6n544") pod "ad853f3b-9633-4c89-baa0-0fa82a9498d7" (UID: "ad853f3b-9633-4c89-baa0-0fa82a9498d7"). InnerVolumeSpecName "kube-api-access-6n544". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:13:12 crc kubenswrapper[4826]: I0131 08:13:12.574788 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad853f3b-9633-4c89-baa0-0fa82a9498d7-inventory" (OuterVolumeSpecName: "inventory") pod "ad853f3b-9633-4c89-baa0-0fa82a9498d7" (UID: "ad853f3b-9633-4c89-baa0-0fa82a9498d7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:13:12 crc kubenswrapper[4826]: I0131 08:13:12.575213 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad853f3b-9633-4c89-baa0-0fa82a9498d7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ad853f3b-9633-4c89-baa0-0fa82a9498d7" (UID: "ad853f3b-9633-4c89-baa0-0fa82a9498d7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:13:12 crc kubenswrapper[4826]: I0131 08:13:12.646632 4826 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ad853f3b-9633-4c89-baa0-0fa82a9498d7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 08:13:12 crc kubenswrapper[4826]: I0131 08:13:12.646673 4826 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad853f3b-9633-4c89-baa0-0fa82a9498d7-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 08:13:12 crc kubenswrapper[4826]: I0131 08:13:12.646686 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6n544\" (UniqueName: \"kubernetes.io/projected/ad853f3b-9633-4c89-baa0-0fa82a9498d7-kube-api-access-6n544\") on node \"crc\" DevicePath \"\"" Jan 31 08:13:12 crc kubenswrapper[4826]: I0131 08:13:12.646695 4826 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ad853f3b-9633-4c89-baa0-0fa82a9498d7-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 08:13:13 crc kubenswrapper[4826]: I0131 08:13:13.047293 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jtzjp" event={"ID":"ad853f3b-9633-4c89-baa0-0fa82a9498d7","Type":"ContainerDied","Data":"bed975328620243701e476df8737a67725726ccadd435154257ab8a4529dd0cb"} Jan 31 08:13:13 crc kubenswrapper[4826]: I0131 08:13:13.047340 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bed975328620243701e476df8737a67725726ccadd435154257ab8a4529dd0cb" Jan 31 08:13:13 crc kubenswrapper[4826]: I0131 08:13:13.047367 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jtzjp" Jan 31 08:13:13 crc kubenswrapper[4826]: I0131 08:13:13.115437 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jnkqg"] Jan 31 08:13:13 crc kubenswrapper[4826]: E0131 08:13:13.116689 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad853f3b-9633-4c89-baa0-0fa82a9498d7" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 31 08:13:13 crc kubenswrapper[4826]: I0131 08:13:13.116718 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad853f3b-9633-4c89-baa0-0fa82a9498d7" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 31 08:13:13 crc kubenswrapper[4826]: I0131 08:13:13.117018 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad853f3b-9633-4c89-baa0-0fa82a9498d7" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 31 08:13:13 crc kubenswrapper[4826]: I0131 08:13:13.119336 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jnkqg" Jan 31 08:13:13 crc kubenswrapper[4826]: I0131 08:13:13.123427 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 08:13:13 crc kubenswrapper[4826]: I0131 08:13:13.123490 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 08:13:13 crc kubenswrapper[4826]: I0131 08:13:13.123538 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hbmks" Jan 31 08:13:13 crc kubenswrapper[4826]: I0131 08:13:13.123749 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 08:13:13 crc kubenswrapper[4826]: I0131 08:13:13.123809 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 08:13:13 crc kubenswrapper[4826]: I0131 08:13:13.126600 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jnkqg"] Jan 31 08:13:13 crc kubenswrapper[4826]: I0131 08:13:13.259237 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a82d52f-625e-48d8-b546-9acf6922cbd0-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jnkqg\" (UID: \"1a82d52f-625e-48d8-b546-9acf6922cbd0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jnkqg" Jan 31 08:13:13 crc kubenswrapper[4826]: I0131 08:13:13.259431 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1a82d52f-625e-48d8-b546-9acf6922cbd0-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jnkqg\" (UID: \"1a82d52f-625e-48d8-b546-9acf6922cbd0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jnkqg" Jan 31 08:13:13 crc kubenswrapper[4826]: I0131 08:13:13.259584 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq9dd\" (UniqueName: \"kubernetes.io/projected/1a82d52f-625e-48d8-b546-9acf6922cbd0-kube-api-access-sq9dd\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jnkqg\" (UID: \"1a82d52f-625e-48d8-b546-9acf6922cbd0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jnkqg" Jan 31 08:13:13 crc kubenswrapper[4826]: I0131 08:13:13.259838 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1a82d52f-625e-48d8-b546-9acf6922cbd0-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jnkqg\" (UID: \"1a82d52f-625e-48d8-b546-9acf6922cbd0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jnkqg" Jan 31 08:13:13 crc kubenswrapper[4826]: I0131 08:13:13.361154 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sq9dd\" (UniqueName: \"kubernetes.io/projected/1a82d52f-625e-48d8-b546-9acf6922cbd0-kube-api-access-sq9dd\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jnkqg\" (UID: \"1a82d52f-625e-48d8-b546-9acf6922cbd0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jnkqg" Jan 31 08:13:13 crc kubenswrapper[4826]: I0131 08:13:13.361256 4826 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1a82d52f-625e-48d8-b546-9acf6922cbd0-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jnkqg\" (UID: \"1a82d52f-625e-48d8-b546-9acf6922cbd0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jnkqg" Jan 31 08:13:13 crc kubenswrapper[4826]: I0131 08:13:13.361360 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a82d52f-625e-48d8-b546-9acf6922cbd0-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jnkqg\" (UID: \"1a82d52f-625e-48d8-b546-9acf6922cbd0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jnkqg" Jan 31 08:13:13 crc kubenswrapper[4826]: I0131 08:13:13.361423 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1a82d52f-625e-48d8-b546-9acf6922cbd0-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jnkqg\" (UID: \"1a82d52f-625e-48d8-b546-9acf6922cbd0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jnkqg" Jan 31 08:13:13 crc kubenswrapper[4826]: I0131 08:13:13.366657 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a82d52f-625e-48d8-b546-9acf6922cbd0-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jnkqg\" (UID: \"1a82d52f-625e-48d8-b546-9acf6922cbd0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jnkqg" Jan 31 08:13:13 crc kubenswrapper[4826]: I0131 08:13:13.368038 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1a82d52f-625e-48d8-b546-9acf6922cbd0-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jnkqg\" (UID: \"1a82d52f-625e-48d8-b546-9acf6922cbd0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jnkqg" Jan 31 08:13:13 crc kubenswrapper[4826]: I0131 08:13:13.372276 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1a82d52f-625e-48d8-b546-9acf6922cbd0-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jnkqg\" (UID: \"1a82d52f-625e-48d8-b546-9acf6922cbd0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jnkqg" Jan 31 08:13:13 crc kubenswrapper[4826]: I0131 08:13:13.379526 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq9dd\" (UniqueName: \"kubernetes.io/projected/1a82d52f-625e-48d8-b546-9acf6922cbd0-kube-api-access-sq9dd\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jnkqg\" (UID: \"1a82d52f-625e-48d8-b546-9acf6922cbd0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jnkqg" Jan 31 08:13:13 crc kubenswrapper[4826]: I0131 08:13:13.440338 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jnkqg" Jan 31 08:13:13 crc kubenswrapper[4826]: I0131 08:13:13.968349 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jnkqg"] Jan 31 08:13:13 crc kubenswrapper[4826]: W0131 08:13:13.974595 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a82d52f_625e_48d8_b546_9acf6922cbd0.slice/crio-527c1676c3ce3c3c969897d011d23738ee08ec43d64e5c934374729ba99d9d47 WatchSource:0}: Error finding container 527c1676c3ce3c3c969897d011d23738ee08ec43d64e5c934374729ba99d9d47: Status 404 returned error can't find the container with id 527c1676c3ce3c3c969897d011d23738ee08ec43d64e5c934374729ba99d9d47 Jan 31 08:13:14 crc kubenswrapper[4826]: I0131 08:13:14.060020 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jnkqg" event={"ID":"1a82d52f-625e-48d8-b546-9acf6922cbd0","Type":"ContainerStarted","Data":"527c1676c3ce3c3c969897d011d23738ee08ec43d64e5c934374729ba99d9d47"} Jan 31 08:13:16 crc kubenswrapper[4826]: I0131 08:13:16.078701 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jnkqg" event={"ID":"1a82d52f-625e-48d8-b546-9acf6922cbd0","Type":"ContainerStarted","Data":"3efb47d89ffd3c20a20b61dcdf2aff5bf293aa52ca9ea049554293f9fdcae4d3"} Jan 31 08:13:16 crc kubenswrapper[4826]: I0131 08:13:16.094796 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jnkqg" podStartSLOduration=1.380078992 podStartE2EDuration="3.094776494s" podCreationTimestamp="2026-01-31 08:13:13 +0000 UTC" firstStartedPulling="2026-01-31 08:13:13.994233871 +0000 UTC m=+2225.848120230" lastFinishedPulling="2026-01-31 08:13:15.708931373 +0000 UTC m=+2227.562817732" observedRunningTime="2026-01-31 08:13:16.094036313 +0000 UTC m=+2227.947922692" watchObservedRunningTime="2026-01-31 08:13:16.094776494 +0000 UTC m=+2227.948662853" Jan 31 08:13:53 crc kubenswrapper[4826]: I0131 08:13:53.406611 4826 generic.go:334] "Generic (PLEG): container finished" podID="1a82d52f-625e-48d8-b546-9acf6922cbd0" containerID="3efb47d89ffd3c20a20b61dcdf2aff5bf293aa52ca9ea049554293f9fdcae4d3" exitCode=0 Jan 31 08:13:53 crc kubenswrapper[4826]: I0131 08:13:53.406722 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jnkqg" event={"ID":"1a82d52f-625e-48d8-b546-9acf6922cbd0","Type":"ContainerDied","Data":"3efb47d89ffd3c20a20b61dcdf2aff5bf293aa52ca9ea049554293f9fdcae4d3"} Jan 31 08:13:54 crc kubenswrapper[4826]: I0131 08:13:54.792164 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jnkqg" Jan 31 08:13:54 crc kubenswrapper[4826]: I0131 08:13:54.995234 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1a82d52f-625e-48d8-b546-9acf6922cbd0-ceph\") pod \"1a82d52f-625e-48d8-b546-9acf6922cbd0\" (UID: \"1a82d52f-625e-48d8-b546-9acf6922cbd0\") " Jan 31 08:13:54 crc kubenswrapper[4826]: I0131 08:13:54.995299 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1a82d52f-625e-48d8-b546-9acf6922cbd0-ssh-key-openstack-edpm-ipam\") pod \"1a82d52f-625e-48d8-b546-9acf6922cbd0\" (UID: \"1a82d52f-625e-48d8-b546-9acf6922cbd0\") " Jan 31 08:13:54 crc kubenswrapper[4826]: I0131 08:13:54.995393 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sq9dd\" (UniqueName: \"kubernetes.io/projected/1a82d52f-625e-48d8-b546-9acf6922cbd0-kube-api-access-sq9dd\") pod \"1a82d52f-625e-48d8-b546-9acf6922cbd0\" (UID: \"1a82d52f-625e-48d8-b546-9acf6922cbd0\") " Jan 31 08:13:54 crc kubenswrapper[4826]: I0131 08:13:54.995455 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a82d52f-625e-48d8-b546-9acf6922cbd0-inventory\") pod \"1a82d52f-625e-48d8-b546-9acf6922cbd0\" (UID: \"1a82d52f-625e-48d8-b546-9acf6922cbd0\") " Jan 31 08:13:55 crc kubenswrapper[4826]: I0131 08:13:55.004229 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a82d52f-625e-48d8-b546-9acf6922cbd0-kube-api-access-sq9dd" (OuterVolumeSpecName: "kube-api-access-sq9dd") pod "1a82d52f-625e-48d8-b546-9acf6922cbd0" (UID: "1a82d52f-625e-48d8-b546-9acf6922cbd0"). InnerVolumeSpecName "kube-api-access-sq9dd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:13:55 crc kubenswrapper[4826]: I0131 08:13:55.004275 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a82d52f-625e-48d8-b546-9acf6922cbd0-ceph" (OuterVolumeSpecName: "ceph") pod "1a82d52f-625e-48d8-b546-9acf6922cbd0" (UID: "1a82d52f-625e-48d8-b546-9acf6922cbd0"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:13:55 crc kubenswrapper[4826]: I0131 08:13:55.022934 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a82d52f-625e-48d8-b546-9acf6922cbd0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1a82d52f-625e-48d8-b546-9acf6922cbd0" (UID: "1a82d52f-625e-48d8-b546-9acf6922cbd0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:13:55 crc kubenswrapper[4826]: I0131 08:13:55.032174 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a82d52f-625e-48d8-b546-9acf6922cbd0-inventory" (OuterVolumeSpecName: "inventory") pod "1a82d52f-625e-48d8-b546-9acf6922cbd0" (UID: "1a82d52f-625e-48d8-b546-9acf6922cbd0"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:13:55 crc kubenswrapper[4826]: I0131 08:13:55.097750 4826 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1a82d52f-625e-48d8-b546-9acf6922cbd0-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 08:13:55 crc kubenswrapper[4826]: I0131 08:13:55.098118 4826 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1a82d52f-625e-48d8-b546-9acf6922cbd0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 08:13:55 crc kubenswrapper[4826]: I0131 08:13:55.098135 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sq9dd\" (UniqueName: \"kubernetes.io/projected/1a82d52f-625e-48d8-b546-9acf6922cbd0-kube-api-access-sq9dd\") on node \"crc\" DevicePath \"\"" Jan 31 08:13:55 crc kubenswrapper[4826]: I0131 08:13:55.098146 4826 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a82d52f-625e-48d8-b546-9acf6922cbd0-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 08:13:55 crc kubenswrapper[4826]: I0131 08:13:55.425326 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jnkqg" event={"ID":"1a82d52f-625e-48d8-b546-9acf6922cbd0","Type":"ContainerDied","Data":"527c1676c3ce3c3c969897d011d23738ee08ec43d64e5c934374729ba99d9d47"} Jan 31 08:13:55 crc kubenswrapper[4826]: I0131 08:13:55.425371 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="527c1676c3ce3c3c969897d011d23738ee08ec43d64e5c934374729ba99d9d47" Jan 31 08:13:55 crc kubenswrapper[4826]: I0131 08:13:55.425398 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jnkqg" Jan 31 08:13:55 crc kubenswrapper[4826]: I0131 08:13:55.518601 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-4gqmw"] Jan 31 08:13:55 crc kubenswrapper[4826]: E0131 08:13:55.519137 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a82d52f-625e-48d8-b546-9acf6922cbd0" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 31 08:13:55 crc kubenswrapper[4826]: I0131 08:13:55.519161 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a82d52f-625e-48d8-b546-9acf6922cbd0" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 31 08:13:55 crc kubenswrapper[4826]: I0131 08:13:55.519375 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a82d52f-625e-48d8-b546-9acf6922cbd0" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 31 08:13:55 crc kubenswrapper[4826]: I0131 08:13:55.520156 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-4gqmw" Jan 31 08:13:55 crc kubenswrapper[4826]: I0131 08:13:55.522182 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 08:13:55 crc kubenswrapper[4826]: I0131 08:13:55.522418 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hbmks" Jan 31 08:13:55 crc kubenswrapper[4826]: I0131 08:13:55.523061 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 08:13:55 crc kubenswrapper[4826]: I0131 08:13:55.525609 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 08:13:55 crc kubenswrapper[4826]: I0131 08:13:55.525656 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 08:13:55 crc kubenswrapper[4826]: I0131 08:13:55.530711 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-4gqmw"] Jan 31 08:13:55 crc kubenswrapper[4826]: I0131 08:13:55.605851 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6a722cc4-6eab-4740-9776-6cb0ba8e1575-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-4gqmw\" (UID: \"6a722cc4-6eab-4740-9776-6cb0ba8e1575\") " pod="openstack/ssh-known-hosts-edpm-deployment-4gqmw" Jan 31 08:13:55 crc kubenswrapper[4826]: I0131 08:13:55.605990 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6a722cc4-6eab-4740-9776-6cb0ba8e1575-ceph\") pod \"ssh-known-hosts-edpm-deployment-4gqmw\" (UID: \"6a722cc4-6eab-4740-9776-6cb0ba8e1575\") " pod="openstack/ssh-known-hosts-edpm-deployment-4gqmw" Jan 31 08:13:55 crc kubenswrapper[4826]: I0131 08:13:55.606085 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/6a722cc4-6eab-4740-9776-6cb0ba8e1575-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-4gqmw\" (UID: \"6a722cc4-6eab-4740-9776-6cb0ba8e1575\") " pod="openstack/ssh-known-hosts-edpm-deployment-4gqmw" Jan 31 08:13:55 crc kubenswrapper[4826]: I0131 08:13:55.606115 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzch4\" (UniqueName: \"kubernetes.io/projected/6a722cc4-6eab-4740-9776-6cb0ba8e1575-kube-api-access-pzch4\") pod \"ssh-known-hosts-edpm-deployment-4gqmw\" (UID: \"6a722cc4-6eab-4740-9776-6cb0ba8e1575\") " pod="openstack/ssh-known-hosts-edpm-deployment-4gqmw" Jan 31 08:13:55 crc kubenswrapper[4826]: I0131 08:13:55.707904 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6a722cc4-6eab-4740-9776-6cb0ba8e1575-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-4gqmw\" (UID: \"6a722cc4-6eab-4740-9776-6cb0ba8e1575\") " pod="openstack/ssh-known-hosts-edpm-deployment-4gqmw" Jan 31 08:13:55 crc kubenswrapper[4826]: I0131 08:13:55.708051 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6a722cc4-6eab-4740-9776-6cb0ba8e1575-ceph\") pod \"ssh-known-hosts-edpm-deployment-4gqmw\" 
(UID: \"6a722cc4-6eab-4740-9776-6cb0ba8e1575\") " pod="openstack/ssh-known-hosts-edpm-deployment-4gqmw" Jan 31 08:13:55 crc kubenswrapper[4826]: I0131 08:13:55.708170 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/6a722cc4-6eab-4740-9776-6cb0ba8e1575-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-4gqmw\" (UID: \"6a722cc4-6eab-4740-9776-6cb0ba8e1575\") " pod="openstack/ssh-known-hosts-edpm-deployment-4gqmw" Jan 31 08:13:55 crc kubenswrapper[4826]: I0131 08:13:55.708211 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzch4\" (UniqueName: \"kubernetes.io/projected/6a722cc4-6eab-4740-9776-6cb0ba8e1575-kube-api-access-pzch4\") pod \"ssh-known-hosts-edpm-deployment-4gqmw\" (UID: \"6a722cc4-6eab-4740-9776-6cb0ba8e1575\") " pod="openstack/ssh-known-hosts-edpm-deployment-4gqmw" Jan 31 08:13:55 crc kubenswrapper[4826]: I0131 08:13:55.714209 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/6a722cc4-6eab-4740-9776-6cb0ba8e1575-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-4gqmw\" (UID: \"6a722cc4-6eab-4740-9776-6cb0ba8e1575\") " pod="openstack/ssh-known-hosts-edpm-deployment-4gqmw" Jan 31 08:13:55 crc kubenswrapper[4826]: I0131 08:13:55.714219 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6a722cc4-6eab-4740-9776-6cb0ba8e1575-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-4gqmw\" (UID: \"6a722cc4-6eab-4740-9776-6cb0ba8e1575\") " pod="openstack/ssh-known-hosts-edpm-deployment-4gqmw" Jan 31 08:13:55 crc kubenswrapper[4826]: I0131 08:13:55.714617 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6a722cc4-6eab-4740-9776-6cb0ba8e1575-ceph\") pod \"ssh-known-hosts-edpm-deployment-4gqmw\" (UID: \"6a722cc4-6eab-4740-9776-6cb0ba8e1575\") " pod="openstack/ssh-known-hosts-edpm-deployment-4gqmw" Jan 31 08:13:55 crc kubenswrapper[4826]: I0131 08:13:55.726875 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzch4\" (UniqueName: \"kubernetes.io/projected/6a722cc4-6eab-4740-9776-6cb0ba8e1575-kube-api-access-pzch4\") pod \"ssh-known-hosts-edpm-deployment-4gqmw\" (UID: \"6a722cc4-6eab-4740-9776-6cb0ba8e1575\") " pod="openstack/ssh-known-hosts-edpm-deployment-4gqmw" Jan 31 08:13:55 crc kubenswrapper[4826]: I0131 08:13:55.836848 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-4gqmw" Jan 31 08:13:56 crc kubenswrapper[4826]: I0131 08:13:56.360494 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-4gqmw"] Jan 31 08:13:56 crc kubenswrapper[4826]: I0131 08:13:56.435175 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-4gqmw" event={"ID":"6a722cc4-6eab-4740-9776-6cb0ba8e1575","Type":"ContainerStarted","Data":"defe60e2120e4f773bcf037f3c2c84a5d838ba75b4651c4ee9a6230cf709ef19"} Jan 31 08:13:57 crc kubenswrapper[4826]: I0131 08:13:57.377281 4826 patch_prober.go:28] interesting pod/machine-config-daemon-8v6ng container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 08:13:57 crc kubenswrapper[4826]: I0131 08:13:57.377624 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 08:13:57 crc kubenswrapper[4826]: I0131 08:13:57.447794 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-4gqmw" event={"ID":"6a722cc4-6eab-4740-9776-6cb0ba8e1575","Type":"ContainerStarted","Data":"d78482518a295e57698271c32bcab3efbf3ebb060edc39d4eb3bcacdab7401aa"} Jan 31 08:13:57 crc kubenswrapper[4826]: I0131 08:13:57.473959 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-4gqmw" podStartSLOduration=2.045748383 podStartE2EDuration="2.473936927s" podCreationTimestamp="2026-01-31 08:13:55 +0000 UTC" firstStartedPulling="2026-01-31 08:13:56.379336015 +0000 UTC m=+2268.233222364" lastFinishedPulling="2026-01-31 08:13:56.807524559 +0000 UTC m=+2268.661410908" observedRunningTime="2026-01-31 08:13:57.466928148 +0000 UTC m=+2269.320814577" watchObservedRunningTime="2026-01-31 08:13:57.473936927 +0000 UTC m=+2269.327823306" Jan 31 08:14:05 crc kubenswrapper[4826]: I0131 08:14:05.519766 4826 generic.go:334] "Generic (PLEG): container finished" podID="6a722cc4-6eab-4740-9776-6cb0ba8e1575" containerID="d78482518a295e57698271c32bcab3efbf3ebb060edc39d4eb3bcacdab7401aa" exitCode=0 Jan 31 08:14:05 crc kubenswrapper[4826]: I0131 08:14:05.519855 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-4gqmw" event={"ID":"6a722cc4-6eab-4740-9776-6cb0ba8e1575","Type":"ContainerDied","Data":"d78482518a295e57698271c32bcab3efbf3ebb060edc39d4eb3bcacdab7401aa"} Jan 31 08:14:06 crc kubenswrapper[4826]: I0131 08:14:06.930528 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-4gqmw" Jan 31 08:14:07 crc kubenswrapper[4826]: I0131 08:14:07.121658 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/6a722cc4-6eab-4740-9776-6cb0ba8e1575-inventory-0\") pod \"6a722cc4-6eab-4740-9776-6cb0ba8e1575\" (UID: \"6a722cc4-6eab-4740-9776-6cb0ba8e1575\") " Jan 31 08:14:07 crc kubenswrapper[4826]: I0131 08:14:07.122194 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6a722cc4-6eab-4740-9776-6cb0ba8e1575-ssh-key-openstack-edpm-ipam\") pod \"6a722cc4-6eab-4740-9776-6cb0ba8e1575\" (UID: \"6a722cc4-6eab-4740-9776-6cb0ba8e1575\") " Jan 31 08:14:07 crc kubenswrapper[4826]: I0131 08:14:07.122373 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6a722cc4-6eab-4740-9776-6cb0ba8e1575-ceph\") pod \"6a722cc4-6eab-4740-9776-6cb0ba8e1575\" (UID: \"6a722cc4-6eab-4740-9776-6cb0ba8e1575\") " Jan 31 08:14:07 crc kubenswrapper[4826]: I0131 08:14:07.122433 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzch4\" (UniqueName: \"kubernetes.io/projected/6a722cc4-6eab-4740-9776-6cb0ba8e1575-kube-api-access-pzch4\") pod \"6a722cc4-6eab-4740-9776-6cb0ba8e1575\" (UID: \"6a722cc4-6eab-4740-9776-6cb0ba8e1575\") " Jan 31 08:14:07 crc kubenswrapper[4826]: I0131 08:14:07.128092 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a722cc4-6eab-4740-9776-6cb0ba8e1575-ceph" (OuterVolumeSpecName: "ceph") pod "6a722cc4-6eab-4740-9776-6cb0ba8e1575" (UID: "6a722cc4-6eab-4740-9776-6cb0ba8e1575"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:14:07 crc kubenswrapper[4826]: I0131 08:14:07.128420 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a722cc4-6eab-4740-9776-6cb0ba8e1575-kube-api-access-pzch4" (OuterVolumeSpecName: "kube-api-access-pzch4") pod "6a722cc4-6eab-4740-9776-6cb0ba8e1575" (UID: "6a722cc4-6eab-4740-9776-6cb0ba8e1575"). InnerVolumeSpecName "kube-api-access-pzch4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:14:07 crc kubenswrapper[4826]: I0131 08:14:07.147869 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a722cc4-6eab-4740-9776-6cb0ba8e1575-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6a722cc4-6eab-4740-9776-6cb0ba8e1575" (UID: "6a722cc4-6eab-4740-9776-6cb0ba8e1575"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:14:07 crc kubenswrapper[4826]: I0131 08:14:07.147987 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a722cc4-6eab-4740-9776-6cb0ba8e1575-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "6a722cc4-6eab-4740-9776-6cb0ba8e1575" (UID: "6a722cc4-6eab-4740-9776-6cb0ba8e1575"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:14:07 crc kubenswrapper[4826]: I0131 08:14:07.224584 4826 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6a722cc4-6eab-4740-9776-6cb0ba8e1575-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 08:14:07 crc kubenswrapper[4826]: I0131 08:14:07.224624 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzch4\" (UniqueName: \"kubernetes.io/projected/6a722cc4-6eab-4740-9776-6cb0ba8e1575-kube-api-access-pzch4\") on node \"crc\" DevicePath \"\"" Jan 31 08:14:07 crc kubenswrapper[4826]: I0131 08:14:07.224635 4826 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/6a722cc4-6eab-4740-9776-6cb0ba8e1575-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 31 08:14:07 crc kubenswrapper[4826]: I0131 08:14:07.224653 4826 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6a722cc4-6eab-4740-9776-6cb0ba8e1575-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 08:14:07 crc kubenswrapper[4826]: I0131 08:14:07.537001 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-4gqmw" event={"ID":"6a722cc4-6eab-4740-9776-6cb0ba8e1575","Type":"ContainerDied","Data":"defe60e2120e4f773bcf037f3c2c84a5d838ba75b4651c4ee9a6230cf709ef19"} Jan 31 08:14:07 crc kubenswrapper[4826]: I0131 08:14:07.537044 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="defe60e2120e4f773bcf037f3c2c84a5d838ba75b4651c4ee9a6230cf709ef19" Jan 31 08:14:07 crc kubenswrapper[4826]: I0131 08:14:07.537067 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-4gqmw" Jan 31 08:14:07 crc kubenswrapper[4826]: I0131 08:14:07.590736 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-qpp6k"] Jan 31 08:14:07 crc kubenswrapper[4826]: E0131 08:14:07.591113 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a722cc4-6eab-4740-9776-6cb0ba8e1575" containerName="ssh-known-hosts-edpm-deployment" Jan 31 08:14:07 crc kubenswrapper[4826]: I0131 08:14:07.591128 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a722cc4-6eab-4740-9776-6cb0ba8e1575" containerName="ssh-known-hosts-edpm-deployment" Jan 31 08:14:07 crc kubenswrapper[4826]: I0131 08:14:07.591295 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a722cc4-6eab-4740-9776-6cb0ba8e1575" containerName="ssh-known-hosts-edpm-deployment" Jan 31 08:14:07 crc kubenswrapper[4826]: I0131 08:14:07.591885 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qpp6k" Jan 31 08:14:07 crc kubenswrapper[4826]: I0131 08:14:07.596407 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 08:14:07 crc kubenswrapper[4826]: I0131 08:14:07.596634 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 08:14:07 crc kubenswrapper[4826]: I0131 08:14:07.596796 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 08:14:07 crc kubenswrapper[4826]: I0131 08:14:07.601443 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hbmks" Jan 31 08:14:07 crc kubenswrapper[4826]: I0131 08:14:07.601443 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 08:14:07 crc kubenswrapper[4826]: I0131 08:14:07.604183 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-qpp6k"] Jan 31 08:14:07 crc kubenswrapper[4826]: I0131 08:14:07.733325 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c74306ca-1c03-4b19-b9cf-173122cdada0-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qpp6k\" (UID: \"c74306ca-1c03-4b19-b9cf-173122cdada0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qpp6k" Jan 31 08:14:07 crc kubenswrapper[4826]: I0131 08:14:07.733386 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddhsc\" (UniqueName: \"kubernetes.io/projected/c74306ca-1c03-4b19-b9cf-173122cdada0-kube-api-access-ddhsc\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qpp6k\" (UID: \"c74306ca-1c03-4b19-b9cf-173122cdada0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qpp6k" Jan 31 08:14:07 crc kubenswrapper[4826]: I0131 08:14:07.733529 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c74306ca-1c03-4b19-b9cf-173122cdada0-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qpp6k\" (UID: \"c74306ca-1c03-4b19-b9cf-173122cdada0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qpp6k" Jan 31 08:14:07 crc kubenswrapper[4826]: I0131 08:14:07.733656 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c74306ca-1c03-4b19-b9cf-173122cdada0-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qpp6k\" (UID: \"c74306ca-1c03-4b19-b9cf-173122cdada0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qpp6k" Jan 31 08:14:07 crc kubenswrapper[4826]: I0131 08:14:07.835005 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c74306ca-1c03-4b19-b9cf-173122cdada0-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qpp6k\" (UID: \"c74306ca-1c03-4b19-b9cf-173122cdada0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qpp6k" Jan 31 08:14:07 crc kubenswrapper[4826]: I0131 08:14:07.835077 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" 
(UniqueName: \"kubernetes.io/secret/c74306ca-1c03-4b19-b9cf-173122cdada0-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qpp6k\" (UID: \"c74306ca-1c03-4b19-b9cf-173122cdada0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qpp6k" Jan 31 08:14:07 crc kubenswrapper[4826]: I0131 08:14:07.835103 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddhsc\" (UniqueName: \"kubernetes.io/projected/c74306ca-1c03-4b19-b9cf-173122cdada0-kube-api-access-ddhsc\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qpp6k\" (UID: \"c74306ca-1c03-4b19-b9cf-173122cdada0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qpp6k" Jan 31 08:14:07 crc kubenswrapper[4826]: I0131 08:14:07.835208 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c74306ca-1c03-4b19-b9cf-173122cdada0-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qpp6k\" (UID: \"c74306ca-1c03-4b19-b9cf-173122cdada0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qpp6k" Jan 31 08:14:07 crc kubenswrapper[4826]: I0131 08:14:07.840794 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c74306ca-1c03-4b19-b9cf-173122cdada0-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qpp6k\" (UID: \"c74306ca-1c03-4b19-b9cf-173122cdada0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qpp6k" Jan 31 08:14:07 crc kubenswrapper[4826]: I0131 08:14:07.840928 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c74306ca-1c03-4b19-b9cf-173122cdada0-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qpp6k\" (UID: \"c74306ca-1c03-4b19-b9cf-173122cdada0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qpp6k" Jan 31 08:14:07 crc kubenswrapper[4826]: I0131 08:14:07.849062 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c74306ca-1c03-4b19-b9cf-173122cdada0-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qpp6k\" (UID: \"c74306ca-1c03-4b19-b9cf-173122cdada0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qpp6k" Jan 31 08:14:07 crc kubenswrapper[4826]: I0131 08:14:07.852918 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddhsc\" (UniqueName: \"kubernetes.io/projected/c74306ca-1c03-4b19-b9cf-173122cdada0-kube-api-access-ddhsc\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qpp6k\" (UID: \"c74306ca-1c03-4b19-b9cf-173122cdada0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qpp6k" Jan 31 08:14:07 crc kubenswrapper[4826]: I0131 08:14:07.908234 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qpp6k" Jan 31 08:14:08 crc kubenswrapper[4826]: I0131 08:14:08.411855 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-qpp6k"] Jan 31 08:14:08 crc kubenswrapper[4826]: W0131 08:14:08.417243 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc74306ca_1c03_4b19_b9cf_173122cdada0.slice/crio-70df3a8bd2d5582cde6140e28602513cdd74fab9686d8f472f2b338c7167f4a7 WatchSource:0}: Error finding container 70df3a8bd2d5582cde6140e28602513cdd74fab9686d8f472f2b338c7167f4a7: Status 404 returned error can't find the container with id 70df3a8bd2d5582cde6140e28602513cdd74fab9686d8f472f2b338c7167f4a7 Jan 31 08:14:08 crc kubenswrapper[4826]: I0131 08:14:08.551514 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qpp6k" event={"ID":"c74306ca-1c03-4b19-b9cf-173122cdada0","Type":"ContainerStarted","Data":"70df3a8bd2d5582cde6140e28602513cdd74fab9686d8f472f2b338c7167f4a7"} Jan 31 08:14:08 crc kubenswrapper[4826]: I0131 08:14:08.895325 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 08:14:09 crc kubenswrapper[4826]: I0131 08:14:09.562914 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qpp6k" event={"ID":"c74306ca-1c03-4b19-b9cf-173122cdada0","Type":"ContainerStarted","Data":"75c5bb0e36073f50aa651f99b630b17c751223b83b4775d6112edb59dbe55b68"} Jan 31 08:14:09 crc kubenswrapper[4826]: I0131 08:14:09.582958 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qpp6k" podStartSLOduration=2.110068486 podStartE2EDuration="2.58293627s" podCreationTimestamp="2026-01-31 08:14:07 +0000 UTC" firstStartedPulling="2026-01-31 08:14:08.420198522 +0000 UTC m=+2280.274084881" lastFinishedPulling="2026-01-31 08:14:08.893066316 +0000 UTC m=+2280.746952665" observedRunningTime="2026-01-31 08:14:09.577228888 +0000 UTC m=+2281.431115257" watchObservedRunningTime="2026-01-31 08:14:09.58293627 +0000 UTC m=+2281.436822629" Jan 31 08:14:16 crc kubenswrapper[4826]: I0131 08:14:16.638171 4826 generic.go:334] "Generic (PLEG): container finished" podID="c74306ca-1c03-4b19-b9cf-173122cdada0" containerID="75c5bb0e36073f50aa651f99b630b17c751223b83b4775d6112edb59dbe55b68" exitCode=0 Jan 31 08:14:16 crc kubenswrapper[4826]: I0131 08:14:16.638292 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qpp6k" event={"ID":"c74306ca-1c03-4b19-b9cf-173122cdada0","Type":"ContainerDied","Data":"75c5bb0e36073f50aa651f99b630b17c751223b83b4775d6112edb59dbe55b68"} Jan 31 08:14:18 crc kubenswrapper[4826]: I0131 08:14:18.030208 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qpp6k" Jan 31 08:14:18 crc kubenswrapper[4826]: I0131 08:14:18.129738 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c74306ca-1c03-4b19-b9cf-173122cdada0-ssh-key-openstack-edpm-ipam\") pod \"c74306ca-1c03-4b19-b9cf-173122cdada0\" (UID: \"c74306ca-1c03-4b19-b9cf-173122cdada0\") " Jan 31 08:14:18 crc kubenswrapper[4826]: I0131 08:14:18.129945 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c74306ca-1c03-4b19-b9cf-173122cdada0-inventory\") pod \"c74306ca-1c03-4b19-b9cf-173122cdada0\" (UID: \"c74306ca-1c03-4b19-b9cf-173122cdada0\") " Jan 31 08:14:18 crc kubenswrapper[4826]: I0131 08:14:18.130000 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddhsc\" (UniqueName: \"kubernetes.io/projected/c74306ca-1c03-4b19-b9cf-173122cdada0-kube-api-access-ddhsc\") pod \"c74306ca-1c03-4b19-b9cf-173122cdada0\" (UID: \"c74306ca-1c03-4b19-b9cf-173122cdada0\") " Jan 31 08:14:18 crc kubenswrapper[4826]: I0131 08:14:18.130064 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c74306ca-1c03-4b19-b9cf-173122cdada0-ceph\") pod \"c74306ca-1c03-4b19-b9cf-173122cdada0\" (UID: \"c74306ca-1c03-4b19-b9cf-173122cdada0\") " Jan 31 08:14:18 crc kubenswrapper[4826]: I0131 08:14:18.139148 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c74306ca-1c03-4b19-b9cf-173122cdada0-ceph" (OuterVolumeSpecName: "ceph") pod "c74306ca-1c03-4b19-b9cf-173122cdada0" (UID: "c74306ca-1c03-4b19-b9cf-173122cdada0"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:14:18 crc kubenswrapper[4826]: I0131 08:14:18.140906 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c74306ca-1c03-4b19-b9cf-173122cdada0-kube-api-access-ddhsc" (OuterVolumeSpecName: "kube-api-access-ddhsc") pod "c74306ca-1c03-4b19-b9cf-173122cdada0" (UID: "c74306ca-1c03-4b19-b9cf-173122cdada0"). InnerVolumeSpecName "kube-api-access-ddhsc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:14:18 crc kubenswrapper[4826]: I0131 08:14:18.159899 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c74306ca-1c03-4b19-b9cf-173122cdada0-inventory" (OuterVolumeSpecName: "inventory") pod "c74306ca-1c03-4b19-b9cf-173122cdada0" (UID: "c74306ca-1c03-4b19-b9cf-173122cdada0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:14:18 crc kubenswrapper[4826]: I0131 08:14:18.160574 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c74306ca-1c03-4b19-b9cf-173122cdada0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c74306ca-1c03-4b19-b9cf-173122cdada0" (UID: "c74306ca-1c03-4b19-b9cf-173122cdada0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:14:18 crc kubenswrapper[4826]: I0131 08:14:18.232339 4826 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c74306ca-1c03-4b19-b9cf-173122cdada0-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 08:14:18 crc kubenswrapper[4826]: I0131 08:14:18.232375 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddhsc\" (UniqueName: \"kubernetes.io/projected/c74306ca-1c03-4b19-b9cf-173122cdada0-kube-api-access-ddhsc\") on node \"crc\" DevicePath \"\"" Jan 31 08:14:18 crc kubenswrapper[4826]: I0131 08:14:18.232387 4826 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c74306ca-1c03-4b19-b9cf-173122cdada0-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 08:14:18 crc kubenswrapper[4826]: I0131 08:14:18.232395 4826 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c74306ca-1c03-4b19-b9cf-173122cdada0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 08:14:18 crc kubenswrapper[4826]: I0131 08:14:18.656169 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qpp6k" event={"ID":"c74306ca-1c03-4b19-b9cf-173122cdada0","Type":"ContainerDied","Data":"70df3a8bd2d5582cde6140e28602513cdd74fab9686d8f472f2b338c7167f4a7"} Jan 31 08:14:18 crc kubenswrapper[4826]: I0131 08:14:18.656517 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70df3a8bd2d5582cde6140e28602513cdd74fab9686d8f472f2b338c7167f4a7" Jan 31 08:14:18 crc kubenswrapper[4826]: I0131 08:14:18.656221 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qpp6k" Jan 31 08:14:18 crc kubenswrapper[4826]: I0131 08:14:18.748569 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhlps"] Jan 31 08:14:18 crc kubenswrapper[4826]: E0131 08:14:18.749165 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c74306ca-1c03-4b19-b9cf-173122cdada0" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 31 08:14:18 crc kubenswrapper[4826]: I0131 08:14:18.749205 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="c74306ca-1c03-4b19-b9cf-173122cdada0" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 31 08:14:18 crc kubenswrapper[4826]: I0131 08:14:18.749482 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="c74306ca-1c03-4b19-b9cf-173122cdada0" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 31 08:14:18 crc kubenswrapper[4826]: I0131 08:14:18.750529 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhlps" Jan 31 08:14:18 crc kubenswrapper[4826]: I0131 08:14:18.752514 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 08:14:18 crc kubenswrapper[4826]: I0131 08:14:18.754293 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 08:14:18 crc kubenswrapper[4826]: I0131 08:14:18.754480 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 08:14:18 crc kubenswrapper[4826]: I0131 08:14:18.754866 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hbmks" Jan 31 08:14:18 crc kubenswrapper[4826]: I0131 08:14:18.755026 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 08:14:18 crc kubenswrapper[4826]: I0131 08:14:18.770609 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhlps"] Jan 31 08:14:18 crc kubenswrapper[4826]: I0131 08:14:18.843464 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsrl8\" (UniqueName: \"kubernetes.io/projected/1b469d06-e2d7-4c7e-a61d-b2e76fd42191-kube-api-access-gsrl8\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dhlps\" (UID: \"1b469d06-e2d7-4c7e-a61d-b2e76fd42191\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhlps" Jan 31 08:14:18 crc kubenswrapper[4826]: I0131 08:14:18.843522 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b469d06-e2d7-4c7e-a61d-b2e76fd42191-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dhlps\" (UID: \"1b469d06-e2d7-4c7e-a61d-b2e76fd42191\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhlps" Jan 31 08:14:18 crc kubenswrapper[4826]: I0131 08:14:18.843616 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1b469d06-e2d7-4c7e-a61d-b2e76fd42191-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dhlps\" (UID: \"1b469d06-e2d7-4c7e-a61d-b2e76fd42191\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhlps" Jan 31 08:14:18 crc kubenswrapper[4826]: I0131 08:14:18.843659 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1b469d06-e2d7-4c7e-a61d-b2e76fd42191-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dhlps\" (UID: \"1b469d06-e2d7-4c7e-a61d-b2e76fd42191\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhlps" Jan 31 08:14:18 crc kubenswrapper[4826]: I0131 08:14:18.945521 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1b469d06-e2d7-4c7e-a61d-b2e76fd42191-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dhlps\" (UID: \"1b469d06-e2d7-4c7e-a61d-b2e76fd42191\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhlps" Jan 31 08:14:18 crc kubenswrapper[4826]: I0131 08:14:18.945889 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1b469d06-e2d7-4c7e-a61d-b2e76fd42191-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dhlps\" (UID: \"1b469d06-e2d7-4c7e-a61d-b2e76fd42191\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhlps" Jan 31 08:14:18 crc kubenswrapper[4826]: I0131 08:14:18.946109 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsrl8\" (UniqueName: \"kubernetes.io/projected/1b469d06-e2d7-4c7e-a61d-b2e76fd42191-kube-api-access-gsrl8\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dhlps\" (UID: \"1b469d06-e2d7-4c7e-a61d-b2e76fd42191\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhlps" Jan 31 08:14:18 crc kubenswrapper[4826]: I0131 08:14:18.946253 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b469d06-e2d7-4c7e-a61d-b2e76fd42191-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dhlps\" (UID: \"1b469d06-e2d7-4c7e-a61d-b2e76fd42191\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhlps" Jan 31 08:14:18 crc kubenswrapper[4826]: I0131 08:14:18.950785 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b469d06-e2d7-4c7e-a61d-b2e76fd42191-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dhlps\" (UID: \"1b469d06-e2d7-4c7e-a61d-b2e76fd42191\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhlps" Jan 31 08:14:18 crc kubenswrapper[4826]: I0131 08:14:18.950890 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1b469d06-e2d7-4c7e-a61d-b2e76fd42191-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dhlps\" (UID: \"1b469d06-e2d7-4c7e-a61d-b2e76fd42191\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhlps" Jan 31 08:14:18 crc kubenswrapper[4826]: I0131 08:14:18.951892 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1b469d06-e2d7-4c7e-a61d-b2e76fd42191-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dhlps\" (UID: \"1b469d06-e2d7-4c7e-a61d-b2e76fd42191\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhlps" Jan 31 08:14:18 crc kubenswrapper[4826]: I0131 08:14:18.962638 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsrl8\" (UniqueName: \"kubernetes.io/projected/1b469d06-e2d7-4c7e-a61d-b2e76fd42191-kube-api-access-gsrl8\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dhlps\" (UID: \"1b469d06-e2d7-4c7e-a61d-b2e76fd42191\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhlps" Jan 31 08:14:19 crc kubenswrapper[4826]: I0131 08:14:19.076631 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhlps" Jan 31 08:14:19 crc kubenswrapper[4826]: I0131 08:14:19.609854 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhlps"] Jan 31 08:14:19 crc kubenswrapper[4826]: I0131 08:14:19.665003 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhlps" event={"ID":"1b469d06-e2d7-4c7e-a61d-b2e76fd42191","Type":"ContainerStarted","Data":"02a507555d66a20e6cc536c4c3b40d5935a9ccc2f49734c88c5895c0c69e0813"} Jan 31 08:14:20 crc kubenswrapper[4826]: I0131 08:14:20.673549 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhlps" event={"ID":"1b469d06-e2d7-4c7e-a61d-b2e76fd42191","Type":"ContainerStarted","Data":"ca41a6e17ca7b386ef732d3a57c9d52525775921105b61a4254ed025f8751a0c"} Jan 31 08:14:20 crc kubenswrapper[4826]: I0131 08:14:20.693744 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhlps" podStartSLOduration=2.182418276 podStartE2EDuration="2.693726122s" podCreationTimestamp="2026-01-31 08:14:18 +0000 UTC" firstStartedPulling="2026-01-31 08:14:19.61208472 +0000 UTC m=+2291.465971079" lastFinishedPulling="2026-01-31 08:14:20.123392566 +0000 UTC m=+2291.977278925" observedRunningTime="2026-01-31 08:14:20.693067873 +0000 UTC m=+2292.546954242" watchObservedRunningTime="2026-01-31 08:14:20.693726122 +0000 UTC m=+2292.547612471" Jan 31 08:14:27 crc kubenswrapper[4826]: I0131 08:14:27.377358 4826 patch_prober.go:28] interesting pod/machine-config-daemon-8v6ng container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 08:14:27 crc kubenswrapper[4826]: I0131 08:14:27.377885 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 08:14:29 crc kubenswrapper[4826]: I0131 08:14:29.738699 4826 generic.go:334] "Generic (PLEG): container finished" podID="1b469d06-e2d7-4c7e-a61d-b2e76fd42191" containerID="ca41a6e17ca7b386ef732d3a57c9d52525775921105b61a4254ed025f8751a0c" exitCode=0 Jan 31 08:14:29 crc kubenswrapper[4826]: I0131 08:14:29.738771 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhlps" event={"ID":"1b469d06-e2d7-4c7e-a61d-b2e76fd42191","Type":"ContainerDied","Data":"ca41a6e17ca7b386ef732d3a57c9d52525775921105b61a4254ed025f8751a0c"} Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.123939 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhlps" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.177161 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsrl8\" (UniqueName: \"kubernetes.io/projected/1b469d06-e2d7-4c7e-a61d-b2e76fd42191-kube-api-access-gsrl8\") pod \"1b469d06-e2d7-4c7e-a61d-b2e76fd42191\" (UID: \"1b469d06-e2d7-4c7e-a61d-b2e76fd42191\") " Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.178174 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b469d06-e2d7-4c7e-a61d-b2e76fd42191-inventory\") pod \"1b469d06-e2d7-4c7e-a61d-b2e76fd42191\" (UID: \"1b469d06-e2d7-4c7e-a61d-b2e76fd42191\") " Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.178388 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1b469d06-e2d7-4c7e-a61d-b2e76fd42191-ceph\") pod \"1b469d06-e2d7-4c7e-a61d-b2e76fd42191\" (UID: \"1b469d06-e2d7-4c7e-a61d-b2e76fd42191\") " Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.178510 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1b469d06-e2d7-4c7e-a61d-b2e76fd42191-ssh-key-openstack-edpm-ipam\") pod \"1b469d06-e2d7-4c7e-a61d-b2e76fd42191\" (UID: \"1b469d06-e2d7-4c7e-a61d-b2e76fd42191\") " Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.183068 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b469d06-e2d7-4c7e-a61d-b2e76fd42191-kube-api-access-gsrl8" (OuterVolumeSpecName: "kube-api-access-gsrl8") pod "1b469d06-e2d7-4c7e-a61d-b2e76fd42191" (UID: "1b469d06-e2d7-4c7e-a61d-b2e76fd42191"). InnerVolumeSpecName "kube-api-access-gsrl8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.187108 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b469d06-e2d7-4c7e-a61d-b2e76fd42191-ceph" (OuterVolumeSpecName: "ceph") pod "1b469d06-e2d7-4c7e-a61d-b2e76fd42191" (UID: "1b469d06-e2d7-4c7e-a61d-b2e76fd42191"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.203924 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b469d06-e2d7-4c7e-a61d-b2e76fd42191-inventory" (OuterVolumeSpecName: "inventory") pod "1b469d06-e2d7-4c7e-a61d-b2e76fd42191" (UID: "1b469d06-e2d7-4c7e-a61d-b2e76fd42191"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.204982 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b469d06-e2d7-4c7e-a61d-b2e76fd42191-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1b469d06-e2d7-4c7e-a61d-b2e76fd42191" (UID: "1b469d06-e2d7-4c7e-a61d-b2e76fd42191"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.280172 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsrl8\" (UniqueName: \"kubernetes.io/projected/1b469d06-e2d7-4c7e-a61d-b2e76fd42191-kube-api-access-gsrl8\") on node \"crc\" DevicePath \"\"" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.280199 4826 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b469d06-e2d7-4c7e-a61d-b2e76fd42191-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.280211 4826 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1b469d06-e2d7-4c7e-a61d-b2e76fd42191-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.280222 4826 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1b469d06-e2d7-4c7e-a61d-b2e76fd42191-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.757634 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhlps" event={"ID":"1b469d06-e2d7-4c7e-a61d-b2e76fd42191","Type":"ContainerDied","Data":"02a507555d66a20e6cc536c4c3b40d5935a9ccc2f49734c88c5895c0c69e0813"} Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.757679 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02a507555d66a20e6cc536c4c3b40d5935a9ccc2f49734c88c5895c0c69e0813" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.757740 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhlps" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.853752 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6"] Jan 31 08:14:31 crc kubenswrapper[4826]: E0131 08:14:31.855203 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b469d06-e2d7-4c7e-a61d-b2e76fd42191" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.855230 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b469d06-e2d7-4c7e-a61d-b2e76fd42191" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.855452 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b469d06-e2d7-4c7e-a61d-b2e76fd42191" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.856254 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.863853 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.864067 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.864117 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.864248 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hbmks" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.866738 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.866906 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.866965 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.867122 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.873215 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6"] Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.889784 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8xf6\" (UniqueName: \"kubernetes.io/projected/801bcd0e-4229-479d-9b21-7b6d71339a15-kube-api-access-j8xf6\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6\" (UID: \"801bcd0e-4229-479d-9b21-7b6d71339a15\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.889888 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/801bcd0e-4229-479d-9b21-7b6d71339a15-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6\" (UID: \"801bcd0e-4229-479d-9b21-7b6d71339a15\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.889992 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/801bcd0e-4229-479d-9b21-7b6d71339a15-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6\" (UID: \"801bcd0e-4229-479d-9b21-7b6d71339a15\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.890017 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/801bcd0e-4229-479d-9b21-7b6d71339a15-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6\" (UID: \"801bcd0e-4229-479d-9b21-7b6d71339a15\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.890074 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/801bcd0e-4229-479d-9b21-7b6d71339a15-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6\" (UID: \"801bcd0e-4229-479d-9b21-7b6d71339a15\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.890110 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/801bcd0e-4229-479d-9b21-7b6d71339a15-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6\" (UID: \"801bcd0e-4229-479d-9b21-7b6d71339a15\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.890158 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/801bcd0e-4229-479d-9b21-7b6d71339a15-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6\" (UID: \"801bcd0e-4229-479d-9b21-7b6d71339a15\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.890183 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/801bcd0e-4229-479d-9b21-7b6d71339a15-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6\" (UID: \"801bcd0e-4229-479d-9b21-7b6d71339a15\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.890247 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/801bcd0e-4229-479d-9b21-7b6d71339a15-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6\" (UID: \"801bcd0e-4229-479d-9b21-7b6d71339a15\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.890285 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/801bcd0e-4229-479d-9b21-7b6d71339a15-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6\" (UID: \"801bcd0e-4229-479d-9b21-7b6d71339a15\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.890313 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/801bcd0e-4229-479d-9b21-7b6d71339a15-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6\" (UID: \"801bcd0e-4229-479d-9b21-7b6d71339a15\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6" Jan 31 08:14:31 crc 
kubenswrapper[4826]: I0131 08:14:31.890342 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/801bcd0e-4229-479d-9b21-7b6d71339a15-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6\" (UID: \"801bcd0e-4229-479d-9b21-7b6d71339a15\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.890391 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/801bcd0e-4229-479d-9b21-7b6d71339a15-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6\" (UID: \"801bcd0e-4229-479d-9b21-7b6d71339a15\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.991809 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/801bcd0e-4229-479d-9b21-7b6d71339a15-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6\" (UID: \"801bcd0e-4229-479d-9b21-7b6d71339a15\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.991883 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/801bcd0e-4229-479d-9b21-7b6d71339a15-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6\" (UID: \"801bcd0e-4229-479d-9b21-7b6d71339a15\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.991905 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/801bcd0e-4229-479d-9b21-7b6d71339a15-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6\" (UID: \"801bcd0e-4229-479d-9b21-7b6d71339a15\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.991941 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/801bcd0e-4229-479d-9b21-7b6d71339a15-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6\" (UID: \"801bcd0e-4229-479d-9b21-7b6d71339a15\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.992038 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/801bcd0e-4229-479d-9b21-7b6d71339a15-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6\" (UID: \"801bcd0e-4229-479d-9b21-7b6d71339a15\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.992076 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/801bcd0e-4229-479d-9b21-7b6d71339a15-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6\" (UID: \"801bcd0e-4229-479d-9b21-7b6d71339a15\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.992098 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/801bcd0e-4229-479d-9b21-7b6d71339a15-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6\" (UID: \"801bcd0e-4229-479d-9b21-7b6d71339a15\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.992132 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/801bcd0e-4229-479d-9b21-7b6d71339a15-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6\" (UID: \"801bcd0e-4229-479d-9b21-7b6d71339a15\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.992154 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/801bcd0e-4229-479d-9b21-7b6d71339a15-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6\" (UID: \"801bcd0e-4229-479d-9b21-7b6d71339a15\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.992173 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/801bcd0e-4229-479d-9b21-7b6d71339a15-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6\" (UID: \"801bcd0e-4229-479d-9b21-7b6d71339a15\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.992195 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/801bcd0e-4229-479d-9b21-7b6d71339a15-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6\" (UID: \"801bcd0e-4229-479d-9b21-7b6d71339a15\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.992220 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/801bcd0e-4229-479d-9b21-7b6d71339a15-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6\" (UID: \"801bcd0e-4229-479d-9b21-7b6d71339a15\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.992239 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8xf6\" (UniqueName: \"kubernetes.io/projected/801bcd0e-4229-479d-9b21-7b6d71339a15-kube-api-access-j8xf6\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6\" (UID: 
\"801bcd0e-4229-479d-9b21-7b6d71339a15\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.996154 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/801bcd0e-4229-479d-9b21-7b6d71339a15-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6\" (UID: \"801bcd0e-4229-479d-9b21-7b6d71339a15\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.996459 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/801bcd0e-4229-479d-9b21-7b6d71339a15-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6\" (UID: \"801bcd0e-4229-479d-9b21-7b6d71339a15\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.996985 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/801bcd0e-4229-479d-9b21-7b6d71339a15-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6\" (UID: \"801bcd0e-4229-479d-9b21-7b6d71339a15\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.997365 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/801bcd0e-4229-479d-9b21-7b6d71339a15-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6\" (UID: \"801bcd0e-4229-479d-9b21-7b6d71339a15\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.997923 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/801bcd0e-4229-479d-9b21-7b6d71339a15-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6\" (UID: \"801bcd0e-4229-479d-9b21-7b6d71339a15\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.998379 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/801bcd0e-4229-479d-9b21-7b6d71339a15-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6\" (UID: \"801bcd0e-4229-479d-9b21-7b6d71339a15\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.998378 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/801bcd0e-4229-479d-9b21-7b6d71339a15-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6\" (UID: \"801bcd0e-4229-479d-9b21-7b6d71339a15\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.998673 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/801bcd0e-4229-479d-9b21-7b6d71339a15-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6\" (UID: \"801bcd0e-4229-479d-9b21-7b6d71339a15\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.999596 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/801bcd0e-4229-479d-9b21-7b6d71339a15-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6\" (UID: \"801bcd0e-4229-479d-9b21-7b6d71339a15\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6" Jan 31 08:14:31 crc kubenswrapper[4826]: I0131 08:14:31.999741 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/801bcd0e-4229-479d-9b21-7b6d71339a15-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6\" (UID: \"801bcd0e-4229-479d-9b21-7b6d71339a15\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6" Jan 31 08:14:32 crc kubenswrapper[4826]: I0131 08:14:32.006127 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/801bcd0e-4229-479d-9b21-7b6d71339a15-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6\" (UID: \"801bcd0e-4229-479d-9b21-7b6d71339a15\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6" Jan 31 08:14:32 crc kubenswrapper[4826]: I0131 08:14:32.007115 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/801bcd0e-4229-479d-9b21-7b6d71339a15-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6\" (UID: \"801bcd0e-4229-479d-9b21-7b6d71339a15\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6" Jan 31 08:14:32 crc kubenswrapper[4826]: I0131 08:14:32.009505 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8xf6\" (UniqueName: \"kubernetes.io/projected/801bcd0e-4229-479d-9b21-7b6d71339a15-kube-api-access-j8xf6\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6\" (UID: \"801bcd0e-4229-479d-9b21-7b6d71339a15\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6" Jan 31 08:14:32 crc kubenswrapper[4826]: I0131 08:14:32.182320 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6" Jan 31 08:14:32 crc kubenswrapper[4826]: I0131 08:14:32.693808 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6"] Jan 31 08:14:32 crc kubenswrapper[4826]: I0131 08:14:32.769267 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6" event={"ID":"801bcd0e-4229-479d-9b21-7b6d71339a15","Type":"ContainerStarted","Data":"d797963d15bf7d3fa64663736c387623123feac29696959b42557a07c9f8c755"} Jan 31 08:14:32 crc kubenswrapper[4826]: I0131 08:14:32.846917 4826 scope.go:117] "RemoveContainer" containerID="7cb0ce0fed71e477f145381a92e688f0509e6197352b346001707c707dffe081" Jan 31 08:14:32 crc kubenswrapper[4826]: I0131 08:14:32.867257 4826 scope.go:117] "RemoveContainer" containerID="31d30a0d4287d1253f7c878329a4acecc0cd87d29934400bde1f6547c589c05b" Jan 31 08:14:32 crc kubenswrapper[4826]: I0131 08:14:32.905713 4826 scope.go:117] "RemoveContainer" containerID="bb44de3c82c70ec41f4504f4611c0994f04495f14ec29d293a8a67b9dad3d5c3" Jan 31 08:14:33 crc kubenswrapper[4826]: I0131 08:14:33.778524 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6" event={"ID":"801bcd0e-4229-479d-9b21-7b6d71339a15","Type":"ContainerStarted","Data":"58d049afd2952e2673473328a781356d592f841eb9d666db887a77ddff73e6e5"} Jan 31 08:14:57 crc kubenswrapper[4826]: I0131 08:14:57.377012 4826 patch_prober.go:28] interesting pod/machine-config-daemon-8v6ng container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 08:14:57 crc kubenswrapper[4826]: I0131 08:14:57.377615 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 08:14:57 crc kubenswrapper[4826]: I0131 08:14:57.377672 4826 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" Jan 31 08:14:57 crc kubenswrapper[4826]: I0131 08:14:57.379162 4826 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4429222bf3d4f569e9c41884c938deefe5eb279c0a3297984f8955236857e1a8"} pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 08:14:57 crc kubenswrapper[4826]: I0131 08:14:57.379274 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" containerID="cri-o://4429222bf3d4f569e9c41884c938deefe5eb279c0a3297984f8955236857e1a8" gracePeriod=600 Jan 31 08:14:57 crc kubenswrapper[4826]: E0131 08:14:57.509115 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:14:57 crc kubenswrapper[4826]: I0131 08:14:57.991825 4826 generic.go:334] "Generic (PLEG): container finished" podID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerID="4429222bf3d4f569e9c41884c938deefe5eb279c0a3297984f8955236857e1a8" exitCode=0 Jan 31 08:14:57 crc kubenswrapper[4826]: I0131 08:14:57.991907 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" event={"ID":"ed10f53b-565a-4d14-a1d8-feabc15f08ea","Type":"ContainerDied","Data":"4429222bf3d4f569e9c41884c938deefe5eb279c0a3297984f8955236857e1a8"} Jan 31 08:14:57 crc kubenswrapper[4826]: I0131 08:14:57.992430 4826 scope.go:117] "RemoveContainer" containerID="ab691b619de71e82ee7b4f8aac2ac93883f05b3da2dd0cdc5dc1c271473c6187" Jan 31 08:14:57 crc kubenswrapper[4826]: I0131 08:14:57.993420 4826 scope.go:117] "RemoveContainer" containerID="4429222bf3d4f569e9c41884c938deefe5eb279c0a3297984f8955236857e1a8" Jan 31 08:14:57 crc kubenswrapper[4826]: E0131 08:14:57.993776 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:14:58 crc kubenswrapper[4826]: I0131 08:14:58.026862 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6" podStartSLOduration=26.636530506 podStartE2EDuration="27.026840023s" podCreationTimestamp="2026-01-31 08:14:31 +0000 UTC" firstStartedPulling="2026-01-31 08:14:32.695888769 +0000 UTC m=+2304.549775128" lastFinishedPulling="2026-01-31 08:14:33.086198286 +0000 UTC m=+2304.940084645" observedRunningTime="2026-01-31 08:14:33.799285451 +0000 UTC m=+2305.653171800" watchObservedRunningTime="2026-01-31 08:14:58.026840023 +0000 UTC m=+2329.880726382" Jan 31 08:15:00 crc kubenswrapper[4826]: I0131 08:15:00.159062 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497455-jzhb9"] Jan 31 08:15:00 crc kubenswrapper[4826]: I0131 08:15:00.160607 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497455-jzhb9" Jan 31 08:15:00 crc kubenswrapper[4826]: I0131 08:15:00.163506 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 08:15:00 crc kubenswrapper[4826]: I0131 08:15:00.168831 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497455-jzhb9"] Jan 31 08:15:00 crc kubenswrapper[4826]: I0131 08:15:00.169581 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 08:15:00 crc kubenswrapper[4826]: I0131 08:15:00.220704 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b92b8804-07a2-4ac5-b431-fcd9f2fbeacb-config-volume\") pod \"collect-profiles-29497455-jzhb9\" (UID: \"b92b8804-07a2-4ac5-b431-fcd9f2fbeacb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497455-jzhb9" Jan 31 08:15:00 crc kubenswrapper[4826]: I0131 08:15:00.220995 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwdfl\" (UniqueName: \"kubernetes.io/projected/b92b8804-07a2-4ac5-b431-fcd9f2fbeacb-kube-api-access-wwdfl\") pod \"collect-profiles-29497455-jzhb9\" (UID: \"b92b8804-07a2-4ac5-b431-fcd9f2fbeacb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497455-jzhb9" Jan 31 08:15:00 crc kubenswrapper[4826]: I0131 08:15:00.221214 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b92b8804-07a2-4ac5-b431-fcd9f2fbeacb-secret-volume\") pod \"collect-profiles-29497455-jzhb9\" (UID: \"b92b8804-07a2-4ac5-b431-fcd9f2fbeacb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497455-jzhb9" Jan 31 08:15:00 crc kubenswrapper[4826]: I0131 08:15:00.323621 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b92b8804-07a2-4ac5-b431-fcd9f2fbeacb-config-volume\") pod \"collect-profiles-29497455-jzhb9\" (UID: \"b92b8804-07a2-4ac5-b431-fcd9f2fbeacb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497455-jzhb9" Jan 31 08:15:00 crc kubenswrapper[4826]: I0131 08:15:00.323721 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwdfl\" (UniqueName: \"kubernetes.io/projected/b92b8804-07a2-4ac5-b431-fcd9f2fbeacb-kube-api-access-wwdfl\") pod \"collect-profiles-29497455-jzhb9\" (UID: \"b92b8804-07a2-4ac5-b431-fcd9f2fbeacb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497455-jzhb9" Jan 31 08:15:00 crc kubenswrapper[4826]: I0131 08:15:00.323794 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b92b8804-07a2-4ac5-b431-fcd9f2fbeacb-secret-volume\") pod \"collect-profiles-29497455-jzhb9\" (UID: \"b92b8804-07a2-4ac5-b431-fcd9f2fbeacb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497455-jzhb9" Jan 31 08:15:00 crc kubenswrapper[4826]: I0131 08:15:00.324533 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b92b8804-07a2-4ac5-b431-fcd9f2fbeacb-config-volume\") pod 
\"collect-profiles-29497455-jzhb9\" (UID: \"b92b8804-07a2-4ac5-b431-fcd9f2fbeacb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497455-jzhb9" Jan 31 08:15:00 crc kubenswrapper[4826]: I0131 08:15:00.329662 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b92b8804-07a2-4ac5-b431-fcd9f2fbeacb-secret-volume\") pod \"collect-profiles-29497455-jzhb9\" (UID: \"b92b8804-07a2-4ac5-b431-fcd9f2fbeacb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497455-jzhb9" Jan 31 08:15:00 crc kubenswrapper[4826]: I0131 08:15:00.343259 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwdfl\" (UniqueName: \"kubernetes.io/projected/b92b8804-07a2-4ac5-b431-fcd9f2fbeacb-kube-api-access-wwdfl\") pod \"collect-profiles-29497455-jzhb9\" (UID: \"b92b8804-07a2-4ac5-b431-fcd9f2fbeacb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497455-jzhb9" Jan 31 08:15:00 crc kubenswrapper[4826]: I0131 08:15:00.485514 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497455-jzhb9" Jan 31 08:15:00 crc kubenswrapper[4826]: I0131 08:15:00.968846 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497455-jzhb9"] Jan 31 08:15:01 crc kubenswrapper[4826]: I0131 08:15:01.028902 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497455-jzhb9" event={"ID":"b92b8804-07a2-4ac5-b431-fcd9f2fbeacb","Type":"ContainerStarted","Data":"ad9a412b5386de72eabcc608922fe01d7850d6cc699bad8de84fb09263624f93"} Jan 31 08:15:02 crc kubenswrapper[4826]: I0131 08:15:02.037549 4826 generic.go:334] "Generic (PLEG): container finished" podID="b92b8804-07a2-4ac5-b431-fcd9f2fbeacb" containerID="08d32ba8cf7bd36f3adf0668b5772a531a2d4483937fac3309abff0316b66267" exitCode=0 Jan 31 08:15:02 crc kubenswrapper[4826]: I0131 08:15:02.037657 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497455-jzhb9" event={"ID":"b92b8804-07a2-4ac5-b431-fcd9f2fbeacb","Type":"ContainerDied","Data":"08d32ba8cf7bd36f3adf0668b5772a531a2d4483937fac3309abff0316b66267"} Jan 31 08:15:02 crc kubenswrapper[4826]: I0131 08:15:02.039284 4826 generic.go:334] "Generic (PLEG): container finished" podID="801bcd0e-4229-479d-9b21-7b6d71339a15" containerID="58d049afd2952e2673473328a781356d592f841eb9d666db887a77ddff73e6e5" exitCode=0 Jan 31 08:15:02 crc kubenswrapper[4826]: I0131 08:15:02.039313 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6" event={"ID":"801bcd0e-4229-479d-9b21-7b6d71339a15","Type":"ContainerDied","Data":"58d049afd2952e2673473328a781356d592f841eb9d666db887a77ddff73e6e5"} Jan 31 08:15:03 crc kubenswrapper[4826]: I0131 08:15:03.416142 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497455-jzhb9" Jan 31 08:15:03 crc kubenswrapper[4826]: I0131 08:15:03.422996 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6" Jan 31 08:15:03 crc kubenswrapper[4826]: I0131 08:15:03.488426 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/801bcd0e-4229-479d-9b21-7b6d71339a15-repo-setup-combined-ca-bundle\") pod \"801bcd0e-4229-479d-9b21-7b6d71339a15\" (UID: \"801bcd0e-4229-479d-9b21-7b6d71339a15\") " Jan 31 08:15:03 crc kubenswrapper[4826]: I0131 08:15:03.488468 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b92b8804-07a2-4ac5-b431-fcd9f2fbeacb-config-volume\") pod \"b92b8804-07a2-4ac5-b431-fcd9f2fbeacb\" (UID: \"b92b8804-07a2-4ac5-b431-fcd9f2fbeacb\") " Jan 31 08:15:03 crc kubenswrapper[4826]: I0131 08:15:03.488519 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/801bcd0e-4229-479d-9b21-7b6d71339a15-bootstrap-combined-ca-bundle\") pod \"801bcd0e-4229-479d-9b21-7b6d71339a15\" (UID: \"801bcd0e-4229-479d-9b21-7b6d71339a15\") " Jan 31 08:15:03 crc kubenswrapper[4826]: I0131 08:15:03.488560 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/801bcd0e-4229-479d-9b21-7b6d71339a15-ssh-key-openstack-edpm-ipam\") pod \"801bcd0e-4229-479d-9b21-7b6d71339a15\" (UID: \"801bcd0e-4229-479d-9b21-7b6d71339a15\") " Jan 31 08:15:03 crc kubenswrapper[4826]: I0131 08:15:03.488598 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/801bcd0e-4229-479d-9b21-7b6d71339a15-ovn-combined-ca-bundle\") pod \"801bcd0e-4229-479d-9b21-7b6d71339a15\" (UID: \"801bcd0e-4229-479d-9b21-7b6d71339a15\") " Jan 31 08:15:03 crc kubenswrapper[4826]: I0131 08:15:03.488625 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/801bcd0e-4229-479d-9b21-7b6d71339a15-libvirt-combined-ca-bundle\") pod \"801bcd0e-4229-479d-9b21-7b6d71339a15\" (UID: \"801bcd0e-4229-479d-9b21-7b6d71339a15\") " Jan 31 08:15:03 crc kubenswrapper[4826]: I0131 08:15:03.488647 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/801bcd0e-4229-479d-9b21-7b6d71339a15-ceph\") pod \"801bcd0e-4229-479d-9b21-7b6d71339a15\" (UID: \"801bcd0e-4229-479d-9b21-7b6d71339a15\") " Jan 31 08:15:03 crc kubenswrapper[4826]: I0131 08:15:03.488671 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/801bcd0e-4229-479d-9b21-7b6d71339a15-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"801bcd0e-4229-479d-9b21-7b6d71339a15\" (UID: \"801bcd0e-4229-479d-9b21-7b6d71339a15\") " Jan 31 08:15:03 crc kubenswrapper[4826]: I0131 08:15:03.488695 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8xf6\" (UniqueName: \"kubernetes.io/projected/801bcd0e-4229-479d-9b21-7b6d71339a15-kube-api-access-j8xf6\") pod \"801bcd0e-4229-479d-9b21-7b6d71339a15\" (UID: \"801bcd0e-4229-479d-9b21-7b6d71339a15\") " Jan 31 08:15:03 crc kubenswrapper[4826]: I0131 08:15:03.488746 4826 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/801bcd0e-4229-479d-9b21-7b6d71339a15-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"801bcd0e-4229-479d-9b21-7b6d71339a15\" (UID: \"801bcd0e-4229-479d-9b21-7b6d71339a15\") " Jan 31 08:15:03 crc kubenswrapper[4826]: I0131 08:15:03.488777 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/801bcd0e-4229-479d-9b21-7b6d71339a15-nova-combined-ca-bundle\") pod \"801bcd0e-4229-479d-9b21-7b6d71339a15\" (UID: \"801bcd0e-4229-479d-9b21-7b6d71339a15\") " Jan 31 08:15:03 crc kubenswrapper[4826]: I0131 08:15:03.488797 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/801bcd0e-4229-479d-9b21-7b6d71339a15-openstack-edpm-ipam-ovn-default-certs-0\") pod \"801bcd0e-4229-479d-9b21-7b6d71339a15\" (UID: \"801bcd0e-4229-479d-9b21-7b6d71339a15\") " Jan 31 08:15:03 crc kubenswrapper[4826]: I0131 08:15:03.488821 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/801bcd0e-4229-479d-9b21-7b6d71339a15-inventory\") pod \"801bcd0e-4229-479d-9b21-7b6d71339a15\" (UID: \"801bcd0e-4229-479d-9b21-7b6d71339a15\") " Jan 31 08:15:03 crc kubenswrapper[4826]: I0131 08:15:03.488855 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b92b8804-07a2-4ac5-b431-fcd9f2fbeacb-secret-volume\") pod \"b92b8804-07a2-4ac5-b431-fcd9f2fbeacb\" (UID: \"b92b8804-07a2-4ac5-b431-fcd9f2fbeacb\") " Jan 31 08:15:03 crc kubenswrapper[4826]: I0131 08:15:03.488880 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/801bcd0e-4229-479d-9b21-7b6d71339a15-neutron-metadata-combined-ca-bundle\") pod \"801bcd0e-4229-479d-9b21-7b6d71339a15\" (UID: \"801bcd0e-4229-479d-9b21-7b6d71339a15\") " Jan 31 08:15:03 crc kubenswrapper[4826]: I0131 08:15:03.488902 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwdfl\" (UniqueName: \"kubernetes.io/projected/b92b8804-07a2-4ac5-b431-fcd9f2fbeacb-kube-api-access-wwdfl\") pod \"b92b8804-07a2-4ac5-b431-fcd9f2fbeacb\" (UID: \"b92b8804-07a2-4ac5-b431-fcd9f2fbeacb\") " Jan 31 08:15:03 crc kubenswrapper[4826]: I0131 08:15:03.490413 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b92b8804-07a2-4ac5-b431-fcd9f2fbeacb-config-volume" (OuterVolumeSpecName: "config-volume") pod "b92b8804-07a2-4ac5-b431-fcd9f2fbeacb" (UID: "b92b8804-07a2-4ac5-b431-fcd9f2fbeacb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 08:15:03 crc kubenswrapper[4826]: I0131 08:15:03.495356 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/801bcd0e-4229-479d-9b21-7b6d71339a15-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "801bcd0e-4229-479d-9b21-7b6d71339a15" (UID: "801bcd0e-4229-479d-9b21-7b6d71339a15"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:15:03 crc kubenswrapper[4826]: I0131 08:15:03.495515 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/801bcd0e-4229-479d-9b21-7b6d71339a15-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "801bcd0e-4229-479d-9b21-7b6d71339a15" (UID: "801bcd0e-4229-479d-9b21-7b6d71339a15"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:15:03 crc kubenswrapper[4826]: I0131 08:15:03.495695 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/801bcd0e-4229-479d-9b21-7b6d71339a15-kube-api-access-j8xf6" (OuterVolumeSpecName: "kube-api-access-j8xf6") pod "801bcd0e-4229-479d-9b21-7b6d71339a15" (UID: "801bcd0e-4229-479d-9b21-7b6d71339a15"). InnerVolumeSpecName "kube-api-access-j8xf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:15:03 crc kubenswrapper[4826]: I0131 08:15:03.496147 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/801bcd0e-4229-479d-9b21-7b6d71339a15-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "801bcd0e-4229-479d-9b21-7b6d71339a15" (UID: "801bcd0e-4229-479d-9b21-7b6d71339a15"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:15:03 crc kubenswrapper[4826]: I0131 08:15:03.496626 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b92b8804-07a2-4ac5-b431-fcd9f2fbeacb-kube-api-access-wwdfl" (OuterVolumeSpecName: "kube-api-access-wwdfl") pod "b92b8804-07a2-4ac5-b431-fcd9f2fbeacb" (UID: "b92b8804-07a2-4ac5-b431-fcd9f2fbeacb"). InnerVolumeSpecName "kube-api-access-wwdfl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:15:03 crc kubenswrapper[4826]: I0131 08:15:03.497201 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/801bcd0e-4229-479d-9b21-7b6d71339a15-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "801bcd0e-4229-479d-9b21-7b6d71339a15" (UID: "801bcd0e-4229-479d-9b21-7b6d71339a15"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:15:03 crc kubenswrapper[4826]: I0131 08:15:03.497488 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/801bcd0e-4229-479d-9b21-7b6d71339a15-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "801bcd0e-4229-479d-9b21-7b6d71339a15" (UID: "801bcd0e-4229-479d-9b21-7b6d71339a15"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:15:03 crc kubenswrapper[4826]: I0131 08:15:03.497691 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/801bcd0e-4229-479d-9b21-7b6d71339a15-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "801bcd0e-4229-479d-9b21-7b6d71339a15" (UID: "801bcd0e-4229-479d-9b21-7b6d71339a15"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:15:03 crc kubenswrapper[4826]: I0131 08:15:03.498373 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/801bcd0e-4229-479d-9b21-7b6d71339a15-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "801bcd0e-4229-479d-9b21-7b6d71339a15" (UID: "801bcd0e-4229-479d-9b21-7b6d71339a15"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:15:03 crc kubenswrapper[4826]: I0131 08:15:03.498461 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/801bcd0e-4229-479d-9b21-7b6d71339a15-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "801bcd0e-4229-479d-9b21-7b6d71339a15" (UID: "801bcd0e-4229-479d-9b21-7b6d71339a15"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:15:03 crc kubenswrapper[4826]: I0131 08:15:03.498654 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/801bcd0e-4229-479d-9b21-7b6d71339a15-ceph" (OuterVolumeSpecName: "ceph") pod "801bcd0e-4229-479d-9b21-7b6d71339a15" (UID: "801bcd0e-4229-479d-9b21-7b6d71339a15"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:15:03 crc kubenswrapper[4826]: I0131 08:15:03.499022 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b92b8804-07a2-4ac5-b431-fcd9f2fbeacb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b92b8804-07a2-4ac5-b431-fcd9f2fbeacb" (UID: "b92b8804-07a2-4ac5-b431-fcd9f2fbeacb"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:15:03 crc kubenswrapper[4826]: I0131 08:15:03.499525 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/801bcd0e-4229-479d-9b21-7b6d71339a15-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "801bcd0e-4229-479d-9b21-7b6d71339a15" (UID: "801bcd0e-4229-479d-9b21-7b6d71339a15"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:15:03 crc kubenswrapper[4826]: I0131 08:15:03.520461 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/801bcd0e-4229-479d-9b21-7b6d71339a15-inventory" (OuterVolumeSpecName: "inventory") pod "801bcd0e-4229-479d-9b21-7b6d71339a15" (UID: "801bcd0e-4229-479d-9b21-7b6d71339a15"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:15:03 crc kubenswrapper[4826]: I0131 08:15:03.523226 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/801bcd0e-4229-479d-9b21-7b6d71339a15-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "801bcd0e-4229-479d-9b21-7b6d71339a15" (UID: "801bcd0e-4229-479d-9b21-7b6d71339a15"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:15:03 crc kubenswrapper[4826]: I0131 08:15:03.591400 4826 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b92b8804-07a2-4ac5-b431-fcd9f2fbeacb-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 08:15:03 crc kubenswrapper[4826]: I0131 08:15:03.591433 4826 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/801bcd0e-4229-479d-9b21-7b6d71339a15-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 08:15:03 crc kubenswrapper[4826]: I0131 08:15:03.591444 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwdfl\" (UniqueName: \"kubernetes.io/projected/b92b8804-07a2-4ac5-b431-fcd9f2fbeacb-kube-api-access-wwdfl\") on node \"crc\" DevicePath \"\"" Jan 31 08:15:03 crc kubenswrapper[4826]: I0131 08:15:03.591458 4826 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/801bcd0e-4229-479d-9b21-7b6d71339a15-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 08:15:03 crc kubenswrapper[4826]: I0131 08:15:03.591468 4826 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b92b8804-07a2-4ac5-b431-fcd9f2fbeacb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 08:15:03 crc kubenswrapper[4826]: I0131 08:15:03.591476 4826 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/801bcd0e-4229-479d-9b21-7b6d71339a15-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 08:15:03 crc kubenswrapper[4826]: I0131 08:15:03.591486 4826 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/801bcd0e-4229-479d-9b21-7b6d71339a15-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 08:15:03 crc kubenswrapper[4826]: I0131 08:15:03.591494 4826 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/801bcd0e-4229-479d-9b21-7b6d71339a15-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 08:15:03 crc kubenswrapper[4826]: I0131 08:15:03.591501 4826 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/801bcd0e-4229-479d-9b21-7b6d71339a15-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 08:15:03 crc kubenswrapper[4826]: I0131 08:15:03.591511 4826 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/801bcd0e-4229-479d-9b21-7b6d71339a15-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 08:15:03 crc kubenswrapper[4826]: I0131 08:15:03.591520 4826 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/801bcd0e-4229-479d-9b21-7b6d71339a15-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 31 08:15:03 crc kubenswrapper[4826]: I0131 08:15:03.591529 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8xf6\" (UniqueName: \"kubernetes.io/projected/801bcd0e-4229-479d-9b21-7b6d71339a15-kube-api-access-j8xf6\") on node \"crc\" DevicePath \"\"" Jan 31 08:15:03 crc kubenswrapper[4826]: I0131 08:15:03.591537 
4826 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/801bcd0e-4229-479d-9b21-7b6d71339a15-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 31 08:15:03 crc kubenswrapper[4826]: I0131 08:15:03.591547 4826 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/801bcd0e-4229-479d-9b21-7b6d71339a15-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 08:15:03 crc kubenswrapper[4826]: I0131 08:15:03.591557 4826 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/801bcd0e-4229-479d-9b21-7b6d71339a15-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 31 08:15:03 crc kubenswrapper[4826]: I0131 08:15:03.591566 4826 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/801bcd0e-4229-479d-9b21-7b6d71339a15-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 08:15:04 crc kubenswrapper[4826]: I0131 08:15:04.057386 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497455-jzhb9" event={"ID":"b92b8804-07a2-4ac5-b431-fcd9f2fbeacb","Type":"ContainerDied","Data":"ad9a412b5386de72eabcc608922fe01d7850d6cc699bad8de84fb09263624f93"} Jan 31 08:15:04 crc kubenswrapper[4826]: I0131 08:15:04.057430 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad9a412b5386de72eabcc608922fe01d7850d6cc699bad8de84fb09263624f93" Jan 31 08:15:04 crc kubenswrapper[4826]: I0131 08:15:04.057511 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497455-jzhb9" Jan 31 08:15:04 crc kubenswrapper[4826]: I0131 08:15:04.059375 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6" event={"ID":"801bcd0e-4229-479d-9b21-7b6d71339a15","Type":"ContainerDied","Data":"d797963d15bf7d3fa64663736c387623123feac29696959b42557a07c9f8c755"} Jan 31 08:15:04 crc kubenswrapper[4826]: I0131 08:15:04.059418 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d797963d15bf7d3fa64663736c387623123feac29696959b42557a07c9f8c755" Jan 31 08:15:04 crc kubenswrapper[4826]: I0131 08:15:04.059478 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6" Jan 31 08:15:04 crc kubenswrapper[4826]: I0131 08:15:04.198187 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-4z55w"] Jan 31 08:15:04 crc kubenswrapper[4826]: E0131 08:15:04.198533 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b92b8804-07a2-4ac5-b431-fcd9f2fbeacb" containerName="collect-profiles" Jan 31 08:15:04 crc kubenswrapper[4826]: I0131 08:15:04.198547 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="b92b8804-07a2-4ac5-b431-fcd9f2fbeacb" containerName="collect-profiles" Jan 31 08:15:04 crc kubenswrapper[4826]: E0131 08:15:04.198574 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="801bcd0e-4229-479d-9b21-7b6d71339a15" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 31 08:15:04 crc kubenswrapper[4826]: I0131 08:15:04.198581 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="801bcd0e-4229-479d-9b21-7b6d71339a15" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 31 08:15:04 crc kubenswrapper[4826]: I0131 08:15:04.198745 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="801bcd0e-4229-479d-9b21-7b6d71339a15" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 31 08:15:04 crc kubenswrapper[4826]: I0131 08:15:04.198765 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="b92b8804-07a2-4ac5-b431-fcd9f2fbeacb" containerName="collect-profiles" Jan 31 08:15:04 crc kubenswrapper[4826]: I0131 08:15:04.199323 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-4z55w" Jan 31 08:15:04 crc kubenswrapper[4826]: I0131 08:15:04.201778 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 08:15:04 crc kubenswrapper[4826]: I0131 08:15:04.201818 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 08:15:04 crc kubenswrapper[4826]: I0131 08:15:04.201833 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 08:15:04 crc kubenswrapper[4826]: I0131 08:15:04.201946 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hbmks" Jan 31 08:15:04 crc kubenswrapper[4826]: I0131 08:15:04.202199 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 08:15:04 crc kubenswrapper[4826]: I0131 08:15:04.209840 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-4z55w"] Jan 31 08:15:04 crc kubenswrapper[4826]: I0131 08:15:04.304517 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8ffa923f-5c55-4d65-86bf-a6dbc1fde423-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-4z55w\" (UID: \"8ffa923f-5c55-4d65-86bf-a6dbc1fde423\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-4z55w" Jan 31 08:15:04 crc kubenswrapper[4826]: I0131 08:15:04.304860 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/8ffa923f-5c55-4d65-86bf-a6dbc1fde423-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-4z55w\" (UID: \"8ffa923f-5c55-4d65-86bf-a6dbc1fde423\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-4z55w" Jan 31 08:15:04 crc kubenswrapper[4826]: I0131 08:15:04.304896 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn6rg\" (UniqueName: \"kubernetes.io/projected/8ffa923f-5c55-4d65-86bf-a6dbc1fde423-kube-api-access-zn6rg\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-4z55w\" (UID: \"8ffa923f-5c55-4d65-86bf-a6dbc1fde423\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-4z55w" Jan 31 08:15:04 crc kubenswrapper[4826]: I0131 08:15:04.305065 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8ffa923f-5c55-4d65-86bf-a6dbc1fde423-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-4z55w\" (UID: \"8ffa923f-5c55-4d65-86bf-a6dbc1fde423\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-4z55w" Jan 31 08:15:04 crc kubenswrapper[4826]: I0131 08:15:04.406485 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8ffa923f-5c55-4d65-86bf-a6dbc1fde423-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-4z55w\" (UID: \"8ffa923f-5c55-4d65-86bf-a6dbc1fde423\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-4z55w" Jan 31 08:15:04 crc kubenswrapper[4826]: I0131 08:15:04.406573 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8ffa923f-5c55-4d65-86bf-a6dbc1fde423-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-4z55w\" (UID: \"8ffa923f-5c55-4d65-86bf-a6dbc1fde423\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-4z55w" Jan 31 08:15:04 crc kubenswrapper[4826]: I0131 08:15:04.406609 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ffa923f-5c55-4d65-86bf-a6dbc1fde423-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-4z55w\" (UID: \"8ffa923f-5c55-4d65-86bf-a6dbc1fde423\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-4z55w" Jan 31 08:15:04 crc kubenswrapper[4826]: I0131 08:15:04.406641 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn6rg\" (UniqueName: \"kubernetes.io/projected/8ffa923f-5c55-4d65-86bf-a6dbc1fde423-kube-api-access-zn6rg\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-4z55w\" (UID: \"8ffa923f-5c55-4d65-86bf-a6dbc1fde423\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-4z55w" Jan 31 08:15:04 crc kubenswrapper[4826]: I0131 08:15:04.410987 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ffa923f-5c55-4d65-86bf-a6dbc1fde423-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-4z55w\" (UID: \"8ffa923f-5c55-4d65-86bf-a6dbc1fde423\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-4z55w" Jan 31 08:15:04 crc kubenswrapper[4826]: I0131 08:15:04.412116 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/8ffa923f-5c55-4d65-86bf-a6dbc1fde423-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-4z55w\" (UID: \"8ffa923f-5c55-4d65-86bf-a6dbc1fde423\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-4z55w" Jan 31 08:15:04 crc kubenswrapper[4826]: I0131 08:15:04.418231 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8ffa923f-5c55-4d65-86bf-a6dbc1fde423-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-4z55w\" (UID: \"8ffa923f-5c55-4d65-86bf-a6dbc1fde423\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-4z55w" Jan 31 08:15:04 crc kubenswrapper[4826]: I0131 08:15:04.424401 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn6rg\" (UniqueName: \"kubernetes.io/projected/8ffa923f-5c55-4d65-86bf-a6dbc1fde423-kube-api-access-zn6rg\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-4z55w\" (UID: \"8ffa923f-5c55-4d65-86bf-a6dbc1fde423\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-4z55w" Jan 31 08:15:04 crc kubenswrapper[4826]: I0131 08:15:04.487408 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497410-bfxcj"] Jan 31 08:15:04 crc kubenswrapper[4826]: I0131 08:15:04.497335 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497410-bfxcj"] Jan 31 08:15:04 crc kubenswrapper[4826]: I0131 08:15:04.523636 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-4z55w" Jan 31 08:15:04 crc kubenswrapper[4826]: I0131 08:15:04.823661 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="125e0e2a-6c4a-487f-ab4e-fb439ba80bc0" path="/var/lib/kubelet/pods/125e0e2a-6c4a-487f-ab4e-fb439ba80bc0/volumes" Jan 31 08:15:05 crc kubenswrapper[4826]: W0131 08:15:05.034719 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ffa923f_5c55_4d65_86bf_a6dbc1fde423.slice/crio-e65c27b30ba2eea01d1db32e5b624fcffc82a46f6a035cea38dde038cd9b0a9e WatchSource:0}: Error finding container e65c27b30ba2eea01d1db32e5b624fcffc82a46f6a035cea38dde038cd9b0a9e: Status 404 returned error can't find the container with id e65c27b30ba2eea01d1db32e5b624fcffc82a46f6a035cea38dde038cd9b0a9e Jan 31 08:15:05 crc kubenswrapper[4826]: I0131 08:15:05.037710 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-4z55w"] Jan 31 08:15:05 crc kubenswrapper[4826]: I0131 08:15:05.068993 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-4z55w" event={"ID":"8ffa923f-5c55-4d65-86bf-a6dbc1fde423","Type":"ContainerStarted","Data":"e65c27b30ba2eea01d1db32e5b624fcffc82a46f6a035cea38dde038cd9b0a9e"} Jan 31 08:15:06 crc kubenswrapper[4826]: I0131 08:15:06.078798 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-4z55w" event={"ID":"8ffa923f-5c55-4d65-86bf-a6dbc1fde423","Type":"ContainerStarted","Data":"329b5e3cf4ee97fa79779c60152503fc4ad207b6b45fb0c41f6f4117c453cd74"} Jan 31 08:15:06 crc kubenswrapper[4826]: I0131 08:15:06.100199 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-4z55w" podStartSLOduration=1.6350071000000002 podStartE2EDuration="2.100120814s" podCreationTimestamp="2026-01-31 08:15:04 +0000 UTC" firstStartedPulling="2026-01-31 08:15:05.037288965 +0000 UTC m=+2336.891175324" lastFinishedPulling="2026-01-31 08:15:05.502402669 +0000 UTC m=+2337.356289038" observedRunningTime="2026-01-31 08:15:06.091466668 +0000 UTC m=+2337.945353027" watchObservedRunningTime="2026-01-31 08:15:06.100120814 +0000 UTC m=+2337.954007183" Jan 31 08:15:11 crc kubenswrapper[4826]: I0131 08:15:11.118333 4826 generic.go:334] "Generic (PLEG): container finished" podID="8ffa923f-5c55-4d65-86bf-a6dbc1fde423" containerID="329b5e3cf4ee97fa79779c60152503fc4ad207b6b45fb0c41f6f4117c453cd74" exitCode=0 Jan 31 08:15:11 crc kubenswrapper[4826]: I0131 08:15:11.118371 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-4z55w" event={"ID":"8ffa923f-5c55-4d65-86bf-a6dbc1fde423","Type":"ContainerDied","Data":"329b5e3cf4ee97fa79779c60152503fc4ad207b6b45fb0c41f6f4117c453cd74"} Jan 31 08:15:12 crc kubenswrapper[4826]: I0131 08:15:12.500609 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-4z55w" Jan 31 08:15:12 crc kubenswrapper[4826]: I0131 08:15:12.562456 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8ffa923f-5c55-4d65-86bf-a6dbc1fde423-ssh-key-openstack-edpm-ipam\") pod \"8ffa923f-5c55-4d65-86bf-a6dbc1fde423\" (UID: \"8ffa923f-5c55-4d65-86bf-a6dbc1fde423\") " Jan 31 08:15:12 crc kubenswrapper[4826]: I0131 08:15:12.562550 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8ffa923f-5c55-4d65-86bf-a6dbc1fde423-ceph\") pod \"8ffa923f-5c55-4d65-86bf-a6dbc1fde423\" (UID: \"8ffa923f-5c55-4d65-86bf-a6dbc1fde423\") " Jan 31 08:15:12 crc kubenswrapper[4826]: I0131 08:15:12.562697 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ffa923f-5c55-4d65-86bf-a6dbc1fde423-inventory\") pod \"8ffa923f-5c55-4d65-86bf-a6dbc1fde423\" (UID: \"8ffa923f-5c55-4d65-86bf-a6dbc1fde423\") " Jan 31 08:15:12 crc kubenswrapper[4826]: I0131 08:15:12.562743 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zn6rg\" (UniqueName: \"kubernetes.io/projected/8ffa923f-5c55-4d65-86bf-a6dbc1fde423-kube-api-access-zn6rg\") pod \"8ffa923f-5c55-4d65-86bf-a6dbc1fde423\" (UID: \"8ffa923f-5c55-4d65-86bf-a6dbc1fde423\") " Jan 31 08:15:12 crc kubenswrapper[4826]: I0131 08:15:12.568193 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ffa923f-5c55-4d65-86bf-a6dbc1fde423-ceph" (OuterVolumeSpecName: "ceph") pod "8ffa923f-5c55-4d65-86bf-a6dbc1fde423" (UID: "8ffa923f-5c55-4d65-86bf-a6dbc1fde423"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:15:12 crc kubenswrapper[4826]: I0131 08:15:12.568328 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ffa923f-5c55-4d65-86bf-a6dbc1fde423-kube-api-access-zn6rg" (OuterVolumeSpecName: "kube-api-access-zn6rg") pod "8ffa923f-5c55-4d65-86bf-a6dbc1fde423" (UID: "8ffa923f-5c55-4d65-86bf-a6dbc1fde423"). 
InnerVolumeSpecName "kube-api-access-zn6rg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:15:12 crc kubenswrapper[4826]: I0131 08:15:12.588303 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ffa923f-5c55-4d65-86bf-a6dbc1fde423-inventory" (OuterVolumeSpecName: "inventory") pod "8ffa923f-5c55-4d65-86bf-a6dbc1fde423" (UID: "8ffa923f-5c55-4d65-86bf-a6dbc1fde423"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:15:12 crc kubenswrapper[4826]: I0131 08:15:12.591718 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ffa923f-5c55-4d65-86bf-a6dbc1fde423-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8ffa923f-5c55-4d65-86bf-a6dbc1fde423" (UID: "8ffa923f-5c55-4d65-86bf-a6dbc1fde423"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:15:12 crc kubenswrapper[4826]: I0131 08:15:12.665512 4826 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ffa923f-5c55-4d65-86bf-a6dbc1fde423-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 08:15:12 crc kubenswrapper[4826]: I0131 08:15:12.665565 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zn6rg\" (UniqueName: \"kubernetes.io/projected/8ffa923f-5c55-4d65-86bf-a6dbc1fde423-kube-api-access-zn6rg\") on node \"crc\" DevicePath \"\"" Jan 31 08:15:12 crc kubenswrapper[4826]: I0131 08:15:12.665579 4826 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8ffa923f-5c55-4d65-86bf-a6dbc1fde423-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 08:15:12 crc kubenswrapper[4826]: I0131 08:15:12.665591 4826 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8ffa923f-5c55-4d65-86bf-a6dbc1fde423-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 08:15:13 crc kubenswrapper[4826]: I0131 08:15:13.138255 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-4z55w" event={"ID":"8ffa923f-5c55-4d65-86bf-a6dbc1fde423","Type":"ContainerDied","Data":"e65c27b30ba2eea01d1db32e5b624fcffc82a46f6a035cea38dde038cd9b0a9e"} Jan 31 08:15:13 crc kubenswrapper[4826]: I0131 08:15:13.138772 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e65c27b30ba2eea01d1db32e5b624fcffc82a46f6a035cea38dde038cd9b0a9e" Jan 31 08:15:13 crc kubenswrapper[4826]: I0131 08:15:13.138488 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-4z55w" Jan 31 08:15:13 crc kubenswrapper[4826]: I0131 08:15:13.205283 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-jfl9h"] Jan 31 08:15:13 crc kubenswrapper[4826]: E0131 08:15:13.205641 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ffa923f-5c55-4d65-86bf-a6dbc1fde423" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Jan 31 08:15:13 crc kubenswrapper[4826]: I0131 08:15:13.205658 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ffa923f-5c55-4d65-86bf-a6dbc1fde423" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Jan 31 08:15:13 crc kubenswrapper[4826]: I0131 08:15:13.205838 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ffa923f-5c55-4d65-86bf-a6dbc1fde423" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Jan 31 08:15:13 crc kubenswrapper[4826]: I0131 08:15:13.206603 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jfl9h" Jan 31 08:15:13 crc kubenswrapper[4826]: I0131 08:15:13.211912 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 08:15:13 crc kubenswrapper[4826]: I0131 08:15:13.212572 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hbmks" Jan 31 08:15:13 crc kubenswrapper[4826]: I0131 08:15:13.212820 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 08:15:13 crc kubenswrapper[4826]: I0131 08:15:13.213024 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 31 08:15:13 crc kubenswrapper[4826]: I0131 08:15:13.213366 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 08:15:13 crc kubenswrapper[4826]: I0131 08:15:13.213503 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 08:15:13 crc kubenswrapper[4826]: I0131 08:15:13.229344 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-jfl9h"] Jan 31 08:15:13 crc kubenswrapper[4826]: I0131 08:15:13.276851 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3fd20e7c-b2ff-4784-86ea-e74db981caca-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jfl9h\" (UID: \"3fd20e7c-b2ff-4784-86ea-e74db981caca\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jfl9h" Jan 31 08:15:13 crc kubenswrapper[4826]: I0131 08:15:13.277086 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fd20e7c-b2ff-4784-86ea-e74db981caca-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jfl9h\" (UID: \"3fd20e7c-b2ff-4784-86ea-e74db981caca\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jfl9h" Jan 31 08:15:13 crc kubenswrapper[4826]: I0131 08:15:13.277177 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: 
\"kubernetes.io/configmap/3fd20e7c-b2ff-4784-86ea-e74db981caca-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jfl9h\" (UID: \"3fd20e7c-b2ff-4784-86ea-e74db981caca\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jfl9h" Jan 31 08:15:13 crc kubenswrapper[4826]: I0131 08:15:13.277226 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3fd20e7c-b2ff-4784-86ea-e74db981caca-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jfl9h\" (UID: \"3fd20e7c-b2ff-4784-86ea-e74db981caca\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jfl9h" Jan 31 08:15:13 crc kubenswrapper[4826]: I0131 08:15:13.277377 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3fd20e7c-b2ff-4784-86ea-e74db981caca-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jfl9h\" (UID: \"3fd20e7c-b2ff-4784-86ea-e74db981caca\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jfl9h" Jan 31 08:15:13 crc kubenswrapper[4826]: I0131 08:15:13.277510 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmz4f\" (UniqueName: \"kubernetes.io/projected/3fd20e7c-b2ff-4784-86ea-e74db981caca-kube-api-access-mmz4f\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jfl9h\" (UID: \"3fd20e7c-b2ff-4784-86ea-e74db981caca\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jfl9h" Jan 31 08:15:13 crc kubenswrapper[4826]: I0131 08:15:13.379154 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmz4f\" (UniqueName: \"kubernetes.io/projected/3fd20e7c-b2ff-4784-86ea-e74db981caca-kube-api-access-mmz4f\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jfl9h\" (UID: \"3fd20e7c-b2ff-4784-86ea-e74db981caca\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jfl9h" Jan 31 08:15:13 crc kubenswrapper[4826]: I0131 08:15:13.379624 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3fd20e7c-b2ff-4784-86ea-e74db981caca-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jfl9h\" (UID: \"3fd20e7c-b2ff-4784-86ea-e74db981caca\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jfl9h" Jan 31 08:15:13 crc kubenswrapper[4826]: I0131 08:15:13.379803 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fd20e7c-b2ff-4784-86ea-e74db981caca-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jfl9h\" (UID: \"3fd20e7c-b2ff-4784-86ea-e74db981caca\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jfl9h" Jan 31 08:15:13 crc kubenswrapper[4826]: I0131 08:15:13.379911 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/3fd20e7c-b2ff-4784-86ea-e74db981caca-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jfl9h\" (UID: \"3fd20e7c-b2ff-4784-86ea-e74db981caca\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jfl9h" Jan 31 08:15:13 crc kubenswrapper[4826]: I0131 08:15:13.380044 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/3fd20e7c-b2ff-4784-86ea-e74db981caca-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jfl9h\" (UID: \"3fd20e7c-b2ff-4784-86ea-e74db981caca\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jfl9h" Jan 31 08:15:13 crc kubenswrapper[4826]: I0131 08:15:13.380147 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3fd20e7c-b2ff-4784-86ea-e74db981caca-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jfl9h\" (UID: \"3fd20e7c-b2ff-4784-86ea-e74db981caca\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jfl9h" Jan 31 08:15:13 crc kubenswrapper[4826]: I0131 08:15:13.380997 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/3fd20e7c-b2ff-4784-86ea-e74db981caca-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jfl9h\" (UID: \"3fd20e7c-b2ff-4784-86ea-e74db981caca\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jfl9h" Jan 31 08:15:13 crc kubenswrapper[4826]: I0131 08:15:13.384504 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3fd20e7c-b2ff-4784-86ea-e74db981caca-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jfl9h\" (UID: \"3fd20e7c-b2ff-4784-86ea-e74db981caca\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jfl9h" Jan 31 08:15:13 crc kubenswrapper[4826]: I0131 08:15:13.384776 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fd20e7c-b2ff-4784-86ea-e74db981caca-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jfl9h\" (UID: \"3fd20e7c-b2ff-4784-86ea-e74db981caca\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jfl9h" Jan 31 08:15:13 crc kubenswrapper[4826]: I0131 08:15:13.385412 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3fd20e7c-b2ff-4784-86ea-e74db981caca-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jfl9h\" (UID: \"3fd20e7c-b2ff-4784-86ea-e74db981caca\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jfl9h" Jan 31 08:15:13 crc kubenswrapper[4826]: I0131 08:15:13.387117 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3fd20e7c-b2ff-4784-86ea-e74db981caca-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jfl9h\" (UID: \"3fd20e7c-b2ff-4784-86ea-e74db981caca\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jfl9h" Jan 31 08:15:13 crc kubenswrapper[4826]: I0131 08:15:13.398061 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmz4f\" (UniqueName: \"kubernetes.io/projected/3fd20e7c-b2ff-4784-86ea-e74db981caca-kube-api-access-mmz4f\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jfl9h\" (UID: \"3fd20e7c-b2ff-4784-86ea-e74db981caca\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jfl9h" Jan 31 08:15:13 crc kubenswrapper[4826]: I0131 08:15:13.527728 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jfl9h" Jan 31 08:15:13 crc kubenswrapper[4826]: I0131 08:15:13.809292 4826 scope.go:117] "RemoveContainer" containerID="4429222bf3d4f569e9c41884c938deefe5eb279c0a3297984f8955236857e1a8" Jan 31 08:15:13 crc kubenswrapper[4826]: E0131 08:15:13.809895 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:15:14 crc kubenswrapper[4826]: I0131 08:15:14.035694 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-jfl9h"] Jan 31 08:15:14 crc kubenswrapper[4826]: I0131 08:15:14.148478 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jfl9h" event={"ID":"3fd20e7c-b2ff-4784-86ea-e74db981caca","Type":"ContainerStarted","Data":"5895d8c0fe7c1d92443ec8340f55fff0fb49323be17a39420ec1e86c50bc9337"} Jan 31 08:15:15 crc kubenswrapper[4826]: I0131 08:15:15.158467 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jfl9h" event={"ID":"3fd20e7c-b2ff-4784-86ea-e74db981caca","Type":"ContainerStarted","Data":"137c7e5e114cc3f1867c2d8148b101637d48857f28d83157a80fcf343c3d675f"} Jan 31 08:15:15 crc kubenswrapper[4826]: I0131 08:15:15.178022 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jfl9h" podStartSLOduration=1.769079909 podStartE2EDuration="2.177939744s" podCreationTimestamp="2026-01-31 08:15:13 +0000 UTC" firstStartedPulling="2026-01-31 08:15:14.043622583 +0000 UTC m=+2345.897508932" lastFinishedPulling="2026-01-31 08:15:14.452482408 +0000 UTC m=+2346.306368767" observedRunningTime="2026-01-31 08:15:15.176262207 +0000 UTC m=+2347.030148576" watchObservedRunningTime="2026-01-31 08:15:15.177939744 +0000 UTC m=+2347.031826103" Jan 31 08:15:24 crc kubenswrapper[4826]: I0131 08:15:24.810346 4826 scope.go:117] "RemoveContainer" containerID="4429222bf3d4f569e9c41884c938deefe5eb279c0a3297984f8955236857e1a8" Jan 31 08:15:24 crc kubenswrapper[4826]: E0131 08:15:24.811192 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:15:32 crc kubenswrapper[4826]: I0131 08:15:32.950705 4826 scope.go:117] "RemoveContainer" containerID="e32d367b2e18eb7158f4602f99ef56a2b3ae65848df185a77fc1a15524cb09d8" Jan 31 08:15:39 crc kubenswrapper[4826]: I0131 08:15:39.809524 4826 scope.go:117] "RemoveContainer" containerID="4429222bf3d4f569e9c41884c938deefe5eb279c0a3297984f8955236857e1a8" Jan 31 08:15:39 crc kubenswrapper[4826]: E0131 08:15:39.810322 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:15:54 crc kubenswrapper[4826]: I0131 08:15:54.809175 4826 scope.go:117] "RemoveContainer" containerID="4429222bf3d4f569e9c41884c938deefe5eb279c0a3297984f8955236857e1a8" Jan 31 08:15:54 crc kubenswrapper[4826]: E0131 08:15:54.812368 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:16:08 crc kubenswrapper[4826]: I0131 08:16:08.830694 4826 scope.go:117] "RemoveContainer" containerID="4429222bf3d4f569e9c41884c938deefe5eb279c0a3297984f8955236857e1a8" Jan 31 08:16:08 crc kubenswrapper[4826]: E0131 08:16:08.831420 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:16:17 crc kubenswrapper[4826]: I0131 08:16:17.691561 4826 generic.go:334] "Generic (PLEG): container finished" podID="3fd20e7c-b2ff-4784-86ea-e74db981caca" containerID="137c7e5e114cc3f1867c2d8148b101637d48857f28d83157a80fcf343c3d675f" exitCode=0 Jan 31 08:16:17 crc kubenswrapper[4826]: I0131 08:16:17.691649 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jfl9h" event={"ID":"3fd20e7c-b2ff-4784-86ea-e74db981caca","Type":"ContainerDied","Data":"137c7e5e114cc3f1867c2d8148b101637d48857f28d83157a80fcf343c3d675f"} Jan 31 08:16:19 crc kubenswrapper[4826]: I0131 08:16:19.033014 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jfl9h" Jan 31 08:16:19 crc kubenswrapper[4826]: I0131 08:16:19.220816 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3fd20e7c-b2ff-4784-86ea-e74db981caca-ssh-key-openstack-edpm-ipam\") pod \"3fd20e7c-b2ff-4784-86ea-e74db981caca\" (UID: \"3fd20e7c-b2ff-4784-86ea-e74db981caca\") " Jan 31 08:16:19 crc kubenswrapper[4826]: I0131 08:16:19.220929 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3fd20e7c-b2ff-4784-86ea-e74db981caca-ceph\") pod \"3fd20e7c-b2ff-4784-86ea-e74db981caca\" (UID: \"3fd20e7c-b2ff-4784-86ea-e74db981caca\") " Jan 31 08:16:19 crc kubenswrapper[4826]: I0131 08:16:19.221065 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmz4f\" (UniqueName: \"kubernetes.io/projected/3fd20e7c-b2ff-4784-86ea-e74db981caca-kube-api-access-mmz4f\") pod \"3fd20e7c-b2ff-4784-86ea-e74db981caca\" (UID: \"3fd20e7c-b2ff-4784-86ea-e74db981caca\") " Jan 31 08:16:19 crc kubenswrapper[4826]: I0131 08:16:19.221089 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fd20e7c-b2ff-4784-86ea-e74db981caca-ovn-combined-ca-bundle\") pod \"3fd20e7c-b2ff-4784-86ea-e74db981caca\" (UID: \"3fd20e7c-b2ff-4784-86ea-e74db981caca\") " Jan 31 08:16:19 crc kubenswrapper[4826]: I0131 08:16:19.221238 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3fd20e7c-b2ff-4784-86ea-e74db981caca-inventory\") pod \"3fd20e7c-b2ff-4784-86ea-e74db981caca\" (UID: \"3fd20e7c-b2ff-4784-86ea-e74db981caca\") " Jan 31 08:16:19 crc kubenswrapper[4826]: I0131 08:16:19.221265 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/3fd20e7c-b2ff-4784-86ea-e74db981caca-ovncontroller-config-0\") pod \"3fd20e7c-b2ff-4784-86ea-e74db981caca\" (UID: \"3fd20e7c-b2ff-4784-86ea-e74db981caca\") " Jan 31 08:16:19 crc kubenswrapper[4826]: I0131 08:16:19.228015 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fd20e7c-b2ff-4784-86ea-e74db981caca-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "3fd20e7c-b2ff-4784-86ea-e74db981caca" (UID: "3fd20e7c-b2ff-4784-86ea-e74db981caca"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:16:19 crc kubenswrapper[4826]: I0131 08:16:19.229590 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fd20e7c-b2ff-4784-86ea-e74db981caca-ceph" (OuterVolumeSpecName: "ceph") pod "3fd20e7c-b2ff-4784-86ea-e74db981caca" (UID: "3fd20e7c-b2ff-4784-86ea-e74db981caca"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:16:19 crc kubenswrapper[4826]: I0131 08:16:19.229910 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fd20e7c-b2ff-4784-86ea-e74db981caca-kube-api-access-mmz4f" (OuterVolumeSpecName: "kube-api-access-mmz4f") pod "3fd20e7c-b2ff-4784-86ea-e74db981caca" (UID: "3fd20e7c-b2ff-4784-86ea-e74db981caca"). InnerVolumeSpecName "kube-api-access-mmz4f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:16:19 crc kubenswrapper[4826]: I0131 08:16:19.250100 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fd20e7c-b2ff-4784-86ea-e74db981caca-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3fd20e7c-b2ff-4784-86ea-e74db981caca" (UID: "3fd20e7c-b2ff-4784-86ea-e74db981caca"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:16:19 crc kubenswrapper[4826]: I0131 08:16:19.250608 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fd20e7c-b2ff-4784-86ea-e74db981caca-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "3fd20e7c-b2ff-4784-86ea-e74db981caca" (UID: "3fd20e7c-b2ff-4784-86ea-e74db981caca"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 08:16:19 crc kubenswrapper[4826]: I0131 08:16:19.254935 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fd20e7c-b2ff-4784-86ea-e74db981caca-inventory" (OuterVolumeSpecName: "inventory") pod "3fd20e7c-b2ff-4784-86ea-e74db981caca" (UID: "3fd20e7c-b2ff-4784-86ea-e74db981caca"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:16:19 crc kubenswrapper[4826]: I0131 08:16:19.323644 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmz4f\" (UniqueName: \"kubernetes.io/projected/3fd20e7c-b2ff-4784-86ea-e74db981caca-kube-api-access-mmz4f\") on node \"crc\" DevicePath \"\"" Jan 31 08:16:19 crc kubenswrapper[4826]: I0131 08:16:19.323684 4826 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fd20e7c-b2ff-4784-86ea-e74db981caca-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 08:16:19 crc kubenswrapper[4826]: I0131 08:16:19.323694 4826 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3fd20e7c-b2ff-4784-86ea-e74db981caca-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 08:16:19 crc kubenswrapper[4826]: I0131 08:16:19.323702 4826 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/3fd20e7c-b2ff-4784-86ea-e74db981caca-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 31 08:16:19 crc kubenswrapper[4826]: I0131 08:16:19.323714 4826 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3fd20e7c-b2ff-4784-86ea-e74db981caca-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 08:16:19 crc kubenswrapper[4826]: I0131 08:16:19.323723 4826 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3fd20e7c-b2ff-4784-86ea-e74db981caca-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 08:16:19 crc kubenswrapper[4826]: I0131 08:16:19.710874 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jfl9h" event={"ID":"3fd20e7c-b2ff-4784-86ea-e74db981caca","Type":"ContainerDied","Data":"5895d8c0fe7c1d92443ec8340f55fff0fb49323be17a39420ec1e86c50bc9337"} Jan 31 08:16:19 crc kubenswrapper[4826]: I0131 08:16:19.710915 4826 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="5895d8c0fe7c1d92443ec8340f55fff0fb49323be17a39420ec1e86c50bc9337" Jan 31 08:16:19 crc kubenswrapper[4826]: I0131 08:16:19.711010 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jfl9h" Jan 31 08:16:19 crc kubenswrapper[4826]: I0131 08:16:19.820236 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7hq"] Jan 31 08:16:19 crc kubenswrapper[4826]: E0131 08:16:19.820958 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fd20e7c-b2ff-4784-86ea-e74db981caca" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 31 08:16:19 crc kubenswrapper[4826]: I0131 08:16:19.821003 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fd20e7c-b2ff-4784-86ea-e74db981caca" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 31 08:16:19 crc kubenswrapper[4826]: I0131 08:16:19.821242 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fd20e7c-b2ff-4784-86ea-e74db981caca" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 31 08:16:19 crc kubenswrapper[4826]: I0131 08:16:19.821986 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7hq" Jan 31 08:16:19 crc kubenswrapper[4826]: I0131 08:16:19.825389 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 08:16:19 crc kubenswrapper[4826]: I0131 08:16:19.825742 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 31 08:16:19 crc kubenswrapper[4826]: I0131 08:16:19.825898 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 08:16:19 crc kubenswrapper[4826]: I0131 08:16:19.826121 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hbmks" Jan 31 08:16:19 crc kubenswrapper[4826]: I0131 08:16:19.826164 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 08:16:19 crc kubenswrapper[4826]: I0131 08:16:19.826277 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 31 08:16:19 crc kubenswrapper[4826]: I0131 08:16:19.826316 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 08:16:19 crc kubenswrapper[4826]: I0131 08:16:19.837770 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7hq"] Jan 31 08:16:19 crc kubenswrapper[4826]: I0131 08:16:19.935741 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfsd9\" (UniqueName: \"kubernetes.io/projected/f8db2e50-d73d-4bcd-a2c4-34cfad360222-kube-api-access-vfsd9\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7hq\" (UID: \"f8db2e50-d73d-4bcd-a2c4-34cfad360222\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7hq" Jan 31 08:16:19 crc kubenswrapper[4826]: I0131 08:16:19.935806 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8db2e50-d73d-4bcd-a2c4-34cfad360222-inventory\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7hq\" (UID: \"f8db2e50-d73d-4bcd-a2c4-34cfad360222\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7hq" Jan 31 08:16:19 crc kubenswrapper[4826]: I0131 08:16:19.935854 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f8db2e50-d73d-4bcd-a2c4-34cfad360222-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7hq\" (UID: \"f8db2e50-d73d-4bcd-a2c4-34cfad360222\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7hq" Jan 31 08:16:19 crc kubenswrapper[4826]: I0131 08:16:19.936193 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f8db2e50-d73d-4bcd-a2c4-34cfad360222-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7hq\" (UID: \"f8db2e50-d73d-4bcd-a2c4-34cfad360222\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7hq" Jan 31 08:16:19 crc kubenswrapper[4826]: I0131 08:16:19.936274 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f8db2e50-d73d-4bcd-a2c4-34cfad360222-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7hq\" (UID: \"f8db2e50-d73d-4bcd-a2c4-34cfad360222\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7hq" Jan 31 08:16:19 crc kubenswrapper[4826]: I0131 08:16:19.936312 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f8db2e50-d73d-4bcd-a2c4-34cfad360222-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7hq\" (UID: \"f8db2e50-d73d-4bcd-a2c4-34cfad360222\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7hq" Jan 31 08:16:19 crc kubenswrapper[4826]: I0131 08:16:19.936404 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8db2e50-d73d-4bcd-a2c4-34cfad360222-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7hq\" (UID: \"f8db2e50-d73d-4bcd-a2c4-34cfad360222\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7hq" Jan 31 08:16:20 crc kubenswrapper[4826]: I0131 08:16:20.037753 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f8db2e50-d73d-4bcd-a2c4-34cfad360222-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7hq\" (UID: \"f8db2e50-d73d-4bcd-a2c4-34cfad360222\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7hq" Jan 31 08:16:20 crc kubenswrapper[4826]: I0131 08:16:20.038600 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8db2e50-d73d-4bcd-a2c4-34cfad360222-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7hq\" (UID: \"f8db2e50-d73d-4bcd-a2c4-34cfad360222\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7hq" Jan 31 08:16:20 crc kubenswrapper[4826]: I0131 08:16:20.038821 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfsd9\" (UniqueName: \"kubernetes.io/projected/f8db2e50-d73d-4bcd-a2c4-34cfad360222-kube-api-access-vfsd9\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7hq\" (UID: \"f8db2e50-d73d-4bcd-a2c4-34cfad360222\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7hq" Jan 31 08:16:20 crc kubenswrapper[4826]: I0131 08:16:20.039044 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8db2e50-d73d-4bcd-a2c4-34cfad360222-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7hq\" (UID: \"f8db2e50-d73d-4bcd-a2c4-34cfad360222\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7hq" Jan 31 08:16:20 crc kubenswrapper[4826]: I0131 08:16:20.039790 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f8db2e50-d73d-4bcd-a2c4-34cfad360222-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7hq\" (UID: \"f8db2e50-d73d-4bcd-a2c4-34cfad360222\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7hq" Jan 31 08:16:20 crc kubenswrapper[4826]: I0131 08:16:20.040047 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f8db2e50-d73d-4bcd-a2c4-34cfad360222-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7hq\" (UID: \"f8db2e50-d73d-4bcd-a2c4-34cfad360222\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7hq" Jan 31 08:16:20 crc kubenswrapper[4826]: I0131 08:16:20.040175 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f8db2e50-d73d-4bcd-a2c4-34cfad360222-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7hq\" (UID: \"f8db2e50-d73d-4bcd-a2c4-34cfad360222\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7hq" Jan 31 08:16:20 crc kubenswrapper[4826]: I0131 08:16:20.043603 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f8db2e50-d73d-4bcd-a2c4-34cfad360222-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7hq\" (UID: \"f8db2e50-d73d-4bcd-a2c4-34cfad360222\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7hq" Jan 31 08:16:20 crc kubenswrapper[4826]: I0131 08:16:20.043693 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8db2e50-d73d-4bcd-a2c4-34cfad360222-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7hq\" (UID: \"f8db2e50-d73d-4bcd-a2c4-34cfad360222\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7hq" Jan 31 08:16:20 crc kubenswrapper[4826]: I0131 08:16:20.044323 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f8db2e50-d73d-4bcd-a2c4-34cfad360222-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7hq\" (UID: \"f8db2e50-d73d-4bcd-a2c4-34cfad360222\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7hq" Jan 31 08:16:20 crc kubenswrapper[4826]: I0131 08:16:20.044353 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f8db2e50-d73d-4bcd-a2c4-34cfad360222-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7hq\" (UID: \"f8db2e50-d73d-4bcd-a2c4-34cfad360222\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7hq" Jan 31 08:16:20 crc kubenswrapper[4826]: I0131 08:16:20.044715 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f8db2e50-d73d-4bcd-a2c4-34cfad360222-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7hq\" (UID: \"f8db2e50-d73d-4bcd-a2c4-34cfad360222\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7hq" Jan 31 08:16:20 crc kubenswrapper[4826]: I0131 08:16:20.045517 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f8db2e50-d73d-4bcd-a2c4-34cfad360222-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7hq\" (UID: \"f8db2e50-d73d-4bcd-a2c4-34cfad360222\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7hq" Jan 31 08:16:20 crc kubenswrapper[4826]: I0131 08:16:20.058624 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfsd9\" (UniqueName: \"kubernetes.io/projected/f8db2e50-d73d-4bcd-a2c4-34cfad360222-kube-api-access-vfsd9\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7hq\" (UID: \"f8db2e50-d73d-4bcd-a2c4-34cfad360222\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7hq" Jan 31 08:16:20 crc kubenswrapper[4826]: I0131 08:16:20.163576 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7hq" Jan 31 08:16:20 crc kubenswrapper[4826]: I0131 08:16:20.668881 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7hq"] Jan 31 08:16:20 crc kubenswrapper[4826]: I0131 08:16:20.735643 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7hq" event={"ID":"f8db2e50-d73d-4bcd-a2c4-34cfad360222","Type":"ContainerStarted","Data":"8a531a5bb20eb5cbe899385365d6a28b20313e7a0130b28522196a9b4654fe59"} Jan 31 08:16:21 crc kubenswrapper[4826]: I0131 08:16:21.745597 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7hq" event={"ID":"f8db2e50-d73d-4bcd-a2c4-34cfad360222","Type":"ContainerStarted","Data":"737536f8ab2838ebf38932462adbb57e957c523853664b27c2a21bcc510b73c7"} Jan 31 08:16:21 crc kubenswrapper[4826]: I0131 08:16:21.772770 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7hq" podStartSLOduration=2.28288424 podStartE2EDuration="2.772749797s" podCreationTimestamp="2026-01-31 08:16:19 +0000 UTC" firstStartedPulling="2026-01-31 08:16:20.671881766 +0000 UTC m=+2412.525768125" lastFinishedPulling="2026-01-31 08:16:21.161747293 +0000 UTC m=+2413.015633682" observedRunningTime="2026-01-31 08:16:21.76442581 +0000 UTC m=+2413.618312169" watchObservedRunningTime="2026-01-31 08:16:21.772749797 +0000 UTC m=+2413.626636166" Jan 31 08:16:22 crc kubenswrapper[4826]: I0131 08:16:22.812709 4826 scope.go:117] "RemoveContainer" containerID="4429222bf3d4f569e9c41884c938deefe5eb279c0a3297984f8955236857e1a8" Jan 31 08:16:22 crc kubenswrapper[4826]: E0131 08:16:22.813490 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:16:33 crc kubenswrapper[4826]: I0131 08:16:33.808819 4826 scope.go:117] "RemoveContainer" containerID="4429222bf3d4f569e9c41884c938deefe5eb279c0a3297984f8955236857e1a8" Jan 31 08:16:33 crc kubenswrapper[4826]: E0131 08:16:33.809637 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:16:45 crc kubenswrapper[4826]: I0131 08:16:45.808799 4826 scope.go:117] "RemoveContainer" containerID="4429222bf3d4f569e9c41884c938deefe5eb279c0a3297984f8955236857e1a8" Jan 31 08:16:45 crc kubenswrapper[4826]: E0131 08:16:45.809683 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:16:56 crc kubenswrapper[4826]: I0131 08:16:56.808717 4826 scope.go:117] "RemoveContainer" containerID="4429222bf3d4f569e9c41884c938deefe5eb279c0a3297984f8955236857e1a8" Jan 31 08:16:56 crc kubenswrapper[4826]: E0131 08:16:56.809515 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:17:09 crc kubenswrapper[4826]: I0131 08:17:09.809480 4826 scope.go:117] "RemoveContainer" containerID="4429222bf3d4f569e9c41884c938deefe5eb279c0a3297984f8955236857e1a8" Jan 31 08:17:09 crc kubenswrapper[4826]: E0131 08:17:09.810282 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:17:19 crc kubenswrapper[4826]: I0131 08:17:19.242652 4826 generic.go:334] "Generic (PLEG): container finished" podID="f8db2e50-d73d-4bcd-a2c4-34cfad360222" containerID="737536f8ab2838ebf38932462adbb57e957c523853664b27c2a21bcc510b73c7" exitCode=0 Jan 31 08:17:19 crc kubenswrapper[4826]: I0131 08:17:19.242752 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7hq" event={"ID":"f8db2e50-d73d-4bcd-a2c4-34cfad360222","Type":"ContainerDied","Data":"737536f8ab2838ebf38932462adbb57e957c523853664b27c2a21bcc510b73c7"} Jan 31 08:17:20 crc kubenswrapper[4826]: I0131 08:17:20.652057 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7hq" Jan 31 08:17:20 crc kubenswrapper[4826]: I0131 08:17:20.840046 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f8db2e50-d73d-4bcd-a2c4-34cfad360222-ceph\") pod \"f8db2e50-d73d-4bcd-a2c4-34cfad360222\" (UID: \"f8db2e50-d73d-4bcd-a2c4-34cfad360222\") " Jan 31 08:17:20 crc kubenswrapper[4826]: I0131 08:17:20.840099 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f8db2e50-d73d-4bcd-a2c4-34cfad360222-neutron-ovn-metadata-agent-neutron-config-0\") pod \"f8db2e50-d73d-4bcd-a2c4-34cfad360222\" (UID: \"f8db2e50-d73d-4bcd-a2c4-34cfad360222\") " Jan 31 08:17:20 crc kubenswrapper[4826]: I0131 08:17:20.840131 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfsd9\" (UniqueName: \"kubernetes.io/projected/f8db2e50-d73d-4bcd-a2c4-34cfad360222-kube-api-access-vfsd9\") pod \"f8db2e50-d73d-4bcd-a2c4-34cfad360222\" (UID: \"f8db2e50-d73d-4bcd-a2c4-34cfad360222\") " Jan 31 08:17:20 crc kubenswrapper[4826]: I0131 08:17:20.840238 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8db2e50-d73d-4bcd-a2c4-34cfad360222-inventory\") pod \"f8db2e50-d73d-4bcd-a2c4-34cfad360222\" (UID: \"f8db2e50-d73d-4bcd-a2c4-34cfad360222\") " Jan 31 08:17:20 crc kubenswrapper[4826]: I0131 08:17:20.840305 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f8db2e50-d73d-4bcd-a2c4-34cfad360222-nova-metadata-neutron-config-0\") pod \"f8db2e50-d73d-4bcd-a2c4-34cfad360222\" (UID: \"f8db2e50-d73d-4bcd-a2c4-34cfad360222\") " Jan 31 08:17:20 crc kubenswrapper[4826]: I0131 08:17:20.840468 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f8db2e50-d73d-4bcd-a2c4-34cfad360222-ssh-key-openstack-edpm-ipam\") pod \"f8db2e50-d73d-4bcd-a2c4-34cfad360222\" (UID: \"f8db2e50-d73d-4bcd-a2c4-34cfad360222\") " Jan 31 08:17:20 crc kubenswrapper[4826]: I0131 08:17:20.840513 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8db2e50-d73d-4bcd-a2c4-34cfad360222-neutron-metadata-combined-ca-bundle\") pod \"f8db2e50-d73d-4bcd-a2c4-34cfad360222\" (UID: \"f8db2e50-d73d-4bcd-a2c4-34cfad360222\") " Jan 31 08:17:20 crc kubenswrapper[4826]: I0131 08:17:20.845834 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8db2e50-d73d-4bcd-a2c4-34cfad360222-ceph" (OuterVolumeSpecName: "ceph") pod "f8db2e50-d73d-4bcd-a2c4-34cfad360222" (UID: "f8db2e50-d73d-4bcd-a2c4-34cfad360222"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:17:20 crc kubenswrapper[4826]: I0131 08:17:20.847919 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8db2e50-d73d-4bcd-a2c4-34cfad360222-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "f8db2e50-d73d-4bcd-a2c4-34cfad360222" (UID: "f8db2e50-d73d-4bcd-a2c4-34cfad360222"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:17:20 crc kubenswrapper[4826]: I0131 08:17:20.848008 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8db2e50-d73d-4bcd-a2c4-34cfad360222-kube-api-access-vfsd9" (OuterVolumeSpecName: "kube-api-access-vfsd9") pod "f8db2e50-d73d-4bcd-a2c4-34cfad360222" (UID: "f8db2e50-d73d-4bcd-a2c4-34cfad360222"). InnerVolumeSpecName "kube-api-access-vfsd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:17:20 crc kubenswrapper[4826]: I0131 08:17:20.871738 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8db2e50-d73d-4bcd-a2c4-34cfad360222-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f8db2e50-d73d-4bcd-a2c4-34cfad360222" (UID: "f8db2e50-d73d-4bcd-a2c4-34cfad360222"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:17:20 crc kubenswrapper[4826]: I0131 08:17:20.876747 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8db2e50-d73d-4bcd-a2c4-34cfad360222-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "f8db2e50-d73d-4bcd-a2c4-34cfad360222" (UID: "f8db2e50-d73d-4bcd-a2c4-34cfad360222"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:17:20 crc kubenswrapper[4826]: I0131 08:17:20.887564 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8db2e50-d73d-4bcd-a2c4-34cfad360222-inventory" (OuterVolumeSpecName: "inventory") pod "f8db2e50-d73d-4bcd-a2c4-34cfad360222" (UID: "f8db2e50-d73d-4bcd-a2c4-34cfad360222"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:17:20 crc kubenswrapper[4826]: I0131 08:17:20.902136 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8db2e50-d73d-4bcd-a2c4-34cfad360222-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "f8db2e50-d73d-4bcd-a2c4-34cfad360222" (UID: "f8db2e50-d73d-4bcd-a2c4-34cfad360222"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:17:20 crc kubenswrapper[4826]: I0131 08:17:20.943047 4826 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f8db2e50-d73d-4bcd-a2c4-34cfad360222-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 08:17:20 crc kubenswrapper[4826]: I0131 08:17:20.943078 4826 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8db2e50-d73d-4bcd-a2c4-34cfad360222-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 08:17:20 crc kubenswrapper[4826]: I0131 08:17:20.943092 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfsd9\" (UniqueName: \"kubernetes.io/projected/f8db2e50-d73d-4bcd-a2c4-34cfad360222-kube-api-access-vfsd9\") on node \"crc\" DevicePath \"\"" Jan 31 08:17:20 crc kubenswrapper[4826]: I0131 08:17:20.943101 4826 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f8db2e50-d73d-4bcd-a2c4-34cfad360222-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 08:17:20 crc kubenswrapper[4826]: I0131 08:17:20.943111 4826 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f8db2e50-d73d-4bcd-a2c4-34cfad360222-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 31 08:17:20 crc kubenswrapper[4826]: I0131 08:17:20.943120 4826 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8db2e50-d73d-4bcd-a2c4-34cfad360222-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 08:17:20 crc kubenswrapper[4826]: I0131 08:17:20.943129 4826 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f8db2e50-d73d-4bcd-a2c4-34cfad360222-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 31 08:17:21 crc kubenswrapper[4826]: I0131 08:17:21.261122 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7hq" event={"ID":"f8db2e50-d73d-4bcd-a2c4-34cfad360222","Type":"ContainerDied","Data":"8a531a5bb20eb5cbe899385365d6a28b20313e7a0130b28522196a9b4654fe59"} Jan 31 08:17:21 crc kubenswrapper[4826]: I0131 08:17:21.261168 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a531a5bb20eb5cbe899385365d6a28b20313e7a0130b28522196a9b4654fe59" Jan 31 08:17:21 crc kubenswrapper[4826]: I0131 08:17:21.261310 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7hq" Jan 31 08:17:21 crc kubenswrapper[4826]: I0131 08:17:21.377894 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g74tq"] Jan 31 08:17:21 crc kubenswrapper[4826]: E0131 08:17:21.378374 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8db2e50-d73d-4bcd-a2c4-34cfad360222" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 31 08:17:21 crc kubenswrapper[4826]: I0131 08:17:21.378394 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8db2e50-d73d-4bcd-a2c4-34cfad360222" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 31 08:17:21 crc kubenswrapper[4826]: I0131 08:17:21.378570 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8db2e50-d73d-4bcd-a2c4-34cfad360222" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 31 08:17:21 crc kubenswrapper[4826]: I0131 08:17:21.379245 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g74tq" Jan 31 08:17:21 crc kubenswrapper[4826]: I0131 08:17:21.381516 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 08:17:21 crc kubenswrapper[4826]: I0131 08:17:21.381667 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 08:17:21 crc kubenswrapper[4826]: I0131 08:17:21.381842 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hbmks" Jan 31 08:17:21 crc kubenswrapper[4826]: I0131 08:17:21.381985 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 08:17:21 crc kubenswrapper[4826]: I0131 08:17:21.382165 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 08:17:21 crc kubenswrapper[4826]: I0131 08:17:21.382363 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 31 08:17:21 crc kubenswrapper[4826]: I0131 08:17:21.386608 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g74tq"] Jan 31 08:17:21 crc kubenswrapper[4826]: I0131 08:17:21.553849 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/cae3795a-7ee0-4ca7-aada-7f03190fb437-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g74tq\" (UID: \"cae3795a-7ee0-4ca7-aada-7f03190fb437\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g74tq" Jan 31 08:17:21 crc kubenswrapper[4826]: I0131 08:17:21.554192 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cae3795a-7ee0-4ca7-aada-7f03190fb437-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g74tq\" (UID: \"cae3795a-7ee0-4ca7-aada-7f03190fb437\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g74tq" Jan 31 08:17:21 crc kubenswrapper[4826]: I0131 08:17:21.554358 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/cae3795a-7ee0-4ca7-aada-7f03190fb437-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g74tq\" (UID: \"cae3795a-7ee0-4ca7-aada-7f03190fb437\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g74tq" Jan 31 08:17:21 crc kubenswrapper[4826]: I0131 08:17:21.554535 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4tbg\" (UniqueName: \"kubernetes.io/projected/cae3795a-7ee0-4ca7-aada-7f03190fb437-kube-api-access-f4tbg\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g74tq\" (UID: \"cae3795a-7ee0-4ca7-aada-7f03190fb437\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g74tq" Jan 31 08:17:21 crc kubenswrapper[4826]: I0131 08:17:21.554593 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cae3795a-7ee0-4ca7-aada-7f03190fb437-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g74tq\" (UID: \"cae3795a-7ee0-4ca7-aada-7f03190fb437\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g74tq" Jan 31 08:17:21 crc kubenswrapper[4826]: I0131 08:17:21.554803 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cae3795a-7ee0-4ca7-aada-7f03190fb437-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g74tq\" (UID: \"cae3795a-7ee0-4ca7-aada-7f03190fb437\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g74tq" Jan 31 08:17:21 crc kubenswrapper[4826]: I0131 08:17:21.656720 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cae3795a-7ee0-4ca7-aada-7f03190fb437-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g74tq\" (UID: \"cae3795a-7ee0-4ca7-aada-7f03190fb437\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g74tq" Jan 31 08:17:21 crc kubenswrapper[4826]: I0131 08:17:21.657392 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4tbg\" (UniqueName: \"kubernetes.io/projected/cae3795a-7ee0-4ca7-aada-7f03190fb437-kube-api-access-f4tbg\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g74tq\" (UID: \"cae3795a-7ee0-4ca7-aada-7f03190fb437\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g74tq" Jan 31 08:17:21 crc kubenswrapper[4826]: I0131 08:17:21.657442 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cae3795a-7ee0-4ca7-aada-7f03190fb437-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g74tq\" (UID: \"cae3795a-7ee0-4ca7-aada-7f03190fb437\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g74tq" Jan 31 08:17:21 crc kubenswrapper[4826]: I0131 08:17:21.657526 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cae3795a-7ee0-4ca7-aada-7f03190fb437-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g74tq\" (UID: \"cae3795a-7ee0-4ca7-aada-7f03190fb437\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g74tq" Jan 31 08:17:21 crc kubenswrapper[4826]: I0131 08:17:21.657571 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/cae3795a-7ee0-4ca7-aada-7f03190fb437-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g74tq\" (UID: \"cae3795a-7ee0-4ca7-aada-7f03190fb437\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g74tq" Jan 31 08:17:21 crc kubenswrapper[4826]: I0131 08:17:21.657615 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cae3795a-7ee0-4ca7-aada-7f03190fb437-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g74tq\" (UID: \"cae3795a-7ee0-4ca7-aada-7f03190fb437\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g74tq" Jan 31 08:17:21 crc kubenswrapper[4826]: I0131 08:17:21.662513 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cae3795a-7ee0-4ca7-aada-7f03190fb437-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g74tq\" (UID: \"cae3795a-7ee0-4ca7-aada-7f03190fb437\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g74tq" Jan 31 08:17:21 crc kubenswrapper[4826]: I0131 08:17:21.662529 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cae3795a-7ee0-4ca7-aada-7f03190fb437-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g74tq\" (UID: \"cae3795a-7ee0-4ca7-aada-7f03190fb437\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g74tq" Jan 31 08:17:21 crc kubenswrapper[4826]: I0131 08:17:21.662826 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cae3795a-7ee0-4ca7-aada-7f03190fb437-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g74tq\" (UID: \"cae3795a-7ee0-4ca7-aada-7f03190fb437\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g74tq" Jan 31 08:17:21 crc kubenswrapper[4826]: I0131 08:17:21.663480 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/cae3795a-7ee0-4ca7-aada-7f03190fb437-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g74tq\" (UID: \"cae3795a-7ee0-4ca7-aada-7f03190fb437\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g74tq" Jan 31 08:17:21 crc kubenswrapper[4826]: I0131 08:17:21.667627 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cae3795a-7ee0-4ca7-aada-7f03190fb437-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g74tq\" (UID: \"cae3795a-7ee0-4ca7-aada-7f03190fb437\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g74tq" Jan 31 08:17:21 crc kubenswrapper[4826]: I0131 08:17:21.676798 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4tbg\" (UniqueName: \"kubernetes.io/projected/cae3795a-7ee0-4ca7-aada-7f03190fb437-kube-api-access-f4tbg\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g74tq\" (UID: \"cae3795a-7ee0-4ca7-aada-7f03190fb437\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g74tq" Jan 31 08:17:21 crc kubenswrapper[4826]: I0131 08:17:21.707203 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g74tq" Jan 31 08:17:22 crc kubenswrapper[4826]: I0131 08:17:22.242747 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g74tq"] Jan 31 08:17:22 crc kubenswrapper[4826]: W0131 08:17:22.243365 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcae3795a_7ee0_4ca7_aada_7f03190fb437.slice/crio-2bee9c57dc851ea6eafcb462c1c5f6207b58e6653a319393e96a48f246be8844 WatchSource:0}: Error finding container 2bee9c57dc851ea6eafcb462c1c5f6207b58e6653a319393e96a48f246be8844: Status 404 returned error can't find the container with id 2bee9c57dc851ea6eafcb462c1c5f6207b58e6653a319393e96a48f246be8844 Jan 31 08:17:22 crc kubenswrapper[4826]: I0131 08:17:22.270269 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g74tq" event={"ID":"cae3795a-7ee0-4ca7-aada-7f03190fb437","Type":"ContainerStarted","Data":"2bee9c57dc851ea6eafcb462c1c5f6207b58e6653a319393e96a48f246be8844"} Jan 31 08:17:23 crc kubenswrapper[4826]: I0131 08:17:23.809280 4826 scope.go:117] "RemoveContainer" containerID="4429222bf3d4f569e9c41884c938deefe5eb279c0a3297984f8955236857e1a8" Jan 31 08:17:23 crc kubenswrapper[4826]: E0131 08:17:23.809847 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:17:24 crc kubenswrapper[4826]: I0131 08:17:24.286462 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g74tq" event={"ID":"cae3795a-7ee0-4ca7-aada-7f03190fb437","Type":"ContainerStarted","Data":"bf55c3e3e217461020863ff9b42def59fb1841fd7780d8fa7cb01c288160c55a"} Jan 31 08:17:24 crc kubenswrapper[4826]: I0131 08:17:24.307002 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g74tq" podStartSLOduration=1.933831898 podStartE2EDuration="3.306983595s" podCreationTimestamp="2026-01-31 08:17:21 +0000 UTC" firstStartedPulling="2026-01-31 08:17:22.245603287 +0000 UTC m=+2474.099489646" lastFinishedPulling="2026-01-31 08:17:23.618754974 +0000 UTC m=+2475.472641343" observedRunningTime="2026-01-31 08:17:24.301074716 +0000 UTC m=+2476.154961075" watchObservedRunningTime="2026-01-31 08:17:24.306983595 +0000 UTC m=+2476.160869954" Jan 31 08:17:35 crc kubenswrapper[4826]: I0131 08:17:35.808886 4826 scope.go:117] "RemoveContainer" containerID="4429222bf3d4f569e9c41884c938deefe5eb279c0a3297984f8955236857e1a8" Jan 31 08:17:35 crc kubenswrapper[4826]: E0131 08:17:35.809759 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:17:46 crc kubenswrapper[4826]: I0131 08:17:46.808758 4826 
scope.go:117] "RemoveContainer" containerID="4429222bf3d4f569e9c41884c938deefe5eb279c0a3297984f8955236857e1a8" Jan 31 08:17:46 crc kubenswrapper[4826]: E0131 08:17:46.809679 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:18:00 crc kubenswrapper[4826]: I0131 08:18:00.277379 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tsk48"] Jan 31 08:18:00 crc kubenswrapper[4826]: I0131 08:18:00.279771 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tsk48" Jan 31 08:18:00 crc kubenswrapper[4826]: I0131 08:18:00.293319 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tsk48"] Jan 31 08:18:00 crc kubenswrapper[4826]: I0131 08:18:00.311242 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rfl5\" (UniqueName: \"kubernetes.io/projected/45af7810-66bc-4df7-a36b-d62458601fb8-kube-api-access-7rfl5\") pod \"certified-operators-tsk48\" (UID: \"45af7810-66bc-4df7-a36b-d62458601fb8\") " pod="openshift-marketplace/certified-operators-tsk48" Jan 31 08:18:00 crc kubenswrapper[4826]: I0131 08:18:00.311371 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45af7810-66bc-4df7-a36b-d62458601fb8-catalog-content\") pod \"certified-operators-tsk48\" (UID: \"45af7810-66bc-4df7-a36b-d62458601fb8\") " pod="openshift-marketplace/certified-operators-tsk48" Jan 31 08:18:00 crc kubenswrapper[4826]: I0131 08:18:00.311436 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45af7810-66bc-4df7-a36b-d62458601fb8-utilities\") pod \"certified-operators-tsk48\" (UID: \"45af7810-66bc-4df7-a36b-d62458601fb8\") " pod="openshift-marketplace/certified-operators-tsk48" Jan 31 08:18:00 crc kubenswrapper[4826]: I0131 08:18:00.412805 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45af7810-66bc-4df7-a36b-d62458601fb8-catalog-content\") pod \"certified-operators-tsk48\" (UID: \"45af7810-66bc-4df7-a36b-d62458601fb8\") " pod="openshift-marketplace/certified-operators-tsk48" Jan 31 08:18:00 crc kubenswrapper[4826]: I0131 08:18:00.412898 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45af7810-66bc-4df7-a36b-d62458601fb8-utilities\") pod \"certified-operators-tsk48\" (UID: \"45af7810-66bc-4df7-a36b-d62458601fb8\") " pod="openshift-marketplace/certified-operators-tsk48" Jan 31 08:18:00 crc kubenswrapper[4826]: I0131 08:18:00.413010 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rfl5\" (UniqueName: \"kubernetes.io/projected/45af7810-66bc-4df7-a36b-d62458601fb8-kube-api-access-7rfl5\") pod \"certified-operators-tsk48\" (UID: \"45af7810-66bc-4df7-a36b-d62458601fb8\") " 
pod="openshift-marketplace/certified-operators-tsk48" Jan 31 08:18:00 crc kubenswrapper[4826]: I0131 08:18:00.413496 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45af7810-66bc-4df7-a36b-d62458601fb8-catalog-content\") pod \"certified-operators-tsk48\" (UID: \"45af7810-66bc-4df7-a36b-d62458601fb8\") " pod="openshift-marketplace/certified-operators-tsk48" Jan 31 08:18:00 crc kubenswrapper[4826]: I0131 08:18:00.413772 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45af7810-66bc-4df7-a36b-d62458601fb8-utilities\") pod \"certified-operators-tsk48\" (UID: \"45af7810-66bc-4df7-a36b-d62458601fb8\") " pod="openshift-marketplace/certified-operators-tsk48" Jan 31 08:18:00 crc kubenswrapper[4826]: I0131 08:18:00.443856 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rfl5\" (UniqueName: \"kubernetes.io/projected/45af7810-66bc-4df7-a36b-d62458601fb8-kube-api-access-7rfl5\") pod \"certified-operators-tsk48\" (UID: \"45af7810-66bc-4df7-a36b-d62458601fb8\") " pod="openshift-marketplace/certified-operators-tsk48" Jan 31 08:18:00 crc kubenswrapper[4826]: I0131 08:18:00.600605 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tsk48" Jan 31 08:18:01 crc kubenswrapper[4826]: I0131 08:18:01.093679 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tsk48"] Jan 31 08:18:01 crc kubenswrapper[4826]: W0131 08:18:01.096280 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45af7810_66bc_4df7_a36b_d62458601fb8.slice/crio-dd7ac306801c4512385146754b07fb422c0cb233abb08a42b1897436b35e4900 WatchSource:0}: Error finding container dd7ac306801c4512385146754b07fb422c0cb233abb08a42b1897436b35e4900: Status 404 returned error can't find the container with id dd7ac306801c4512385146754b07fb422c0cb233abb08a42b1897436b35e4900 Jan 31 08:18:01 crc kubenswrapper[4826]: I0131 08:18:01.587368 4826 generic.go:334] "Generic (PLEG): container finished" podID="45af7810-66bc-4df7-a36b-d62458601fb8" containerID="2bc82534e76650d0111df0f69dd089c9edd14b81093d9b8f9e4be133b05638f6" exitCode=0 Jan 31 08:18:01 crc kubenswrapper[4826]: I0131 08:18:01.587466 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tsk48" event={"ID":"45af7810-66bc-4df7-a36b-d62458601fb8","Type":"ContainerDied","Data":"2bc82534e76650d0111df0f69dd089c9edd14b81093d9b8f9e4be133b05638f6"} Jan 31 08:18:01 crc kubenswrapper[4826]: I0131 08:18:01.587680 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tsk48" event={"ID":"45af7810-66bc-4df7-a36b-d62458601fb8","Type":"ContainerStarted","Data":"dd7ac306801c4512385146754b07fb422c0cb233abb08a42b1897436b35e4900"} Jan 31 08:18:01 crc kubenswrapper[4826]: I0131 08:18:01.809053 4826 scope.go:117] "RemoveContainer" containerID="4429222bf3d4f569e9c41884c938deefe5eb279c0a3297984f8955236857e1a8" Jan 31 08:18:01 crc kubenswrapper[4826]: E0131 08:18:01.809460 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:18:02 crc kubenswrapper[4826]: I0131 08:18:02.597461 4826 generic.go:334] "Generic (PLEG): container finished" podID="45af7810-66bc-4df7-a36b-d62458601fb8" containerID="ec85724a12dc2d42dfccfa0dae62aa25af9fc0897b43cd8023530ffb62a3772d" exitCode=0 Jan 31 08:18:02 crc kubenswrapper[4826]: I0131 08:18:02.597578 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tsk48" event={"ID":"45af7810-66bc-4df7-a36b-d62458601fb8","Type":"ContainerDied","Data":"ec85724a12dc2d42dfccfa0dae62aa25af9fc0897b43cd8023530ffb62a3772d"} Jan 31 08:18:03 crc kubenswrapper[4826]: I0131 08:18:03.607997 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tsk48" event={"ID":"45af7810-66bc-4df7-a36b-d62458601fb8","Type":"ContainerStarted","Data":"d06b8bf72974df94686b6dc27f260443fd6e507faf5fbb147b1c697c3c440498"} Jan 31 08:18:10 crc kubenswrapper[4826]: I0131 08:18:10.600925 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tsk48" Jan 31 08:18:10 crc kubenswrapper[4826]: I0131 08:18:10.601569 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tsk48" Jan 31 08:18:10 crc kubenswrapper[4826]: I0131 08:18:10.644784 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tsk48" Jan 31 08:18:10 crc kubenswrapper[4826]: I0131 08:18:10.680618 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tsk48" podStartSLOduration=9.238846058 podStartE2EDuration="10.680587146s" podCreationTimestamp="2026-01-31 08:18:00 +0000 UTC" firstStartedPulling="2026-01-31 08:18:01.589408426 +0000 UTC m=+2513.443294785" lastFinishedPulling="2026-01-31 08:18:03.031149514 +0000 UTC m=+2514.885035873" observedRunningTime="2026-01-31 08:18:03.63064321 +0000 UTC m=+2515.484529589" watchObservedRunningTime="2026-01-31 08:18:10.680587146 +0000 UTC m=+2522.534473505" Jan 31 08:18:10 crc kubenswrapper[4826]: I0131 08:18:10.717912 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tsk48" Jan 31 08:18:10 crc kubenswrapper[4826]: I0131 08:18:10.882799 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tsk48"] Jan 31 08:18:12 crc kubenswrapper[4826]: I0131 08:18:12.682286 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tsk48" podUID="45af7810-66bc-4df7-a36b-d62458601fb8" containerName="registry-server" containerID="cri-o://d06b8bf72974df94686b6dc27f260443fd6e507faf5fbb147b1c697c3c440498" gracePeriod=2 Jan 31 08:18:14 crc kubenswrapper[4826]: I0131 08:18:14.351164 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tsk48" Jan 31 08:18:14 crc kubenswrapper[4826]: I0131 08:18:14.401218 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45af7810-66bc-4df7-a36b-d62458601fb8-catalog-content\") pod \"45af7810-66bc-4df7-a36b-d62458601fb8\" (UID: \"45af7810-66bc-4df7-a36b-d62458601fb8\") " Jan 31 08:18:14 crc kubenswrapper[4826]: I0131 08:18:14.401350 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45af7810-66bc-4df7-a36b-d62458601fb8-utilities\") pod \"45af7810-66bc-4df7-a36b-d62458601fb8\" (UID: \"45af7810-66bc-4df7-a36b-d62458601fb8\") " Jan 31 08:18:14 crc kubenswrapper[4826]: I0131 08:18:14.401514 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rfl5\" (UniqueName: \"kubernetes.io/projected/45af7810-66bc-4df7-a36b-d62458601fb8-kube-api-access-7rfl5\") pod \"45af7810-66bc-4df7-a36b-d62458601fb8\" (UID: \"45af7810-66bc-4df7-a36b-d62458601fb8\") " Jan 31 08:18:14 crc kubenswrapper[4826]: I0131 08:18:14.402832 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45af7810-66bc-4df7-a36b-d62458601fb8-utilities" (OuterVolumeSpecName: "utilities") pod "45af7810-66bc-4df7-a36b-d62458601fb8" (UID: "45af7810-66bc-4df7-a36b-d62458601fb8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:18:14 crc kubenswrapper[4826]: I0131 08:18:14.408070 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45af7810-66bc-4df7-a36b-d62458601fb8-kube-api-access-7rfl5" (OuterVolumeSpecName: "kube-api-access-7rfl5") pod "45af7810-66bc-4df7-a36b-d62458601fb8" (UID: "45af7810-66bc-4df7-a36b-d62458601fb8"). InnerVolumeSpecName "kube-api-access-7rfl5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:18:14 crc kubenswrapper[4826]: I0131 08:18:14.450767 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45af7810-66bc-4df7-a36b-d62458601fb8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "45af7810-66bc-4df7-a36b-d62458601fb8" (UID: "45af7810-66bc-4df7-a36b-d62458601fb8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:18:14 crc kubenswrapper[4826]: I0131 08:18:14.503277 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45af7810-66bc-4df7-a36b-d62458601fb8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 08:18:14 crc kubenswrapper[4826]: I0131 08:18:14.503314 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45af7810-66bc-4df7-a36b-d62458601fb8-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 08:18:14 crc kubenswrapper[4826]: I0131 08:18:14.503326 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rfl5\" (UniqueName: \"kubernetes.io/projected/45af7810-66bc-4df7-a36b-d62458601fb8-kube-api-access-7rfl5\") on node \"crc\" DevicePath \"\"" Jan 31 08:18:14 crc kubenswrapper[4826]: I0131 08:18:14.702261 4826 generic.go:334] "Generic (PLEG): container finished" podID="45af7810-66bc-4df7-a36b-d62458601fb8" containerID="d06b8bf72974df94686b6dc27f260443fd6e507faf5fbb147b1c697c3c440498" exitCode=0 Jan 31 08:18:14 crc kubenswrapper[4826]: I0131 08:18:14.702319 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tsk48" event={"ID":"45af7810-66bc-4df7-a36b-d62458601fb8","Type":"ContainerDied","Data":"d06b8bf72974df94686b6dc27f260443fd6e507faf5fbb147b1c697c3c440498"} Jan 31 08:18:14 crc kubenswrapper[4826]: I0131 08:18:14.702356 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tsk48" event={"ID":"45af7810-66bc-4df7-a36b-d62458601fb8","Type":"ContainerDied","Data":"dd7ac306801c4512385146754b07fb422c0cb233abb08a42b1897436b35e4900"} Jan 31 08:18:14 crc kubenswrapper[4826]: I0131 08:18:14.702360 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tsk48" Jan 31 08:18:14 crc kubenswrapper[4826]: I0131 08:18:14.702377 4826 scope.go:117] "RemoveContainer" containerID="d06b8bf72974df94686b6dc27f260443fd6e507faf5fbb147b1c697c3c440498" Jan 31 08:18:14 crc kubenswrapper[4826]: I0131 08:18:14.742480 4826 scope.go:117] "RemoveContainer" containerID="ec85724a12dc2d42dfccfa0dae62aa25af9fc0897b43cd8023530ffb62a3772d" Jan 31 08:18:14 crc kubenswrapper[4826]: I0131 08:18:14.743448 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tsk48"] Jan 31 08:18:14 crc kubenswrapper[4826]: I0131 08:18:14.753797 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tsk48"] Jan 31 08:18:14 crc kubenswrapper[4826]: I0131 08:18:14.765739 4826 scope.go:117] "RemoveContainer" containerID="2bc82534e76650d0111df0f69dd089c9edd14b81093d9b8f9e4be133b05638f6" Jan 31 08:18:14 crc kubenswrapper[4826]: I0131 08:18:14.807624 4826 scope.go:117] "RemoveContainer" containerID="d06b8bf72974df94686b6dc27f260443fd6e507faf5fbb147b1c697c3c440498" Jan 31 08:18:14 crc kubenswrapper[4826]: E0131 08:18:14.808778 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d06b8bf72974df94686b6dc27f260443fd6e507faf5fbb147b1c697c3c440498\": container with ID starting with d06b8bf72974df94686b6dc27f260443fd6e507faf5fbb147b1c697c3c440498 not found: ID does not exist" containerID="d06b8bf72974df94686b6dc27f260443fd6e507faf5fbb147b1c697c3c440498" Jan 31 08:18:14 crc kubenswrapper[4826]: I0131 08:18:14.808898 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d06b8bf72974df94686b6dc27f260443fd6e507faf5fbb147b1c697c3c440498"} err="failed to get container status \"d06b8bf72974df94686b6dc27f260443fd6e507faf5fbb147b1c697c3c440498\": rpc error: code = NotFound desc = could not find container \"d06b8bf72974df94686b6dc27f260443fd6e507faf5fbb147b1c697c3c440498\": container with ID starting with d06b8bf72974df94686b6dc27f260443fd6e507faf5fbb147b1c697c3c440498 not found: ID does not exist" Jan 31 08:18:14 crc kubenswrapper[4826]: I0131 08:18:14.808931 4826 scope.go:117] "RemoveContainer" containerID="ec85724a12dc2d42dfccfa0dae62aa25af9fc0897b43cd8023530ffb62a3772d" Jan 31 08:18:14 crc kubenswrapper[4826]: E0131 08:18:14.809440 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec85724a12dc2d42dfccfa0dae62aa25af9fc0897b43cd8023530ffb62a3772d\": container with ID starting with ec85724a12dc2d42dfccfa0dae62aa25af9fc0897b43cd8023530ffb62a3772d not found: ID does not exist" containerID="ec85724a12dc2d42dfccfa0dae62aa25af9fc0897b43cd8023530ffb62a3772d" Jan 31 08:18:14 crc kubenswrapper[4826]: I0131 08:18:14.809578 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec85724a12dc2d42dfccfa0dae62aa25af9fc0897b43cd8023530ffb62a3772d"} err="failed to get container status \"ec85724a12dc2d42dfccfa0dae62aa25af9fc0897b43cd8023530ffb62a3772d\": rpc error: code = NotFound desc = could not find container \"ec85724a12dc2d42dfccfa0dae62aa25af9fc0897b43cd8023530ffb62a3772d\": container with ID starting with ec85724a12dc2d42dfccfa0dae62aa25af9fc0897b43cd8023530ffb62a3772d not found: ID does not exist" Jan 31 08:18:14 crc kubenswrapper[4826]: I0131 08:18:14.809686 4826 scope.go:117] "RemoveContainer" 
containerID="2bc82534e76650d0111df0f69dd089c9edd14b81093d9b8f9e4be133b05638f6" Jan 31 08:18:14 crc kubenswrapper[4826]: E0131 08:18:14.810150 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bc82534e76650d0111df0f69dd089c9edd14b81093d9b8f9e4be133b05638f6\": container with ID starting with 2bc82534e76650d0111df0f69dd089c9edd14b81093d9b8f9e4be133b05638f6 not found: ID does not exist" containerID="2bc82534e76650d0111df0f69dd089c9edd14b81093d9b8f9e4be133b05638f6" Jan 31 08:18:14 crc kubenswrapper[4826]: I0131 08:18:14.810259 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bc82534e76650d0111df0f69dd089c9edd14b81093d9b8f9e4be133b05638f6"} err="failed to get container status \"2bc82534e76650d0111df0f69dd089c9edd14b81093d9b8f9e4be133b05638f6\": rpc error: code = NotFound desc = could not find container \"2bc82534e76650d0111df0f69dd089c9edd14b81093d9b8f9e4be133b05638f6\": container with ID starting with 2bc82534e76650d0111df0f69dd089c9edd14b81093d9b8f9e4be133b05638f6 not found: ID does not exist" Jan 31 08:18:14 crc kubenswrapper[4826]: I0131 08:18:14.820659 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45af7810-66bc-4df7-a36b-d62458601fb8" path="/var/lib/kubelet/pods/45af7810-66bc-4df7-a36b-d62458601fb8/volumes" Jan 31 08:18:15 crc kubenswrapper[4826]: I0131 08:18:15.808743 4826 scope.go:117] "RemoveContainer" containerID="4429222bf3d4f569e9c41884c938deefe5eb279c0a3297984f8955236857e1a8" Jan 31 08:18:15 crc kubenswrapper[4826]: E0131 08:18:15.809156 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:18:26 crc kubenswrapper[4826]: I0131 08:18:26.811126 4826 scope.go:117] "RemoveContainer" containerID="4429222bf3d4f569e9c41884c938deefe5eb279c0a3297984f8955236857e1a8" Jan 31 08:18:26 crc kubenswrapper[4826]: E0131 08:18:26.811952 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:18:40 crc kubenswrapper[4826]: I0131 08:18:40.808789 4826 scope.go:117] "RemoveContainer" containerID="4429222bf3d4f569e9c41884c938deefe5eb279c0a3297984f8955236857e1a8" Jan 31 08:18:40 crc kubenswrapper[4826]: E0131 08:18:40.809641 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:18:51 crc kubenswrapper[4826]: I0131 08:18:51.809795 4826 scope.go:117] "RemoveContainer" 
containerID="4429222bf3d4f569e9c41884c938deefe5eb279c0a3297984f8955236857e1a8" Jan 31 08:18:51 crc kubenswrapper[4826]: E0131 08:18:51.810854 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:19:02 crc kubenswrapper[4826]: I0131 08:19:02.809436 4826 scope.go:117] "RemoveContainer" containerID="4429222bf3d4f569e9c41884c938deefe5eb279c0a3297984f8955236857e1a8" Jan 31 08:19:02 crc kubenswrapper[4826]: E0131 08:19:02.810706 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:19:11 crc kubenswrapper[4826]: I0131 08:19:11.259461 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-z8tm7"] Jan 31 08:19:11 crc kubenswrapper[4826]: E0131 08:19:11.263821 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45af7810-66bc-4df7-a36b-d62458601fb8" containerName="extract-content" Jan 31 08:19:11 crc kubenswrapper[4826]: I0131 08:19:11.263853 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="45af7810-66bc-4df7-a36b-d62458601fb8" containerName="extract-content" Jan 31 08:19:11 crc kubenswrapper[4826]: E0131 08:19:11.263880 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45af7810-66bc-4df7-a36b-d62458601fb8" containerName="extract-utilities" Jan 31 08:19:11 crc kubenswrapper[4826]: I0131 08:19:11.263889 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="45af7810-66bc-4df7-a36b-d62458601fb8" containerName="extract-utilities" Jan 31 08:19:11 crc kubenswrapper[4826]: E0131 08:19:11.263912 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45af7810-66bc-4df7-a36b-d62458601fb8" containerName="registry-server" Jan 31 08:19:11 crc kubenswrapper[4826]: I0131 08:19:11.263920 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="45af7810-66bc-4df7-a36b-d62458601fb8" containerName="registry-server" Jan 31 08:19:11 crc kubenswrapper[4826]: I0131 08:19:11.265594 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="45af7810-66bc-4df7-a36b-d62458601fb8" containerName="registry-server" Jan 31 08:19:11 crc kubenswrapper[4826]: I0131 08:19:11.272717 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z8tm7" Jan 31 08:19:11 crc kubenswrapper[4826]: I0131 08:19:11.281352 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z8tm7"] Jan 31 08:19:11 crc kubenswrapper[4826]: I0131 08:19:11.322269 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqbf4\" (UniqueName: \"kubernetes.io/projected/f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2-kube-api-access-hqbf4\") pod \"redhat-marketplace-z8tm7\" (UID: \"f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2\") " pod="openshift-marketplace/redhat-marketplace-z8tm7" Jan 31 08:19:11 crc kubenswrapper[4826]: I0131 08:19:11.322570 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2-utilities\") pod \"redhat-marketplace-z8tm7\" (UID: \"f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2\") " pod="openshift-marketplace/redhat-marketplace-z8tm7" Jan 31 08:19:11 crc kubenswrapper[4826]: I0131 08:19:11.322789 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2-catalog-content\") pod \"redhat-marketplace-z8tm7\" (UID: \"f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2\") " pod="openshift-marketplace/redhat-marketplace-z8tm7" Jan 31 08:19:11 crc kubenswrapper[4826]: I0131 08:19:11.425247 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqbf4\" (UniqueName: \"kubernetes.io/projected/f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2-kube-api-access-hqbf4\") pod \"redhat-marketplace-z8tm7\" (UID: \"f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2\") " pod="openshift-marketplace/redhat-marketplace-z8tm7" Jan 31 08:19:11 crc kubenswrapper[4826]: I0131 08:19:11.425331 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2-utilities\") pod \"redhat-marketplace-z8tm7\" (UID: \"f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2\") " pod="openshift-marketplace/redhat-marketplace-z8tm7" Jan 31 08:19:11 crc kubenswrapper[4826]: I0131 08:19:11.425446 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2-catalog-content\") pod \"redhat-marketplace-z8tm7\" (UID: \"f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2\") " pod="openshift-marketplace/redhat-marketplace-z8tm7" Jan 31 08:19:11 crc kubenswrapper[4826]: I0131 08:19:11.425933 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2-catalog-content\") pod \"redhat-marketplace-z8tm7\" (UID: \"f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2\") " pod="openshift-marketplace/redhat-marketplace-z8tm7" Jan 31 08:19:11 crc kubenswrapper[4826]: I0131 08:19:11.426614 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2-utilities\") pod \"redhat-marketplace-z8tm7\" (UID: \"f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2\") " pod="openshift-marketplace/redhat-marketplace-z8tm7" Jan 31 08:19:11 crc kubenswrapper[4826]: I0131 08:19:11.447127 4826 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-hqbf4\" (UniqueName: \"kubernetes.io/projected/f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2-kube-api-access-hqbf4\") pod \"redhat-marketplace-z8tm7\" (UID: \"f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2\") " pod="openshift-marketplace/redhat-marketplace-z8tm7" Jan 31 08:19:11 crc kubenswrapper[4826]: I0131 08:19:11.598782 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z8tm7" Jan 31 08:19:11 crc kubenswrapper[4826]: I0131 08:19:11.881232 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z8tm7"] Jan 31 08:19:12 crc kubenswrapper[4826]: I0131 08:19:12.276601 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z8tm7" event={"ID":"f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2","Type":"ContainerStarted","Data":"ca2146e79845528ee1afe401698a4efc6f816deaee75de676ed7b62d4a144974"} Jan 31 08:19:12 crc kubenswrapper[4826]: I0131 08:19:12.276660 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z8tm7" event={"ID":"f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2","Type":"ContainerStarted","Data":"f1a685a89dd7745632e9fe07334c8afd667c0bb31359706ddc5d93e791d14d5b"} Jan 31 08:19:13 crc kubenswrapper[4826]: I0131 08:19:13.289844 4826 generic.go:334] "Generic (PLEG): container finished" podID="f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2" containerID="ca2146e79845528ee1afe401698a4efc6f816deaee75de676ed7b62d4a144974" exitCode=0 Jan 31 08:19:13 crc kubenswrapper[4826]: I0131 08:19:13.289913 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z8tm7" event={"ID":"f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2","Type":"ContainerDied","Data":"ca2146e79845528ee1afe401698a4efc6f816deaee75de676ed7b62d4a144974"} Jan 31 08:19:13 crc kubenswrapper[4826]: I0131 08:19:13.292736 4826 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 08:19:14 crc kubenswrapper[4826]: I0131 08:19:14.809240 4826 scope.go:117] "RemoveContainer" containerID="4429222bf3d4f569e9c41884c938deefe5eb279c0a3297984f8955236857e1a8" Jan 31 08:19:14 crc kubenswrapper[4826]: E0131 08:19:14.809646 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:19:15 crc kubenswrapper[4826]: I0131 08:19:15.318152 4826 generic.go:334] "Generic (PLEG): container finished" podID="f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2" containerID="14dc24bbae63f2d54603c04eae06b26e037bb5fa3455237ea522e30df26b7d16" exitCode=0 Jan 31 08:19:15 crc kubenswrapper[4826]: I0131 08:19:15.318351 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z8tm7" event={"ID":"f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2","Type":"ContainerDied","Data":"14dc24bbae63f2d54603c04eae06b26e037bb5fa3455237ea522e30df26b7d16"} Jan 31 08:19:17 crc kubenswrapper[4826]: I0131 08:19:17.348358 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z8tm7" 
event={"ID":"f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2","Type":"ContainerStarted","Data":"90fa3cb2729739e52427372624d29118dd48b8ec6cee62dae4f64301b6195e33"} Jan 31 08:19:17 crc kubenswrapper[4826]: I0131 08:19:17.374162 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-z8tm7" podStartSLOduration=3.548896758 podStartE2EDuration="6.374131058s" podCreationTimestamp="2026-01-31 08:19:11 +0000 UTC" firstStartedPulling="2026-01-31 08:19:13.292504583 +0000 UTC m=+2585.146390942" lastFinishedPulling="2026-01-31 08:19:16.117738883 +0000 UTC m=+2587.971625242" observedRunningTime="2026-01-31 08:19:17.362910698 +0000 UTC m=+2589.216797067" watchObservedRunningTime="2026-01-31 08:19:17.374131058 +0000 UTC m=+2589.228017457" Jan 31 08:19:21 crc kubenswrapper[4826]: I0131 08:19:21.599559 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-z8tm7" Jan 31 08:19:21 crc kubenswrapper[4826]: I0131 08:19:21.600063 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-z8tm7" Jan 31 08:19:21 crc kubenswrapper[4826]: I0131 08:19:21.666317 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-z8tm7" Jan 31 08:19:22 crc kubenswrapper[4826]: I0131 08:19:22.446990 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-z8tm7" Jan 31 08:19:22 crc kubenswrapper[4826]: I0131 08:19:22.496335 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z8tm7"] Jan 31 08:19:24 crc kubenswrapper[4826]: I0131 08:19:24.408444 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-z8tm7" podUID="f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2" containerName="registry-server" containerID="cri-o://90fa3cb2729739e52427372624d29118dd48b8ec6cee62dae4f64301b6195e33" gracePeriod=2 Jan 31 08:19:24 crc kubenswrapper[4826]: I0131 08:19:24.836548 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z8tm7" Jan 31 08:19:24 crc kubenswrapper[4826]: I0131 08:19:24.864862 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2-utilities\") pod \"f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2\" (UID: \"f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2\") " Jan 31 08:19:24 crc kubenswrapper[4826]: I0131 08:19:24.865363 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqbf4\" (UniqueName: \"kubernetes.io/projected/f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2-kube-api-access-hqbf4\") pod \"f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2\" (UID: \"f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2\") " Jan 31 08:19:24 crc kubenswrapper[4826]: I0131 08:19:24.865957 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2-catalog-content\") pod \"f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2\" (UID: \"f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2\") " Jan 31 08:19:24 crc kubenswrapper[4826]: I0131 08:19:24.866183 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2-utilities" (OuterVolumeSpecName: "utilities") pod "f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2" (UID: "f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:19:24 crc kubenswrapper[4826]: I0131 08:19:24.867526 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 08:19:24 crc kubenswrapper[4826]: I0131 08:19:24.881270 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2-kube-api-access-hqbf4" (OuterVolumeSpecName: "kube-api-access-hqbf4") pod "f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2" (UID: "f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2"). InnerVolumeSpecName "kube-api-access-hqbf4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:19:24 crc kubenswrapper[4826]: I0131 08:19:24.896780 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2" (UID: "f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:19:24 crc kubenswrapper[4826]: I0131 08:19:24.969599 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqbf4\" (UniqueName: \"kubernetes.io/projected/f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2-kube-api-access-hqbf4\") on node \"crc\" DevicePath \"\"" Jan 31 08:19:24 crc kubenswrapper[4826]: I0131 08:19:24.969639 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 08:19:25 crc kubenswrapper[4826]: I0131 08:19:25.426501 4826 generic.go:334] "Generic (PLEG): container finished" podID="f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2" containerID="90fa3cb2729739e52427372624d29118dd48b8ec6cee62dae4f64301b6195e33" exitCode=0 Jan 31 08:19:25 crc kubenswrapper[4826]: I0131 08:19:25.426555 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z8tm7" event={"ID":"f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2","Type":"ContainerDied","Data":"90fa3cb2729739e52427372624d29118dd48b8ec6cee62dae4f64301b6195e33"} Jan 31 08:19:25 crc kubenswrapper[4826]: I0131 08:19:25.426584 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z8tm7" event={"ID":"f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2","Type":"ContainerDied","Data":"f1a685a89dd7745632e9fe07334c8afd667c0bb31359706ddc5d93e791d14d5b"} Jan 31 08:19:25 crc kubenswrapper[4826]: I0131 08:19:25.426602 4826 scope.go:117] "RemoveContainer" containerID="90fa3cb2729739e52427372624d29118dd48b8ec6cee62dae4f64301b6195e33" Jan 31 08:19:25 crc kubenswrapper[4826]: I0131 08:19:25.426692 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z8tm7" Jan 31 08:19:25 crc kubenswrapper[4826]: I0131 08:19:25.454829 4826 scope.go:117] "RemoveContainer" containerID="14dc24bbae63f2d54603c04eae06b26e037bb5fa3455237ea522e30df26b7d16" Jan 31 08:19:25 crc kubenswrapper[4826]: I0131 08:19:25.461595 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z8tm7"] Jan 31 08:19:25 crc kubenswrapper[4826]: I0131 08:19:25.470547 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-z8tm7"] Jan 31 08:19:25 crc kubenswrapper[4826]: I0131 08:19:25.473569 4826 scope.go:117] "RemoveContainer" containerID="ca2146e79845528ee1afe401698a4efc6f816deaee75de676ed7b62d4a144974" Jan 31 08:19:25 crc kubenswrapper[4826]: I0131 08:19:25.531190 4826 scope.go:117] "RemoveContainer" containerID="90fa3cb2729739e52427372624d29118dd48b8ec6cee62dae4f64301b6195e33" Jan 31 08:19:25 crc kubenswrapper[4826]: E0131 08:19:25.532313 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90fa3cb2729739e52427372624d29118dd48b8ec6cee62dae4f64301b6195e33\": container with ID starting with 90fa3cb2729739e52427372624d29118dd48b8ec6cee62dae4f64301b6195e33 not found: ID does not exist" containerID="90fa3cb2729739e52427372624d29118dd48b8ec6cee62dae4f64301b6195e33" Jan 31 08:19:25 crc kubenswrapper[4826]: I0131 08:19:25.532357 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90fa3cb2729739e52427372624d29118dd48b8ec6cee62dae4f64301b6195e33"} err="failed to get container status \"90fa3cb2729739e52427372624d29118dd48b8ec6cee62dae4f64301b6195e33\": rpc error: code = NotFound desc = could not find container \"90fa3cb2729739e52427372624d29118dd48b8ec6cee62dae4f64301b6195e33\": container with ID starting with 90fa3cb2729739e52427372624d29118dd48b8ec6cee62dae4f64301b6195e33 not found: ID does not exist" Jan 31 08:19:25 crc kubenswrapper[4826]: I0131 08:19:25.532386 4826 scope.go:117] "RemoveContainer" containerID="14dc24bbae63f2d54603c04eae06b26e037bb5fa3455237ea522e30df26b7d16" Jan 31 08:19:25 crc kubenswrapper[4826]: E0131 08:19:25.532753 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14dc24bbae63f2d54603c04eae06b26e037bb5fa3455237ea522e30df26b7d16\": container with ID starting with 14dc24bbae63f2d54603c04eae06b26e037bb5fa3455237ea522e30df26b7d16 not found: ID does not exist" containerID="14dc24bbae63f2d54603c04eae06b26e037bb5fa3455237ea522e30df26b7d16" Jan 31 08:19:25 crc kubenswrapper[4826]: I0131 08:19:25.532777 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14dc24bbae63f2d54603c04eae06b26e037bb5fa3455237ea522e30df26b7d16"} err="failed to get container status \"14dc24bbae63f2d54603c04eae06b26e037bb5fa3455237ea522e30df26b7d16\": rpc error: code = NotFound desc = could not find container \"14dc24bbae63f2d54603c04eae06b26e037bb5fa3455237ea522e30df26b7d16\": container with ID starting with 14dc24bbae63f2d54603c04eae06b26e037bb5fa3455237ea522e30df26b7d16 not found: ID does not exist" Jan 31 08:19:25 crc kubenswrapper[4826]: I0131 08:19:25.532791 4826 scope.go:117] "RemoveContainer" containerID="ca2146e79845528ee1afe401698a4efc6f816deaee75de676ed7b62d4a144974" Jan 31 08:19:25 crc kubenswrapper[4826]: E0131 08:19:25.533127 4826 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"ca2146e79845528ee1afe401698a4efc6f816deaee75de676ed7b62d4a144974\": container with ID starting with ca2146e79845528ee1afe401698a4efc6f816deaee75de676ed7b62d4a144974 not found: ID does not exist" containerID="ca2146e79845528ee1afe401698a4efc6f816deaee75de676ed7b62d4a144974" Jan 31 08:19:25 crc kubenswrapper[4826]: I0131 08:19:25.533150 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca2146e79845528ee1afe401698a4efc6f816deaee75de676ed7b62d4a144974"} err="failed to get container status \"ca2146e79845528ee1afe401698a4efc6f816deaee75de676ed7b62d4a144974\": rpc error: code = NotFound desc = could not find container \"ca2146e79845528ee1afe401698a4efc6f816deaee75de676ed7b62d4a144974\": container with ID starting with ca2146e79845528ee1afe401698a4efc6f816deaee75de676ed7b62d4a144974 not found: ID does not exist" Jan 31 08:19:26 crc kubenswrapper[4826]: I0131 08:19:26.819742 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2" path="/var/lib/kubelet/pods/f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2/volumes" Jan 31 08:19:27 crc kubenswrapper[4826]: I0131 08:19:27.809225 4826 scope.go:117] "RemoveContainer" containerID="4429222bf3d4f569e9c41884c938deefe5eb279c0a3297984f8955236857e1a8" Jan 31 08:19:27 crc kubenswrapper[4826]: E0131 08:19:27.809543 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:19:39 crc kubenswrapper[4826]: I0131 08:19:39.194220 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-stg8n"] Jan 31 08:19:39 crc kubenswrapper[4826]: E0131 08:19:39.197130 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2" containerName="extract-utilities" Jan 31 08:19:39 crc kubenswrapper[4826]: I0131 08:19:39.197251 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2" containerName="extract-utilities" Jan 31 08:19:39 crc kubenswrapper[4826]: E0131 08:19:39.197337 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2" containerName="registry-server" Jan 31 08:19:39 crc kubenswrapper[4826]: I0131 08:19:39.197413 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2" containerName="registry-server" Jan 31 08:19:39 crc kubenswrapper[4826]: E0131 08:19:39.197497 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2" containerName="extract-content" Jan 31 08:19:39 crc kubenswrapper[4826]: I0131 08:19:39.197568 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2" containerName="extract-content" Jan 31 08:19:39 crc kubenswrapper[4826]: I0131 08:19:39.197859 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7d855c2-1bf8-4fa0-9509-0c8b69f8d0a2" containerName="registry-server" Jan 31 08:19:39 crc kubenswrapper[4826]: I0131 08:19:39.199407 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-stg8n" Jan 31 08:19:39 crc kubenswrapper[4826]: I0131 08:19:39.207237 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-stg8n"] Jan 31 08:19:39 crc kubenswrapper[4826]: I0131 08:19:39.248343 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27-catalog-content\") pod \"redhat-operators-stg8n\" (UID: \"eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27\") " pod="openshift-marketplace/redhat-operators-stg8n" Jan 31 08:19:39 crc kubenswrapper[4826]: I0131 08:19:39.248447 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27-utilities\") pod \"redhat-operators-stg8n\" (UID: \"eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27\") " pod="openshift-marketplace/redhat-operators-stg8n" Jan 31 08:19:39 crc kubenswrapper[4826]: I0131 08:19:39.248500 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vwpw\" (UniqueName: \"kubernetes.io/projected/eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27-kube-api-access-9vwpw\") pod \"redhat-operators-stg8n\" (UID: \"eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27\") " pod="openshift-marketplace/redhat-operators-stg8n" Jan 31 08:19:39 crc kubenswrapper[4826]: I0131 08:19:39.349579 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27-catalog-content\") pod \"redhat-operators-stg8n\" (UID: \"eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27\") " pod="openshift-marketplace/redhat-operators-stg8n" Jan 31 08:19:39 crc kubenswrapper[4826]: I0131 08:19:39.349672 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27-utilities\") pod \"redhat-operators-stg8n\" (UID: \"eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27\") " pod="openshift-marketplace/redhat-operators-stg8n" Jan 31 08:19:39 crc kubenswrapper[4826]: I0131 08:19:39.349727 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vwpw\" (UniqueName: \"kubernetes.io/projected/eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27-kube-api-access-9vwpw\") pod \"redhat-operators-stg8n\" (UID: \"eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27\") " pod="openshift-marketplace/redhat-operators-stg8n" Jan 31 08:19:39 crc kubenswrapper[4826]: I0131 08:19:39.350249 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27-catalog-content\") pod \"redhat-operators-stg8n\" (UID: \"eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27\") " pod="openshift-marketplace/redhat-operators-stg8n" Jan 31 08:19:39 crc kubenswrapper[4826]: I0131 08:19:39.350310 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27-utilities\") pod \"redhat-operators-stg8n\" (UID: \"eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27\") " pod="openshift-marketplace/redhat-operators-stg8n" Jan 31 08:19:39 crc kubenswrapper[4826]: I0131 08:19:39.368324 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-9vwpw\" (UniqueName: \"kubernetes.io/projected/eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27-kube-api-access-9vwpw\") pod \"redhat-operators-stg8n\" (UID: \"eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27\") " pod="openshift-marketplace/redhat-operators-stg8n" Jan 31 08:19:39 crc kubenswrapper[4826]: I0131 08:19:39.521078 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-stg8n" Jan 31 08:19:39 crc kubenswrapper[4826]: I0131 08:19:39.987596 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-stg8n"] Jan 31 08:19:40 crc kubenswrapper[4826]: I0131 08:19:40.565323 4826 generic.go:334] "Generic (PLEG): container finished" podID="eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27" containerID="d603597a18745abe5e54036a527109687743e217fb40a9187a64f167eebdb651" exitCode=0 Jan 31 08:19:40 crc kubenswrapper[4826]: I0131 08:19:40.565433 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-stg8n" event={"ID":"eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27","Type":"ContainerDied","Data":"d603597a18745abe5e54036a527109687743e217fb40a9187a64f167eebdb651"} Jan 31 08:19:40 crc kubenswrapper[4826]: I0131 08:19:40.565615 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-stg8n" event={"ID":"eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27","Type":"ContainerStarted","Data":"665c323406b9da23a5cf839475b60e933d1b482dfc86215b69ac2b7fed497b3a"} Jan 31 08:19:41 crc kubenswrapper[4826]: I0131 08:19:41.577277 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-stg8n" event={"ID":"eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27","Type":"ContainerStarted","Data":"d190f9ed05b4f2ebd19e31410fe68e1d328799c479c410c1aa718ffc31eb74ce"} Jan 31 08:19:41 crc kubenswrapper[4826]: I0131 08:19:41.808711 4826 scope.go:117] "RemoveContainer" containerID="4429222bf3d4f569e9c41884c938deefe5eb279c0a3297984f8955236857e1a8" Jan 31 08:19:41 crc kubenswrapper[4826]: E0131 08:19:41.808998 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:19:42 crc kubenswrapper[4826]: I0131 08:19:42.586308 4826 generic.go:334] "Generic (PLEG): container finished" podID="eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27" containerID="d190f9ed05b4f2ebd19e31410fe68e1d328799c479c410c1aa718ffc31eb74ce" exitCode=0 Jan 31 08:19:42 crc kubenswrapper[4826]: I0131 08:19:42.586359 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-stg8n" event={"ID":"eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27","Type":"ContainerDied","Data":"d190f9ed05b4f2ebd19e31410fe68e1d328799c479c410c1aa718ffc31eb74ce"} Jan 31 08:19:43 crc kubenswrapper[4826]: I0131 08:19:43.596252 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-stg8n" event={"ID":"eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27","Type":"ContainerStarted","Data":"4e6253dbb2db1841d09d3ef288da096d665e078f955d0cba14a9c5c673e4c4b9"} Jan 31 08:19:43 crc kubenswrapper[4826]: I0131 08:19:43.617056 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-stg8n" podStartSLOduration=2.206773861 podStartE2EDuration="4.617038604s" podCreationTimestamp="2026-01-31 08:19:39 +0000 UTC" firstStartedPulling="2026-01-31 08:19:40.566630598 +0000 UTC m=+2612.420516957" lastFinishedPulling="2026-01-31 08:19:42.976895321 +0000 UTC m=+2614.830781700" observedRunningTime="2026-01-31 08:19:43.610791936 +0000 UTC m=+2615.464678295" watchObservedRunningTime="2026-01-31 08:19:43.617038604 +0000 UTC m=+2615.470924963" Jan 31 08:19:49 crc kubenswrapper[4826]: I0131 08:19:49.521353 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-stg8n" Jan 31 08:19:49 crc kubenswrapper[4826]: I0131 08:19:49.522126 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-stg8n" Jan 31 08:19:49 crc kubenswrapper[4826]: I0131 08:19:49.573751 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-stg8n" Jan 31 08:19:49 crc kubenswrapper[4826]: I0131 08:19:49.690716 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-stg8n" Jan 31 08:19:49 crc kubenswrapper[4826]: I0131 08:19:49.832115 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-stg8n"] Jan 31 08:19:51 crc kubenswrapper[4826]: I0131 08:19:51.663649 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-stg8n" podUID="eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27" containerName="registry-server" containerID="cri-o://4e6253dbb2db1841d09d3ef288da096d665e078f955d0cba14a9c5c673e4c4b9" gracePeriod=2 Jan 31 08:19:52 crc kubenswrapper[4826]: I0131 08:19:52.085491 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-stg8n" Jan 31 08:19:52 crc kubenswrapper[4826]: I0131 08:19:52.172066 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27-catalog-content\") pod \"eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27\" (UID: \"eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27\") " Jan 31 08:19:52 crc kubenswrapper[4826]: I0131 08:19:52.172206 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vwpw\" (UniqueName: \"kubernetes.io/projected/eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27-kube-api-access-9vwpw\") pod \"eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27\" (UID: \"eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27\") " Jan 31 08:19:52 crc kubenswrapper[4826]: I0131 08:19:52.172244 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27-utilities\") pod \"eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27\" (UID: \"eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27\") " Jan 31 08:19:52 crc kubenswrapper[4826]: I0131 08:19:52.173552 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27-utilities" (OuterVolumeSpecName: "utilities") pod "eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27" (UID: "eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:19:52 crc kubenswrapper[4826]: I0131 08:19:52.178876 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27-kube-api-access-9vwpw" (OuterVolumeSpecName: "kube-api-access-9vwpw") pod "eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27" (UID: "eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27"). InnerVolumeSpecName "kube-api-access-9vwpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:19:52 crc kubenswrapper[4826]: I0131 08:19:52.274887 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vwpw\" (UniqueName: \"kubernetes.io/projected/eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27-kube-api-access-9vwpw\") on node \"crc\" DevicePath \"\"" Jan 31 08:19:52 crc kubenswrapper[4826]: I0131 08:19:52.274924 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 08:19:52 crc kubenswrapper[4826]: I0131 08:19:52.674636 4826 generic.go:334] "Generic (PLEG): container finished" podID="eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27" containerID="4e6253dbb2db1841d09d3ef288da096d665e078f955d0cba14a9c5c673e4c4b9" exitCode=0 Jan 31 08:19:52 crc kubenswrapper[4826]: I0131 08:19:52.674667 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-stg8n" event={"ID":"eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27","Type":"ContainerDied","Data":"4e6253dbb2db1841d09d3ef288da096d665e078f955d0cba14a9c5c673e4c4b9"} Jan 31 08:19:52 crc kubenswrapper[4826]: I0131 08:19:52.674694 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-stg8n" Jan 31 08:19:52 crc kubenswrapper[4826]: I0131 08:19:52.674709 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-stg8n" event={"ID":"eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27","Type":"ContainerDied","Data":"665c323406b9da23a5cf839475b60e933d1b482dfc86215b69ac2b7fed497b3a"} Jan 31 08:19:52 crc kubenswrapper[4826]: I0131 08:19:52.674727 4826 scope.go:117] "RemoveContainer" containerID="4e6253dbb2db1841d09d3ef288da096d665e078f955d0cba14a9c5c673e4c4b9" Jan 31 08:19:52 crc kubenswrapper[4826]: I0131 08:19:52.695463 4826 scope.go:117] "RemoveContainer" containerID="d190f9ed05b4f2ebd19e31410fe68e1d328799c479c410c1aa718ffc31eb74ce" Jan 31 08:19:52 crc kubenswrapper[4826]: I0131 08:19:52.732548 4826 scope.go:117] "RemoveContainer" containerID="d603597a18745abe5e54036a527109687743e217fb40a9187a64f167eebdb651" Jan 31 08:19:52 crc kubenswrapper[4826]: I0131 08:19:52.807828 4826 scope.go:117] "RemoveContainer" containerID="4e6253dbb2db1841d09d3ef288da096d665e078f955d0cba14a9c5c673e4c4b9" Jan 31 08:19:52 crc kubenswrapper[4826]: E0131 08:19:52.808595 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e6253dbb2db1841d09d3ef288da096d665e078f955d0cba14a9c5c673e4c4b9\": container with ID starting with 4e6253dbb2db1841d09d3ef288da096d665e078f955d0cba14a9c5c673e4c4b9 not found: ID does not exist" containerID="4e6253dbb2db1841d09d3ef288da096d665e078f955d0cba14a9c5c673e4c4b9" Jan 31 08:19:52 crc kubenswrapper[4826]: I0131 08:19:52.808850 4826 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4e6253dbb2db1841d09d3ef288da096d665e078f955d0cba14a9c5c673e4c4b9"} err="failed to get container status \"4e6253dbb2db1841d09d3ef288da096d665e078f955d0cba14a9c5c673e4c4b9\": rpc error: code = NotFound desc = could not find container \"4e6253dbb2db1841d09d3ef288da096d665e078f955d0cba14a9c5c673e4c4b9\": container with ID starting with 4e6253dbb2db1841d09d3ef288da096d665e078f955d0cba14a9c5c673e4c4b9 not found: ID does not exist" Jan 31 08:19:52 crc kubenswrapper[4826]: I0131 08:19:52.808882 4826 scope.go:117] "RemoveContainer" containerID="d190f9ed05b4f2ebd19e31410fe68e1d328799c479c410c1aa718ffc31eb74ce" Jan 31 08:19:52 crc kubenswrapper[4826]: E0131 08:19:52.809877 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d190f9ed05b4f2ebd19e31410fe68e1d328799c479c410c1aa718ffc31eb74ce\": container with ID starting with d190f9ed05b4f2ebd19e31410fe68e1d328799c479c410c1aa718ffc31eb74ce not found: ID does not exist" containerID="d190f9ed05b4f2ebd19e31410fe68e1d328799c479c410c1aa718ffc31eb74ce" Jan 31 08:19:52 crc kubenswrapper[4826]: I0131 08:19:52.810029 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d190f9ed05b4f2ebd19e31410fe68e1d328799c479c410c1aa718ffc31eb74ce"} err="failed to get container status \"d190f9ed05b4f2ebd19e31410fe68e1d328799c479c410c1aa718ffc31eb74ce\": rpc error: code = NotFound desc = could not find container \"d190f9ed05b4f2ebd19e31410fe68e1d328799c479c410c1aa718ffc31eb74ce\": container with ID starting with d190f9ed05b4f2ebd19e31410fe68e1d328799c479c410c1aa718ffc31eb74ce not found: ID does not exist" Jan 31 08:19:52 crc kubenswrapper[4826]: I0131 08:19:52.810132 4826 scope.go:117] "RemoveContainer" containerID="d603597a18745abe5e54036a527109687743e217fb40a9187a64f167eebdb651" Jan 31 08:19:52 crc kubenswrapper[4826]: E0131 08:19:52.810598 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d603597a18745abe5e54036a527109687743e217fb40a9187a64f167eebdb651\": container with ID starting with d603597a18745abe5e54036a527109687743e217fb40a9187a64f167eebdb651 not found: ID does not exist" containerID="d603597a18745abe5e54036a527109687743e217fb40a9187a64f167eebdb651" Jan 31 08:19:52 crc kubenswrapper[4826]: I0131 08:19:52.810654 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d603597a18745abe5e54036a527109687743e217fb40a9187a64f167eebdb651"} err="failed to get container status \"d603597a18745abe5e54036a527109687743e217fb40a9187a64f167eebdb651\": rpc error: code = NotFound desc = could not find container \"d603597a18745abe5e54036a527109687743e217fb40a9187a64f167eebdb651\": container with ID starting with d603597a18745abe5e54036a527109687743e217fb40a9187a64f167eebdb651 not found: ID does not exist" Jan 31 08:19:53 crc kubenswrapper[4826]: I0131 08:19:53.172835 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27" (UID: "eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:19:53 crc kubenswrapper[4826]: I0131 08:19:53.191816 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 08:19:53 crc kubenswrapper[4826]: I0131 08:19:53.305434 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-stg8n"] Jan 31 08:19:53 crc kubenswrapper[4826]: I0131 08:19:53.313594 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-stg8n"] Jan 31 08:19:54 crc kubenswrapper[4826]: I0131 08:19:54.832400 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27" path="/var/lib/kubelet/pods/eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27/volumes" Jan 31 08:19:56 crc kubenswrapper[4826]: I0131 08:19:56.810797 4826 scope.go:117] "RemoveContainer" containerID="4429222bf3d4f569e9c41884c938deefe5eb279c0a3297984f8955236857e1a8" Jan 31 08:19:56 crc kubenswrapper[4826]: E0131 08:19:56.811857 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:20:10 crc kubenswrapper[4826]: I0131 08:20:10.424618 4826 scope.go:117] "RemoveContainer" containerID="4429222bf3d4f569e9c41884c938deefe5eb279c0a3297984f8955236857e1a8" Jan 31 08:20:11 crc kubenswrapper[4826]: I0131 08:20:11.452826 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" event={"ID":"ed10f53b-565a-4d14-a1d8-feabc15f08ea","Type":"ContainerStarted","Data":"de646e7eb20e44a926cca255169dcb2595a5e6b359b5609320726326f8a069e5"} Jan 31 08:21:43 crc kubenswrapper[4826]: I0131 08:21:43.252004 4826 generic.go:334] "Generic (PLEG): container finished" podID="cae3795a-7ee0-4ca7-aada-7f03190fb437" containerID="bf55c3e3e217461020863ff9b42def59fb1841fd7780d8fa7cb01c288160c55a" exitCode=0 Jan 31 08:21:43 crc kubenswrapper[4826]: I0131 08:21:43.252189 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g74tq" event={"ID":"cae3795a-7ee0-4ca7-aada-7f03190fb437","Type":"ContainerDied","Data":"bf55c3e3e217461020863ff9b42def59fb1841fd7780d8fa7cb01c288160c55a"} Jan 31 08:21:44 crc kubenswrapper[4826]: I0131 08:21:44.692699 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g74tq" Jan 31 08:21:44 crc kubenswrapper[4826]: I0131 08:21:44.803384 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cae3795a-7ee0-4ca7-aada-7f03190fb437-ceph\") pod \"cae3795a-7ee0-4ca7-aada-7f03190fb437\" (UID: \"cae3795a-7ee0-4ca7-aada-7f03190fb437\") " Jan 31 08:21:44 crc kubenswrapper[4826]: I0131 08:21:44.803508 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4tbg\" (UniqueName: \"kubernetes.io/projected/cae3795a-7ee0-4ca7-aada-7f03190fb437-kube-api-access-f4tbg\") pod \"cae3795a-7ee0-4ca7-aada-7f03190fb437\" (UID: \"cae3795a-7ee0-4ca7-aada-7f03190fb437\") " Jan 31 08:21:44 crc kubenswrapper[4826]: I0131 08:21:44.803564 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cae3795a-7ee0-4ca7-aada-7f03190fb437-inventory\") pod \"cae3795a-7ee0-4ca7-aada-7f03190fb437\" (UID: \"cae3795a-7ee0-4ca7-aada-7f03190fb437\") " Jan 31 08:21:44 crc kubenswrapper[4826]: I0131 08:21:44.803597 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cae3795a-7ee0-4ca7-aada-7f03190fb437-ssh-key-openstack-edpm-ipam\") pod \"cae3795a-7ee0-4ca7-aada-7f03190fb437\" (UID: \"cae3795a-7ee0-4ca7-aada-7f03190fb437\") " Jan 31 08:21:44 crc kubenswrapper[4826]: I0131 08:21:44.803711 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cae3795a-7ee0-4ca7-aada-7f03190fb437-libvirt-combined-ca-bundle\") pod \"cae3795a-7ee0-4ca7-aada-7f03190fb437\" (UID: \"cae3795a-7ee0-4ca7-aada-7f03190fb437\") " Jan 31 08:21:44 crc kubenswrapper[4826]: I0131 08:21:44.803813 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/cae3795a-7ee0-4ca7-aada-7f03190fb437-libvirt-secret-0\") pod \"cae3795a-7ee0-4ca7-aada-7f03190fb437\" (UID: \"cae3795a-7ee0-4ca7-aada-7f03190fb437\") " Jan 31 08:21:44 crc kubenswrapper[4826]: I0131 08:21:44.810421 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cae3795a-7ee0-4ca7-aada-7f03190fb437-kube-api-access-f4tbg" (OuterVolumeSpecName: "kube-api-access-f4tbg") pod "cae3795a-7ee0-4ca7-aada-7f03190fb437" (UID: "cae3795a-7ee0-4ca7-aada-7f03190fb437"). InnerVolumeSpecName "kube-api-access-f4tbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:21:44 crc kubenswrapper[4826]: I0131 08:21:44.811829 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cae3795a-7ee0-4ca7-aada-7f03190fb437-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "cae3795a-7ee0-4ca7-aada-7f03190fb437" (UID: "cae3795a-7ee0-4ca7-aada-7f03190fb437"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:21:44 crc kubenswrapper[4826]: I0131 08:21:44.819513 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cae3795a-7ee0-4ca7-aada-7f03190fb437-ceph" (OuterVolumeSpecName: "ceph") pod "cae3795a-7ee0-4ca7-aada-7f03190fb437" (UID: "cae3795a-7ee0-4ca7-aada-7f03190fb437"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:21:44 crc kubenswrapper[4826]: I0131 08:21:44.832938 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cae3795a-7ee0-4ca7-aada-7f03190fb437-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "cae3795a-7ee0-4ca7-aada-7f03190fb437" (UID: "cae3795a-7ee0-4ca7-aada-7f03190fb437"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:21:44 crc kubenswrapper[4826]: I0131 08:21:44.832964 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cae3795a-7ee0-4ca7-aada-7f03190fb437-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "cae3795a-7ee0-4ca7-aada-7f03190fb437" (UID: "cae3795a-7ee0-4ca7-aada-7f03190fb437"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:21:44 crc kubenswrapper[4826]: I0131 08:21:44.838384 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cae3795a-7ee0-4ca7-aada-7f03190fb437-inventory" (OuterVolumeSpecName: "inventory") pod "cae3795a-7ee0-4ca7-aada-7f03190fb437" (UID: "cae3795a-7ee0-4ca7-aada-7f03190fb437"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:21:44 crc kubenswrapper[4826]: I0131 08:21:44.908038 4826 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/cae3795a-7ee0-4ca7-aada-7f03190fb437-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 31 08:21:44 crc kubenswrapper[4826]: I0131 08:21:44.908840 4826 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cae3795a-7ee0-4ca7-aada-7f03190fb437-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 08:21:44 crc kubenswrapper[4826]: I0131 08:21:44.909007 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4tbg\" (UniqueName: \"kubernetes.io/projected/cae3795a-7ee0-4ca7-aada-7f03190fb437-kube-api-access-f4tbg\") on node \"crc\" DevicePath \"\"" Jan 31 08:21:44 crc kubenswrapper[4826]: I0131 08:21:44.909092 4826 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cae3795a-7ee0-4ca7-aada-7f03190fb437-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 08:21:44 crc kubenswrapper[4826]: I0131 08:21:44.909172 4826 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cae3795a-7ee0-4ca7-aada-7f03190fb437-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 08:21:44 crc kubenswrapper[4826]: I0131 08:21:44.909252 4826 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cae3795a-7ee0-4ca7-aada-7f03190fb437-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 08:21:45 crc kubenswrapper[4826]: I0131 08:21:45.268890 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g74tq" event={"ID":"cae3795a-7ee0-4ca7-aada-7f03190fb437","Type":"ContainerDied","Data":"2bee9c57dc851ea6eafcb462c1c5f6207b58e6653a319393e96a48f246be8844"} Jan 31 08:21:45 crc kubenswrapper[4826]: I0131 08:21:45.268929 4826 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="2bee9c57dc851ea6eafcb462c1c5f6207b58e6653a319393e96a48f246be8844" Jan 31 08:21:45 crc kubenswrapper[4826]: I0131 08:21:45.268949 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g74tq" Jan 31 08:21:45 crc kubenswrapper[4826]: I0131 08:21:45.366957 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r"] Jan 31 08:21:45 crc kubenswrapper[4826]: E0131 08:21:45.367631 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27" containerName="extract-content" Jan 31 08:21:45 crc kubenswrapper[4826]: I0131 08:21:45.367702 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27" containerName="extract-content" Jan 31 08:21:45 crc kubenswrapper[4826]: E0131 08:21:45.367767 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27" containerName="registry-server" Jan 31 08:21:45 crc kubenswrapper[4826]: I0131 08:21:45.367825 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27" containerName="registry-server" Jan 31 08:21:45 crc kubenswrapper[4826]: E0131 08:21:45.367885 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cae3795a-7ee0-4ca7-aada-7f03190fb437" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 31 08:21:45 crc kubenswrapper[4826]: I0131 08:21:45.367942 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="cae3795a-7ee0-4ca7-aada-7f03190fb437" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 31 08:21:45 crc kubenswrapper[4826]: E0131 08:21:45.368026 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27" containerName="extract-utilities" Jan 31 08:21:45 crc kubenswrapper[4826]: I0131 08:21:45.368092 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27" containerName="extract-utilities" Jan 31 08:21:45 crc kubenswrapper[4826]: I0131 08:21:45.368320 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb6c1c0d-4100-4fee-b0ec-f8a1fa23fc27" containerName="registry-server" Jan 31 08:21:45 crc kubenswrapper[4826]: I0131 08:21:45.368402 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="cae3795a-7ee0-4ca7-aada-7f03190fb437" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 31 08:21:45 crc kubenswrapper[4826]: I0131 08:21:45.369097 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r" Jan 31 08:21:45 crc kubenswrapper[4826]: I0131 08:21:45.372190 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 08:21:45 crc kubenswrapper[4826]: I0131 08:21:45.372368 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ceph-nova" Jan 31 08:21:45 crc kubenswrapper[4826]: I0131 08:21:45.372468 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 31 08:21:45 crc kubenswrapper[4826]: I0131 08:21:45.372574 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 31 08:21:45 crc kubenswrapper[4826]: I0131 08:21:45.372676 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 08:21:45 crc kubenswrapper[4826]: I0131 08:21:45.372808 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 08:21:45 crc kubenswrapper[4826]: I0131 08:21:45.372959 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hbmks" Jan 31 08:21:45 crc kubenswrapper[4826]: I0131 08:21:45.373131 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 31 08:21:45 crc kubenswrapper[4826]: I0131 08:21:45.374766 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 08:21:45 crc kubenswrapper[4826]: I0131 08:21:45.382000 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r"] Jan 31 08:21:45 crc kubenswrapper[4826]: I0131 08:21:45.519427 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/e0375a71-69b8-4909-b359-f6c66a475f79-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r\" (UID: \"e0375a71-69b8-4909-b359-f6c66a475f79\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r" Jan 31 08:21:45 crc kubenswrapper[4826]: I0131 08:21:45.519491 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/e0375a71-69b8-4909-b359-f6c66a475f79-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r\" (UID: \"e0375a71-69b8-4909-b359-f6c66a475f79\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r" Jan 31 08:21:45 crc kubenswrapper[4826]: I0131 08:21:45.519552 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e0375a71-69b8-4909-b359-f6c66a475f79-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r\" (UID: \"e0375a71-69b8-4909-b359-f6c66a475f79\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r" Jan 31 08:21:45 crc kubenswrapper[4826]: I0131 08:21:45.519651 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: 
\"kubernetes.io/secret/e0375a71-69b8-4909-b359-f6c66a475f79-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r\" (UID: \"e0375a71-69b8-4909-b359-f6c66a475f79\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r" Jan 31 08:21:45 crc kubenswrapper[4826]: I0131 08:21:45.519729 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/e0375a71-69b8-4909-b359-f6c66a475f79-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r\" (UID: \"e0375a71-69b8-4909-b359-f6c66a475f79\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r" Jan 31 08:21:45 crc kubenswrapper[4826]: I0131 08:21:45.519817 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0375a71-69b8-4909-b359-f6c66a475f79-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r\" (UID: \"e0375a71-69b8-4909-b359-f6c66a475f79\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r" Jan 31 08:21:45 crc kubenswrapper[4826]: I0131 08:21:45.519858 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxkbm\" (UniqueName: \"kubernetes.io/projected/e0375a71-69b8-4909-b359-f6c66a475f79-kube-api-access-mxkbm\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r\" (UID: \"e0375a71-69b8-4909-b359-f6c66a475f79\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r" Jan 31 08:21:45 crc kubenswrapper[4826]: I0131 08:21:45.520035 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/e0375a71-69b8-4909-b359-f6c66a475f79-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r\" (UID: \"e0375a71-69b8-4909-b359-f6c66a475f79\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r" Jan 31 08:21:45 crc kubenswrapper[4826]: I0131 08:21:45.520070 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0375a71-69b8-4909-b359-f6c66a475f79-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r\" (UID: \"e0375a71-69b8-4909-b359-f6c66a475f79\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r" Jan 31 08:21:45 crc kubenswrapper[4826]: I0131 08:21:45.520135 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/e0375a71-69b8-4909-b359-f6c66a475f79-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r\" (UID: \"e0375a71-69b8-4909-b359-f6c66a475f79\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r" Jan 31 08:21:45 crc kubenswrapper[4826]: I0131 08:21:45.520158 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e0375a71-69b8-4909-b359-f6c66a475f79-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r\" (UID: \"e0375a71-69b8-4909-b359-f6c66a475f79\") " 
pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r" Jan 31 08:21:45 crc kubenswrapper[4826]: I0131 08:21:45.621633 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0375a71-69b8-4909-b359-f6c66a475f79-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r\" (UID: \"e0375a71-69b8-4909-b359-f6c66a475f79\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r" Jan 31 08:21:45 crc kubenswrapper[4826]: I0131 08:21:45.621701 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxkbm\" (UniqueName: \"kubernetes.io/projected/e0375a71-69b8-4909-b359-f6c66a475f79-kube-api-access-mxkbm\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r\" (UID: \"e0375a71-69b8-4909-b359-f6c66a475f79\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r" Jan 31 08:21:45 crc kubenswrapper[4826]: I0131 08:21:45.621754 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/e0375a71-69b8-4909-b359-f6c66a475f79-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r\" (UID: \"e0375a71-69b8-4909-b359-f6c66a475f79\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r" Jan 31 08:21:45 crc kubenswrapper[4826]: I0131 08:21:45.621778 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0375a71-69b8-4909-b359-f6c66a475f79-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r\" (UID: \"e0375a71-69b8-4909-b359-f6c66a475f79\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r" Jan 31 08:21:45 crc kubenswrapper[4826]: I0131 08:21:45.621816 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/e0375a71-69b8-4909-b359-f6c66a475f79-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r\" (UID: \"e0375a71-69b8-4909-b359-f6c66a475f79\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r" Jan 31 08:21:45 crc kubenswrapper[4826]: I0131 08:21:45.621840 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e0375a71-69b8-4909-b359-f6c66a475f79-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r\" (UID: \"e0375a71-69b8-4909-b359-f6c66a475f79\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r" Jan 31 08:21:45 crc kubenswrapper[4826]: I0131 08:21:45.621919 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/e0375a71-69b8-4909-b359-f6c66a475f79-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r\" (UID: \"e0375a71-69b8-4909-b359-f6c66a475f79\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r" Jan 31 08:21:45 crc kubenswrapper[4826]: I0131 08:21:45.621955 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/e0375a71-69b8-4909-b359-f6c66a475f79-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r\" (UID: \"e0375a71-69b8-4909-b359-f6c66a475f79\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r" Jan 31 08:21:45 crc kubenswrapper[4826]: I0131 08:21:45.621999 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e0375a71-69b8-4909-b359-f6c66a475f79-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r\" (UID: \"e0375a71-69b8-4909-b359-f6c66a475f79\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r" Jan 31 08:21:45 crc kubenswrapper[4826]: I0131 08:21:45.622024 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/e0375a71-69b8-4909-b359-f6c66a475f79-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r\" (UID: \"e0375a71-69b8-4909-b359-f6c66a475f79\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r" Jan 31 08:21:45 crc kubenswrapper[4826]: I0131 08:21:45.622054 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/e0375a71-69b8-4909-b359-f6c66a475f79-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r\" (UID: \"e0375a71-69b8-4909-b359-f6c66a475f79\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r" Jan 31 08:21:45 crc kubenswrapper[4826]: I0131 08:21:45.622997 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/e0375a71-69b8-4909-b359-f6c66a475f79-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r\" (UID: \"e0375a71-69b8-4909-b359-f6c66a475f79\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r" Jan 31 08:21:45 crc kubenswrapper[4826]: I0131 08:21:45.623390 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/e0375a71-69b8-4909-b359-f6c66a475f79-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r\" (UID: \"e0375a71-69b8-4909-b359-f6c66a475f79\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r" Jan 31 08:21:45 crc kubenswrapper[4826]: I0131 08:21:45.626851 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/e0375a71-69b8-4909-b359-f6c66a475f79-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r\" (UID: \"e0375a71-69b8-4909-b359-f6c66a475f79\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r" Jan 31 08:21:45 crc kubenswrapper[4826]: I0131 08:21:45.627193 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/e0375a71-69b8-4909-b359-f6c66a475f79-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r\" (UID: \"e0375a71-69b8-4909-b359-f6c66a475f79\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r" Jan 31 08:21:45 crc kubenswrapper[4826]: I0131 08:21:45.627771 4826 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e0375a71-69b8-4909-b359-f6c66a475f79-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r\" (UID: \"e0375a71-69b8-4909-b359-f6c66a475f79\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r" Jan 31 08:21:45 crc kubenswrapper[4826]: I0131 08:21:45.628078 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/e0375a71-69b8-4909-b359-f6c66a475f79-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r\" (UID: \"e0375a71-69b8-4909-b359-f6c66a475f79\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r" Jan 31 08:21:45 crc kubenswrapper[4826]: I0131 08:21:45.628129 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0375a71-69b8-4909-b359-f6c66a475f79-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r\" (UID: \"e0375a71-69b8-4909-b359-f6c66a475f79\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r" Jan 31 08:21:45 crc kubenswrapper[4826]: I0131 08:21:45.628547 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0375a71-69b8-4909-b359-f6c66a475f79-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r\" (UID: \"e0375a71-69b8-4909-b359-f6c66a475f79\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r" Jan 31 08:21:45 crc kubenswrapper[4826]: I0131 08:21:45.628809 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/e0375a71-69b8-4909-b359-f6c66a475f79-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r\" (UID: \"e0375a71-69b8-4909-b359-f6c66a475f79\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r" Jan 31 08:21:45 crc kubenswrapper[4826]: I0131 08:21:45.629399 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e0375a71-69b8-4909-b359-f6c66a475f79-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r\" (UID: \"e0375a71-69b8-4909-b359-f6c66a475f79\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r" Jan 31 08:21:45 crc kubenswrapper[4826]: I0131 08:21:45.639631 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxkbm\" (UniqueName: \"kubernetes.io/projected/e0375a71-69b8-4909-b359-f6c66a475f79-kube-api-access-mxkbm\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r\" (UID: \"e0375a71-69b8-4909-b359-f6c66a475f79\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r" Jan 31 08:21:45 crc kubenswrapper[4826]: I0131 08:21:45.696488 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r" Jan 31 08:21:46 crc kubenswrapper[4826]: I0131 08:21:46.223058 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r"] Jan 31 08:21:46 crc kubenswrapper[4826]: I0131 08:21:46.283008 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r" event={"ID":"e0375a71-69b8-4909-b359-f6c66a475f79","Type":"ContainerStarted","Data":"bba059fea3c3b502f884cb10bcf272a26bcce3b431a7a9cc5e9a014d350e00b1"} Jan 31 08:21:47 crc kubenswrapper[4826]: I0131 08:21:47.297790 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r" event={"ID":"e0375a71-69b8-4909-b359-f6c66a475f79","Type":"ContainerStarted","Data":"d20ffd474df2761887fa581c9f6aee96eb209ec0d30486d059df1a7bf9ca4ff2"} Jan 31 08:21:47 crc kubenswrapper[4826]: I0131 08:21:47.328357 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r" podStartSLOduration=1.9413381520000002 podStartE2EDuration="2.328335713s" podCreationTimestamp="2026-01-31 08:21:45 +0000 UTC" firstStartedPulling="2026-01-31 08:21:46.23758008 +0000 UTC m=+2738.091466439" lastFinishedPulling="2026-01-31 08:21:46.624577641 +0000 UTC m=+2738.478464000" observedRunningTime="2026-01-31 08:21:47.317263578 +0000 UTC m=+2739.171149977" watchObservedRunningTime="2026-01-31 08:21:47.328335713 +0000 UTC m=+2739.182222082" Jan 31 08:22:27 crc kubenswrapper[4826]: I0131 08:22:27.377294 4826 patch_prober.go:28] interesting pod/machine-config-daemon-8v6ng container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 08:22:27 crc kubenswrapper[4826]: I0131 08:22:27.377911 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 08:22:57 crc kubenswrapper[4826]: I0131 08:22:57.376772 4826 patch_prober.go:28] interesting pod/machine-config-daemon-8v6ng container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 08:22:57 crc kubenswrapper[4826]: I0131 08:22:57.377284 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 08:23:27 crc kubenswrapper[4826]: I0131 08:23:27.376917 4826 patch_prober.go:28] interesting pod/machine-config-daemon-8v6ng container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 08:23:27 crc kubenswrapper[4826]: I0131 
08:23:27.377658 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 08:23:27 crc kubenswrapper[4826]: I0131 08:23:27.377725 4826 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" Jan 31 08:23:27 crc kubenswrapper[4826]: I0131 08:23:27.378901 4826 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"de646e7eb20e44a926cca255169dcb2595a5e6b359b5609320726326f8a069e5"} pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 08:23:27 crc kubenswrapper[4826]: I0131 08:23:27.378999 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" containerID="cri-o://de646e7eb20e44a926cca255169dcb2595a5e6b359b5609320726326f8a069e5" gracePeriod=600 Jan 31 08:23:28 crc kubenswrapper[4826]: I0131 08:23:28.173410 4826 generic.go:334] "Generic (PLEG): container finished" podID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerID="de646e7eb20e44a926cca255169dcb2595a5e6b359b5609320726326f8a069e5" exitCode=0 Jan 31 08:23:28 crc kubenswrapper[4826]: I0131 08:23:28.173501 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" event={"ID":"ed10f53b-565a-4d14-a1d8-feabc15f08ea","Type":"ContainerDied","Data":"de646e7eb20e44a926cca255169dcb2595a5e6b359b5609320726326f8a069e5"} Jan 31 08:23:28 crc kubenswrapper[4826]: I0131 08:23:28.174006 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" event={"ID":"ed10f53b-565a-4d14-a1d8-feabc15f08ea","Type":"ContainerStarted","Data":"97a91a043dac6fca9c2206d6ce242261d9f98e6fce522f81be863d2f1501deea"} Jan 31 08:23:28 crc kubenswrapper[4826]: I0131 08:23:28.174036 4826 scope.go:117] "RemoveContainer" containerID="4429222bf3d4f569e9c41884c938deefe5eb279c0a3297984f8955236857e1a8" Jan 31 08:24:07 crc kubenswrapper[4826]: I0131 08:24:07.531362 4826 generic.go:334] "Generic (PLEG): container finished" podID="e0375a71-69b8-4909-b359-f6c66a475f79" containerID="d20ffd474df2761887fa581c9f6aee96eb209ec0d30486d059df1a7bf9ca4ff2" exitCode=0 Jan 31 08:24:07 crc kubenswrapper[4826]: I0131 08:24:07.531873 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r" event={"ID":"e0375a71-69b8-4909-b359-f6c66a475f79","Type":"ContainerDied","Data":"d20ffd474df2761887fa581c9f6aee96eb209ec0d30486d059df1a7bf9ca4ff2"} Jan 31 08:24:08 crc kubenswrapper[4826]: I0131 08:24:08.986819 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r" Jan 31 08:24:09 crc kubenswrapper[4826]: I0131 08:24:09.173279 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/e0375a71-69b8-4909-b359-f6c66a475f79-ceph-nova-0\") pod \"e0375a71-69b8-4909-b359-f6c66a475f79\" (UID: \"e0375a71-69b8-4909-b359-f6c66a475f79\") " Jan 31 08:24:09 crc kubenswrapper[4826]: I0131 08:24:09.173361 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/e0375a71-69b8-4909-b359-f6c66a475f79-nova-cell1-compute-config-0\") pod \"e0375a71-69b8-4909-b359-f6c66a475f79\" (UID: \"e0375a71-69b8-4909-b359-f6c66a475f79\") " Jan 31 08:24:09 crc kubenswrapper[4826]: I0131 08:24:09.173414 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e0375a71-69b8-4909-b359-f6c66a475f79-ssh-key-openstack-edpm-ipam\") pod \"e0375a71-69b8-4909-b359-f6c66a475f79\" (UID: \"e0375a71-69b8-4909-b359-f6c66a475f79\") " Jan 31 08:24:09 crc kubenswrapper[4826]: I0131 08:24:09.173455 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/e0375a71-69b8-4909-b359-f6c66a475f79-nova-extra-config-0\") pod \"e0375a71-69b8-4909-b359-f6c66a475f79\" (UID: \"e0375a71-69b8-4909-b359-f6c66a475f79\") " Jan 31 08:24:09 crc kubenswrapper[4826]: I0131 08:24:09.173478 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxkbm\" (UniqueName: \"kubernetes.io/projected/e0375a71-69b8-4909-b359-f6c66a475f79-kube-api-access-mxkbm\") pod \"e0375a71-69b8-4909-b359-f6c66a475f79\" (UID: \"e0375a71-69b8-4909-b359-f6c66a475f79\") " Jan 31 08:24:09 crc kubenswrapper[4826]: I0131 08:24:09.173505 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/e0375a71-69b8-4909-b359-f6c66a475f79-nova-migration-ssh-key-0\") pod \"e0375a71-69b8-4909-b359-f6c66a475f79\" (UID: \"e0375a71-69b8-4909-b359-f6c66a475f79\") " Jan 31 08:24:09 crc kubenswrapper[4826]: I0131 08:24:09.173560 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/e0375a71-69b8-4909-b359-f6c66a475f79-nova-cell1-compute-config-1\") pod \"e0375a71-69b8-4909-b359-f6c66a475f79\" (UID: \"e0375a71-69b8-4909-b359-f6c66a475f79\") " Jan 31 08:24:09 crc kubenswrapper[4826]: I0131 08:24:09.173604 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0375a71-69b8-4909-b359-f6c66a475f79-inventory\") pod \"e0375a71-69b8-4909-b359-f6c66a475f79\" (UID: \"e0375a71-69b8-4909-b359-f6c66a475f79\") " Jan 31 08:24:09 crc kubenswrapper[4826]: I0131 08:24:09.173628 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/e0375a71-69b8-4909-b359-f6c66a475f79-nova-migration-ssh-key-1\") pod \"e0375a71-69b8-4909-b359-f6c66a475f79\" (UID: \"e0375a71-69b8-4909-b359-f6c66a475f79\") " Jan 31 08:24:09 crc kubenswrapper[4826]: I0131 08:24:09.173740 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" 
(UniqueName: \"kubernetes.io/secret/e0375a71-69b8-4909-b359-f6c66a475f79-ceph\") pod \"e0375a71-69b8-4909-b359-f6c66a475f79\" (UID: \"e0375a71-69b8-4909-b359-f6c66a475f79\") " Jan 31 08:24:09 crc kubenswrapper[4826]: I0131 08:24:09.173778 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0375a71-69b8-4909-b359-f6c66a475f79-nova-custom-ceph-combined-ca-bundle\") pod \"e0375a71-69b8-4909-b359-f6c66a475f79\" (UID: \"e0375a71-69b8-4909-b359-f6c66a475f79\") " Jan 31 08:24:09 crc kubenswrapper[4826]: I0131 08:24:09.560193 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r" event={"ID":"e0375a71-69b8-4909-b359-f6c66a475f79","Type":"ContainerDied","Data":"bba059fea3c3b502f884cb10bcf272a26bcce3b431a7a9cc5e9a014d350e00b1"} Jan 31 08:24:09 crc kubenswrapper[4826]: I0131 08:24:09.560745 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bba059fea3c3b502f884cb10bcf272a26bcce3b431a7a9cc5e9a014d350e00b1" Jan 31 08:24:09 crc kubenswrapper[4826]: I0131 08:24:09.560281 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r" Jan 31 08:24:09 crc kubenswrapper[4826]: I0131 08:24:09.733429 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0375a71-69b8-4909-b359-f6c66a475f79-ceph-nova-0" (OuterVolumeSpecName: "ceph-nova-0") pod "e0375a71-69b8-4909-b359-f6c66a475f79" (UID: "e0375a71-69b8-4909-b359-f6c66a475f79"). InnerVolumeSpecName "ceph-nova-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 08:24:09 crc kubenswrapper[4826]: I0131 08:24:09.733996 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0375a71-69b8-4909-b359-f6c66a475f79-nova-custom-ceph-combined-ca-bundle" (OuterVolumeSpecName: "nova-custom-ceph-combined-ca-bundle") pod "e0375a71-69b8-4909-b359-f6c66a475f79" (UID: "e0375a71-69b8-4909-b359-f6c66a475f79"). InnerVolumeSpecName "nova-custom-ceph-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:24:09 crc kubenswrapper[4826]: I0131 08:24:09.734051 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0375a71-69b8-4909-b359-f6c66a475f79-ceph" (OuterVolumeSpecName: "ceph") pod "e0375a71-69b8-4909-b359-f6c66a475f79" (UID: "e0375a71-69b8-4909-b359-f6c66a475f79"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:24:09 crc kubenswrapper[4826]: I0131 08:24:09.734070 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0375a71-69b8-4909-b359-f6c66a475f79-kube-api-access-mxkbm" (OuterVolumeSpecName: "kube-api-access-mxkbm") pod "e0375a71-69b8-4909-b359-f6c66a475f79" (UID: "e0375a71-69b8-4909-b359-f6c66a475f79"). InnerVolumeSpecName "kube-api-access-mxkbm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:24:09 crc kubenswrapper[4826]: I0131 08:24:09.737725 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0375a71-69b8-4909-b359-f6c66a475f79-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "e0375a71-69b8-4909-b359-f6c66a475f79" (UID: "e0375a71-69b8-4909-b359-f6c66a475f79"). InnerVolumeSpecName "nova-extra-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 08:24:09 crc kubenswrapper[4826]: I0131 08:24:09.737832 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0375a71-69b8-4909-b359-f6c66a475f79-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "e0375a71-69b8-4909-b359-f6c66a475f79" (UID: "e0375a71-69b8-4909-b359-f6c66a475f79"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:24:09 crc kubenswrapper[4826]: I0131 08:24:09.738428 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0375a71-69b8-4909-b359-f6c66a475f79-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "e0375a71-69b8-4909-b359-f6c66a475f79" (UID: "e0375a71-69b8-4909-b359-f6c66a475f79"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:24:09 crc kubenswrapper[4826]: I0131 08:24:09.739071 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0375a71-69b8-4909-b359-f6c66a475f79-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "e0375a71-69b8-4909-b359-f6c66a475f79" (UID: "e0375a71-69b8-4909-b359-f6c66a475f79"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:24:09 crc kubenswrapper[4826]: I0131 08:24:09.741367 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0375a71-69b8-4909-b359-f6c66a475f79-inventory" (OuterVolumeSpecName: "inventory") pod "e0375a71-69b8-4909-b359-f6c66a475f79" (UID: "e0375a71-69b8-4909-b359-f6c66a475f79"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:24:09 crc kubenswrapper[4826]: I0131 08:24:09.741388 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0375a71-69b8-4909-b359-f6c66a475f79-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "e0375a71-69b8-4909-b359-f6c66a475f79" (UID: "e0375a71-69b8-4909-b359-f6c66a475f79"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:24:09 crc kubenswrapper[4826]: I0131 08:24:09.742019 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0375a71-69b8-4909-b359-f6c66a475f79-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e0375a71-69b8-4909-b359-f6c66a475f79" (UID: "e0375a71-69b8-4909-b359-f6c66a475f79"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:24:09 crc kubenswrapper[4826]: I0131 08:24:09.785157 4826 reconciler_common.go:293] "Volume detached for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0375a71-69b8-4909-b359-f6c66a475f79-nova-custom-ceph-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 08:24:09 crc kubenswrapper[4826]: I0131 08:24:09.785199 4826 reconciler_common.go:293] "Volume detached for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/e0375a71-69b8-4909-b359-f6c66a475f79-ceph-nova-0\") on node \"crc\" DevicePath \"\"" Jan 31 08:24:09 crc kubenswrapper[4826]: I0131 08:24:09.785215 4826 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/e0375a71-69b8-4909-b359-f6c66a475f79-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 31 08:24:09 crc kubenswrapper[4826]: I0131 08:24:09.785230 4826 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e0375a71-69b8-4909-b359-f6c66a475f79-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 08:24:09 crc kubenswrapper[4826]: I0131 08:24:09.785244 4826 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/e0375a71-69b8-4909-b359-f6c66a475f79-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 31 08:24:09 crc kubenswrapper[4826]: I0131 08:24:09.785255 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxkbm\" (UniqueName: \"kubernetes.io/projected/e0375a71-69b8-4909-b359-f6c66a475f79-kube-api-access-mxkbm\") on node \"crc\" DevicePath \"\"" Jan 31 08:24:09 crc kubenswrapper[4826]: I0131 08:24:09.785266 4826 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/e0375a71-69b8-4909-b359-f6c66a475f79-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 31 08:24:09 crc kubenswrapper[4826]: I0131 08:24:09.785278 4826 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/e0375a71-69b8-4909-b359-f6c66a475f79-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 31 08:24:09 crc kubenswrapper[4826]: I0131 08:24:09.785292 4826 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0375a71-69b8-4909-b359-f6c66a475f79-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 08:24:09 crc kubenswrapper[4826]: I0131 08:24:09.785303 4826 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/e0375a71-69b8-4909-b359-f6c66a475f79-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 31 08:24:09 crc kubenswrapper[4826]: I0131 08:24:09.785314 4826 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e0375a71-69b8-4909-b359-f6c66a475f79-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 08:24:23 crc kubenswrapper[4826]: I0131 08:24:23.819287 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 31 08:24:23 crc kubenswrapper[4826]: E0131 08:24:23.820135 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0375a71-69b8-4909-b359-f6c66a475f79" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Jan 31 08:24:23 crc 
kubenswrapper[4826]: I0131 08:24:23.820150 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0375a71-69b8-4909-b359-f6c66a475f79" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Jan 31 08:24:23 crc kubenswrapper[4826]: I0131 08:24:23.820313 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0375a71-69b8-4909-b359-f6c66a475f79" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Jan 31 08:24:23 crc kubenswrapper[4826]: I0131 08:24:23.821226 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Jan 31 08:24:23 crc kubenswrapper[4826]: I0131 08:24:23.822711 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 08:24:23 crc kubenswrapper[4826]: I0131 08:24:23.823501 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Jan 31 08:24:23 crc kubenswrapper[4826]: I0131 08:24:23.832015 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 31 08:24:23 crc kubenswrapper[4826]: I0131 08:24:23.899610 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Jan 31 08:24:23 crc kubenswrapper[4826]: I0131 08:24:23.901245 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Jan 31 08:24:23 crc kubenswrapper[4826]: I0131 08:24:23.903534 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Jan 31 08:24:23 crc kubenswrapper[4826]: I0131 08:24:23.912111 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 31 08:24:23 crc kubenswrapper[4826]: I0131 08:24:23.944545 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4319e2c7-04a1-4612-8efe-c656be3fd234-run\") pod \"cinder-volume-volume1-0\" (UID: \"4319e2c7-04a1-4612-8efe-c656be3fd234\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:24:23 crc kubenswrapper[4826]: I0131 08:24:23.944622 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/4319e2c7-04a1-4612-8efe-c656be3fd234-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"4319e2c7-04a1-4612-8efe-c656be3fd234\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:24:23 crc kubenswrapper[4826]: I0131 08:24:23.944652 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4319e2c7-04a1-4612-8efe-c656be3fd234-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"4319e2c7-04a1-4612-8efe-c656be3fd234\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:24:23 crc kubenswrapper[4826]: I0131 08:24:23.944690 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/4319e2c7-04a1-4612-8efe-c656be3fd234-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"4319e2c7-04a1-4612-8efe-c656be3fd234\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:24:23 crc kubenswrapper[4826]: I0131 08:24:23.944761 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/4319e2c7-04a1-4612-8efe-c656be3fd234-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"4319e2c7-04a1-4612-8efe-c656be3fd234\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:24:23 crc kubenswrapper[4826]: I0131 08:24:23.945620 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4319e2c7-04a1-4612-8efe-c656be3fd234-sys\") pod \"cinder-volume-volume1-0\" (UID: \"4319e2c7-04a1-4612-8efe-c656be3fd234\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:24:23 crc kubenswrapper[4826]: I0131 08:24:23.945673 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/4319e2c7-04a1-4612-8efe-c656be3fd234-dev\") pod \"cinder-volume-volume1-0\" (UID: \"4319e2c7-04a1-4612-8efe-c656be3fd234\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:24:23 crc kubenswrapper[4826]: I0131 08:24:23.945727 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/4319e2c7-04a1-4612-8efe-c656be3fd234-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"4319e2c7-04a1-4612-8efe-c656be3fd234\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:24:23 crc kubenswrapper[4826]: I0131 08:24:23.945797 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/4319e2c7-04a1-4612-8efe-c656be3fd234-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"4319e2c7-04a1-4612-8efe-c656be3fd234\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:24:23 crc kubenswrapper[4826]: I0131 08:24:23.945837 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4319e2c7-04a1-4612-8efe-c656be3fd234-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"4319e2c7-04a1-4612-8efe-c656be3fd234\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:24:23 crc kubenswrapper[4826]: I0131 08:24:23.945986 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4319e2c7-04a1-4612-8efe-c656be3fd234-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"4319e2c7-04a1-4612-8efe-c656be3fd234\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:24:23 crc kubenswrapper[4826]: I0131 08:24:23.946006 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4319e2c7-04a1-4612-8efe-c656be3fd234-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"4319e2c7-04a1-4612-8efe-c656be3fd234\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:24:23 crc kubenswrapper[4826]: I0131 08:24:23.946067 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/4319e2c7-04a1-4612-8efe-c656be3fd234-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"4319e2c7-04a1-4612-8efe-c656be3fd234\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:24:23 crc kubenswrapper[4826]: I0131 08:24:23.946098 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4319e2c7-04a1-4612-8efe-c656be3fd234-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"4319e2c7-04a1-4612-8efe-c656be3fd234\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:24:23 crc kubenswrapper[4826]: I0131 08:24:23.946115 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlr7m\" (UniqueName: \"kubernetes.io/projected/4319e2c7-04a1-4612-8efe-c656be3fd234-kube-api-access-dlr7m\") pod \"cinder-volume-volume1-0\" (UID: \"4319e2c7-04a1-4612-8efe-c656be3fd234\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:24:23 crc kubenswrapper[4826]: I0131 08:24:23.946189 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4319e2c7-04a1-4612-8efe-c656be3fd234-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"4319e2c7-04a1-4612-8efe-c656be3fd234\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.049290 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4319e2c7-04a1-4612-8efe-c656be3fd234-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"4319e2c7-04a1-4612-8efe-c656be3fd234\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.049343 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421-etc-nvme\") pod \"cinder-backup-0\" (UID: \"aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421\") " pod="openstack/cinder-backup-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.049365 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421-ceph\") pod \"cinder-backup-0\" (UID: \"aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421\") " pod="openstack/cinder-backup-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.049382 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421-lib-modules\") pod \"cinder-backup-0\" (UID: \"aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421\") " pod="openstack/cinder-backup-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.049401 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421\") " pod="openstack/cinder-backup-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.049429 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421-config-data-custom\") pod \"cinder-backup-0\" (UID: \"aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421\") " pod="openstack/cinder-backup-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.049449 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4319e2c7-04a1-4612-8efe-c656be3fd234-run\") pod \"cinder-volume-volume1-0\" (UID: 
\"4319e2c7-04a1-4612-8efe-c656be3fd234\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.049465 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421\") " pod="openstack/cinder-backup-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.049485 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4319e2c7-04a1-4612-8efe-c656be3fd234-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"4319e2c7-04a1-4612-8efe-c656be3fd234\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.049499 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421\") " pod="openstack/cinder-backup-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.049514 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/4319e2c7-04a1-4612-8efe-c656be3fd234-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"4319e2c7-04a1-4612-8efe-c656be3fd234\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.049538 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/4319e2c7-04a1-4612-8efe-c656be3fd234-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"4319e2c7-04a1-4612-8efe-c656be3fd234\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.049555 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421\") " pod="openstack/cinder-backup-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.049583 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4319e2c7-04a1-4612-8efe-c656be3fd234-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"4319e2c7-04a1-4612-8efe-c656be3fd234\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.049602 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421-dev\") pod \"cinder-backup-0\" (UID: \"aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421\") " pod="openstack/cinder-backup-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.049629 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4319e2c7-04a1-4612-8efe-c656be3fd234-sys\") pod \"cinder-volume-volume1-0\" (UID: \"4319e2c7-04a1-4612-8efe-c656be3fd234\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.049647 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-nkbdp\" (UniqueName: \"kubernetes.io/projected/aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421-kube-api-access-nkbdp\") pod \"cinder-backup-0\" (UID: \"aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421\") " pod="openstack/cinder-backup-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.049666 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/4319e2c7-04a1-4612-8efe-c656be3fd234-dev\") pod \"cinder-volume-volume1-0\" (UID: \"4319e2c7-04a1-4612-8efe-c656be3fd234\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.049682 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421-run\") pod \"cinder-backup-0\" (UID: \"aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421\") " pod="openstack/cinder-backup-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.049700 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/4319e2c7-04a1-4612-8efe-c656be3fd234-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"4319e2c7-04a1-4612-8efe-c656be3fd234\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.049725 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421-config-data\") pod \"cinder-backup-0\" (UID: \"aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421\") " pod="openstack/cinder-backup-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.049748 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/4319e2c7-04a1-4612-8efe-c656be3fd234-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"4319e2c7-04a1-4612-8efe-c656be3fd234\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.049765 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4319e2c7-04a1-4612-8efe-c656be3fd234-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"4319e2c7-04a1-4612-8efe-c656be3fd234\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.049798 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421-scripts\") pod \"cinder-backup-0\" (UID: \"aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421\") " pod="openstack/cinder-backup-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.049819 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4319e2c7-04a1-4612-8efe-c656be3fd234-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"4319e2c7-04a1-4612-8efe-c656be3fd234\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.049833 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4319e2c7-04a1-4612-8efe-c656be3fd234-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"4319e2c7-04a1-4612-8efe-c656be3fd234\") " 
pod="openstack/cinder-volume-volume1-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.049846 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421-sys\") pod \"cinder-backup-0\" (UID: \"aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421\") " pod="openstack/cinder-backup-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.049867 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421\") " pod="openstack/cinder-backup-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.049891 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/4319e2c7-04a1-4612-8efe-c656be3fd234-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"4319e2c7-04a1-4612-8efe-c656be3fd234\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.049908 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421\") " pod="openstack/cinder-backup-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.049925 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4319e2c7-04a1-4612-8efe-c656be3fd234-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"4319e2c7-04a1-4612-8efe-c656be3fd234\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.049942 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlr7m\" (UniqueName: \"kubernetes.io/projected/4319e2c7-04a1-4612-8efe-c656be3fd234-kube-api-access-dlr7m\") pod \"cinder-volume-volume1-0\" (UID: \"4319e2c7-04a1-4612-8efe-c656be3fd234\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.050280 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4319e2c7-04a1-4612-8efe-c656be3fd234-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"4319e2c7-04a1-4612-8efe-c656be3fd234\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.050329 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4319e2c7-04a1-4612-8efe-c656be3fd234-run\") pod \"cinder-volume-volume1-0\" (UID: \"4319e2c7-04a1-4612-8efe-c656be3fd234\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.050903 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/4319e2c7-04a1-4612-8efe-c656be3fd234-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"4319e2c7-04a1-4612-8efe-c656be3fd234\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.051068 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/4319e2c7-04a1-4612-8efe-c656be3fd234-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"4319e2c7-04a1-4612-8efe-c656be3fd234\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.051195 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4319e2c7-04a1-4612-8efe-c656be3fd234-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"4319e2c7-04a1-4612-8efe-c656be3fd234\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.051238 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4319e2c7-04a1-4612-8efe-c656be3fd234-sys\") pod \"cinder-volume-volume1-0\" (UID: \"4319e2c7-04a1-4612-8efe-c656be3fd234\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.051265 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/4319e2c7-04a1-4612-8efe-c656be3fd234-dev\") pod \"cinder-volume-volume1-0\" (UID: \"4319e2c7-04a1-4612-8efe-c656be3fd234\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.051287 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/4319e2c7-04a1-4612-8efe-c656be3fd234-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"4319e2c7-04a1-4612-8efe-c656be3fd234\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.051441 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/4319e2c7-04a1-4612-8efe-c656be3fd234-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"4319e2c7-04a1-4612-8efe-c656be3fd234\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.051516 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/4319e2c7-04a1-4612-8efe-c656be3fd234-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"4319e2c7-04a1-4612-8efe-c656be3fd234\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.058858 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4319e2c7-04a1-4612-8efe-c656be3fd234-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"4319e2c7-04a1-4612-8efe-c656be3fd234\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.059338 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4319e2c7-04a1-4612-8efe-c656be3fd234-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"4319e2c7-04a1-4612-8efe-c656be3fd234\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.060619 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4319e2c7-04a1-4612-8efe-c656be3fd234-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"4319e2c7-04a1-4612-8efe-c656be3fd234\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 
08:24:24.068708 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4319e2c7-04a1-4612-8efe-c656be3fd234-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"4319e2c7-04a1-4612-8efe-c656be3fd234\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.075799 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlr7m\" (UniqueName: \"kubernetes.io/projected/4319e2c7-04a1-4612-8efe-c656be3fd234-kube-api-access-dlr7m\") pod \"cinder-volume-volume1-0\" (UID: \"4319e2c7-04a1-4612-8efe-c656be3fd234\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.082473 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4319e2c7-04a1-4612-8efe-c656be3fd234-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"4319e2c7-04a1-4612-8efe-c656be3fd234\") " pod="openstack/cinder-volume-volume1-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.151727 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421\") " pod="openstack/cinder-backup-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.152312 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421\") " pod="openstack/cinder-backup-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.152362 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421\") " pod="openstack/cinder-backup-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.152411 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421-dev\") pod \"cinder-backup-0\" (UID: \"aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421\") " pod="openstack/cinder-backup-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.152447 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkbdp\" (UniqueName: \"kubernetes.io/projected/aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421-kube-api-access-nkbdp\") pod \"cinder-backup-0\" (UID: \"aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421\") " pod="openstack/cinder-backup-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.152468 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421-run\") pod \"cinder-backup-0\" (UID: \"aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421\") " pod="openstack/cinder-backup-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.152485 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421-config-data\") pod \"cinder-backup-0\" (UID: \"aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421\") " 
pod="openstack/cinder-backup-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.152525 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421-scripts\") pod \"cinder-backup-0\" (UID: \"aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421\") " pod="openstack/cinder-backup-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.152544 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421-sys\") pod \"cinder-backup-0\" (UID: \"aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421\") " pod="openstack/cinder-backup-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.152566 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421\") " pod="openstack/cinder-backup-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.152567 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421-dev\") pod \"cinder-backup-0\" (UID: \"aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421\") " pod="openstack/cinder-backup-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.152602 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421\") " pod="openstack/cinder-backup-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.152593 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421\") " pod="openstack/cinder-backup-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.152576 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421-run\") pod \"cinder-backup-0\" (UID: \"aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421\") " pod="openstack/cinder-backup-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.152622 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421\") " pod="openstack/cinder-backup-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.152692 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421-sys\") pod \"cinder-backup-0\" (UID: \"aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421\") " pod="openstack/cinder-backup-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.152631 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421\") " pod="openstack/cinder-backup-0" Jan 31 08:24:24 crc kubenswrapper[4826]: 
I0131 08:24:24.152750 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421\") " pod="openstack/cinder-backup-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.152825 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421-etc-nvme\") pod \"cinder-backup-0\" (UID: \"aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421\") " pod="openstack/cinder-backup-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.152851 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421-ceph\") pod \"cinder-backup-0\" (UID: \"aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421\") " pod="openstack/cinder-backup-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.152877 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421-lib-modules\") pod \"cinder-backup-0\" (UID: \"aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421\") " pod="openstack/cinder-backup-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.152910 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421\") " pod="openstack/cinder-backup-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.152952 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421-config-data-custom\") pod \"cinder-backup-0\" (UID: \"aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421\") " pod="openstack/cinder-backup-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.152985 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421-etc-nvme\") pod \"cinder-backup-0\" (UID: \"aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421\") " pod="openstack/cinder-backup-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.153094 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421-lib-modules\") pod \"cinder-backup-0\" (UID: \"aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421\") " pod="openstack/cinder-backup-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.153116 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421\") " pod="openstack/cinder-backup-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.156053 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421-scripts\") pod \"cinder-backup-0\" (UID: \"aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421\") " pod="openstack/cinder-backup-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.156154 4826 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421\") " pod="openstack/cinder-backup-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.156621 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421-config-data-custom\") pod \"cinder-backup-0\" (UID: \"aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421\") " pod="openstack/cinder-backup-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.156814 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421-ceph\") pod \"cinder-backup-0\" (UID: \"aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421\") " pod="openstack/cinder-backup-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.157068 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.157751 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421-config-data\") pod \"cinder-backup-0\" (UID: \"aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421\") " pod="openstack/cinder-backup-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.177327 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkbdp\" (UniqueName: \"kubernetes.io/projected/aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421-kube-api-access-nkbdp\") pod \"cinder-backup-0\" (UID: \"aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421\") " pod="openstack/cinder-backup-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.217632 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.328658 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-6kb8c"] Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.330151 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-6kb8c" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.339217 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-6kb8c"] Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.425563 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-1c25-account-create-update-xsw6h"] Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.427223 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-1c25-account-create-update-xsw6h" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.430959 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.436318 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-1c25-account-create-update-xsw6h"] Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.462068 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cwnc\" (UniqueName: \"kubernetes.io/projected/d22c6c41-3d9d-4e39-b13c-c95542716ed2-kube-api-access-9cwnc\") pod \"manila-db-create-6kb8c\" (UID: \"d22c6c41-3d9d-4e39-b13c-c95542716ed2\") " pod="openstack/manila-db-create-6kb8c" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.462159 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d22c6c41-3d9d-4e39-b13c-c95542716ed2-operator-scripts\") pod \"manila-db-create-6kb8c\" (UID: \"d22c6c41-3d9d-4e39-b13c-c95542716ed2\") " pod="openstack/manila-db-create-6kb8c" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.564023 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d22c6c41-3d9d-4e39-b13c-c95542716ed2-operator-scripts\") pod \"manila-db-create-6kb8c\" (UID: \"d22c6c41-3d9d-4e39-b13c-c95542716ed2\") " pod="openstack/manila-db-create-6kb8c" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.564393 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cwnc\" (UniqueName: \"kubernetes.io/projected/d22c6c41-3d9d-4e39-b13c-c95542716ed2-kube-api-access-9cwnc\") pod \"manila-db-create-6kb8c\" (UID: \"d22c6c41-3d9d-4e39-b13c-c95542716ed2\") " pod="openstack/manila-db-create-6kb8c" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.564508 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c6e6e62-07ea-463d-9b6e-b7980b0c51b1-operator-scripts\") pod \"manila-1c25-account-create-update-xsw6h\" (UID: \"4c6e6e62-07ea-463d-9b6e-b7980b0c51b1\") " pod="openstack/manila-1c25-account-create-update-xsw6h" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.564538 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwd9p\" (UniqueName: \"kubernetes.io/projected/4c6e6e62-07ea-463d-9b6e-b7980b0c51b1-kube-api-access-hwd9p\") pod \"manila-1c25-account-create-update-xsw6h\" (UID: \"4c6e6e62-07ea-463d-9b6e-b7980b0c51b1\") " pod="openstack/manila-1c25-account-create-update-xsw6h" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.564808 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d22c6c41-3d9d-4e39-b13c-c95542716ed2-operator-scripts\") pod \"manila-db-create-6kb8c\" (UID: \"d22c6c41-3d9d-4e39-b13c-c95542716ed2\") " pod="openstack/manila-db-create-6kb8c" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.586842 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cwnc\" (UniqueName: \"kubernetes.io/projected/d22c6c41-3d9d-4e39-b13c-c95542716ed2-kube-api-access-9cwnc\") pod \"manila-db-create-6kb8c\" (UID: 
\"d22c6c41-3d9d-4e39-b13c-c95542716ed2\") " pod="openstack/manila-db-create-6kb8c" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.655039 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-6kb8c" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.666516 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c6e6e62-07ea-463d-9b6e-b7980b0c51b1-operator-scripts\") pod \"manila-1c25-account-create-update-xsw6h\" (UID: \"4c6e6e62-07ea-463d-9b6e-b7980b0c51b1\") " pod="openstack/manila-1c25-account-create-update-xsw6h" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.666565 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwd9p\" (UniqueName: \"kubernetes.io/projected/4c6e6e62-07ea-463d-9b6e-b7980b0c51b1-kube-api-access-hwd9p\") pod \"manila-1c25-account-create-update-xsw6h\" (UID: \"4c6e6e62-07ea-463d-9b6e-b7980b0c51b1\") " pod="openstack/manila-1c25-account-create-update-xsw6h" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.667295 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c6e6e62-07ea-463d-9b6e-b7980b0c51b1-operator-scripts\") pod \"manila-1c25-account-create-update-xsw6h\" (UID: \"4c6e6e62-07ea-463d-9b6e-b7980b0c51b1\") " pod="openstack/manila-1c25-account-create-update-xsw6h" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.684812 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwd9p\" (UniqueName: \"kubernetes.io/projected/4c6e6e62-07ea-463d-9b6e-b7980b0c51b1-kube-api-access-hwd9p\") pod \"manila-1c25-account-create-update-xsw6h\" (UID: \"4c6e6e62-07ea-463d-9b6e-b7980b0c51b1\") " pod="openstack/manila-1c25-account-create-update-xsw6h" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.702080 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.703955 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.706762 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.706931 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.707157 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.707264 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-vxvqq" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.714206 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.750782 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-1c25-account-create-update-xsw6h" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.765600 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.767424 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.771900 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36fba9c8-da36-4b64-91f2-ff747c20bee6-logs\") pod \"glance-default-external-api-0\" (UID: \"36fba9c8-da36-4b64-91f2-ff747c20bee6\") " pod="openstack/glance-default-external-api-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.772099 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/36fba9c8-da36-4b64-91f2-ff747c20bee6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"36fba9c8-da36-4b64-91f2-ff747c20bee6\") " pod="openstack/glance-default-external-api-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.772236 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36fba9c8-da36-4b64-91f2-ff747c20bee6-config-data\") pod \"glance-default-external-api-0\" (UID: \"36fba9c8-da36-4b64-91f2-ff747c20bee6\") " pod="openstack/glance-default-external-api-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.772303 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/36fba9c8-da36-4b64-91f2-ff747c20bee6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"36fba9c8-da36-4b64-91f2-ff747c20bee6\") " pod="openstack/glance-default-external-api-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.772351 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36fba9c8-da36-4b64-91f2-ff747c20bee6-scripts\") pod \"glance-default-external-api-0\" (UID: \"36fba9c8-da36-4b64-91f2-ff747c20bee6\") " pod="openstack/glance-default-external-api-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.772429 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36fba9c8-da36-4b64-91f2-ff747c20bee6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"36fba9c8-da36-4b64-91f2-ff747c20bee6\") " pod="openstack/glance-default-external-api-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.772477 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/36fba9c8-da36-4b64-91f2-ff747c20bee6-ceph\") pod \"glance-default-external-api-0\" (UID: \"36fba9c8-da36-4b64-91f2-ff747c20bee6\") " pod="openstack/glance-default-external-api-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.772504 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngntl\" (UniqueName: \"kubernetes.io/projected/36fba9c8-da36-4b64-91f2-ff747c20bee6-kube-api-access-ngntl\") pod 
\"glance-default-external-api-0\" (UID: \"36fba9c8-da36-4b64-91f2-ff747c20bee6\") " pod="openstack/glance-default-external-api-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.772641 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"36fba9c8-da36-4b64-91f2-ff747c20bee6\") " pod="openstack/glance-default-external-api-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.779697 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.780928 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.797526 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.865596 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.879940 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36fba9c8-da36-4b64-91f2-ff747c20bee6-logs\") pod \"glance-default-external-api-0\" (UID: \"36fba9c8-da36-4b64-91f2-ff747c20bee6\") " pod="openstack/glance-default-external-api-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.880038 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"403c10ff-88fa-4845-aaed-36ccc5cf9dd2\") " pod="openstack/glance-default-internal-api-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.880063 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/403c10ff-88fa-4845-aaed-36ccc5cf9dd2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"403c10ff-88fa-4845-aaed-36ccc5cf9dd2\") " pod="openstack/glance-default-internal-api-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.880112 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/36fba9c8-da36-4b64-91f2-ff747c20bee6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"36fba9c8-da36-4b64-91f2-ff747c20bee6\") " pod="openstack/glance-default-external-api-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.880133 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/403c10ff-88fa-4845-aaed-36ccc5cf9dd2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"403c10ff-88fa-4845-aaed-36ccc5cf9dd2\") " pod="openstack/glance-default-internal-api-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.880157 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/403c10ff-88fa-4845-aaed-36ccc5cf9dd2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"403c10ff-88fa-4845-aaed-36ccc5cf9dd2\") " 
pod="openstack/glance-default-internal-api-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.880222 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36fba9c8-da36-4b64-91f2-ff747c20bee6-config-data\") pod \"glance-default-external-api-0\" (UID: \"36fba9c8-da36-4b64-91f2-ff747c20bee6\") " pod="openstack/glance-default-external-api-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.880258 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/403c10ff-88fa-4845-aaed-36ccc5cf9dd2-logs\") pod \"glance-default-internal-api-0\" (UID: \"403c10ff-88fa-4845-aaed-36ccc5cf9dd2\") " pod="openstack/glance-default-internal-api-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.880285 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/36fba9c8-da36-4b64-91f2-ff747c20bee6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"36fba9c8-da36-4b64-91f2-ff747c20bee6\") " pod="openstack/glance-default-external-api-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.880304 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/403c10ff-88fa-4845-aaed-36ccc5cf9dd2-ceph\") pod \"glance-default-internal-api-0\" (UID: \"403c10ff-88fa-4845-aaed-36ccc5cf9dd2\") " pod="openstack/glance-default-internal-api-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.880329 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36fba9c8-da36-4b64-91f2-ff747c20bee6-scripts\") pod \"glance-default-external-api-0\" (UID: \"36fba9c8-da36-4b64-91f2-ff747c20bee6\") " pod="openstack/glance-default-external-api-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.880356 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/403c10ff-88fa-4845-aaed-36ccc5cf9dd2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"403c10ff-88fa-4845-aaed-36ccc5cf9dd2\") " pod="openstack/glance-default-internal-api-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.880411 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36fba9c8-da36-4b64-91f2-ff747c20bee6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"36fba9c8-da36-4b64-91f2-ff747c20bee6\") " pod="openstack/glance-default-external-api-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.880445 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/36fba9c8-da36-4b64-91f2-ff747c20bee6-ceph\") pod \"glance-default-external-api-0\" (UID: \"36fba9c8-da36-4b64-91f2-ff747c20bee6\") " pod="openstack/glance-default-external-api-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.880464 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngntl\" (UniqueName: \"kubernetes.io/projected/36fba9c8-da36-4b64-91f2-ff747c20bee6-kube-api-access-ngntl\") pod \"glance-default-external-api-0\" (UID: \"36fba9c8-da36-4b64-91f2-ff747c20bee6\") " pod="openstack/glance-default-external-api-0" Jan 
31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.880487 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b2rp\" (UniqueName: \"kubernetes.io/projected/403c10ff-88fa-4845-aaed-36ccc5cf9dd2-kube-api-access-8b2rp\") pod \"glance-default-internal-api-0\" (UID: \"403c10ff-88fa-4845-aaed-36ccc5cf9dd2\") " pod="openstack/glance-default-internal-api-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.880522 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/403c10ff-88fa-4845-aaed-36ccc5cf9dd2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"403c10ff-88fa-4845-aaed-36ccc5cf9dd2\") " pod="openstack/glance-default-internal-api-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.880581 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"36fba9c8-da36-4b64-91f2-ff747c20bee6\") " pod="openstack/glance-default-external-api-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.881015 4826 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"36fba9c8-da36-4b64-91f2-ff747c20bee6\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.882649 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/36fba9c8-da36-4b64-91f2-ff747c20bee6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"36fba9c8-da36-4b64-91f2-ff747c20bee6\") " pod="openstack/glance-default-external-api-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.882954 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36fba9c8-da36-4b64-91f2-ff747c20bee6-logs\") pod \"glance-default-external-api-0\" (UID: \"36fba9c8-da36-4b64-91f2-ff747c20bee6\") " pod="openstack/glance-default-external-api-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.888820 4826 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.894286 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36fba9c8-da36-4b64-91f2-ff747c20bee6-scripts\") pod \"glance-default-external-api-0\" (UID: \"36fba9c8-da36-4b64-91f2-ff747c20bee6\") " pod="openstack/glance-default-external-api-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.894381 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36fba9c8-da36-4b64-91f2-ff747c20bee6-config-data\") pod \"glance-default-external-api-0\" (UID: \"36fba9c8-da36-4b64-91f2-ff747c20bee6\") " pod="openstack/glance-default-external-api-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.894892 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/36fba9c8-da36-4b64-91f2-ff747c20bee6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: 
\"36fba9c8-da36-4b64-91f2-ff747c20bee6\") " pod="openstack/glance-default-external-api-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.896233 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36fba9c8-da36-4b64-91f2-ff747c20bee6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"36fba9c8-da36-4b64-91f2-ff747c20bee6\") " pod="openstack/glance-default-external-api-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.897076 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/36fba9c8-da36-4b64-91f2-ff747c20bee6-ceph\") pod \"glance-default-external-api-0\" (UID: \"36fba9c8-da36-4b64-91f2-ff747c20bee6\") " pod="openstack/glance-default-external-api-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.910536 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngntl\" (UniqueName: \"kubernetes.io/projected/36fba9c8-da36-4b64-91f2-ff747c20bee6-kube-api-access-ngntl\") pod \"glance-default-external-api-0\" (UID: \"36fba9c8-da36-4b64-91f2-ff747c20bee6\") " pod="openstack/glance-default-external-api-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.921471 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"36fba9c8-da36-4b64-91f2-ff747c20bee6\") " pod="openstack/glance-default-external-api-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.944790 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 31 08:24:24 crc kubenswrapper[4826]: W0131 08:24:24.972244 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa50d3e2_0ae6_4ee1_ab02_82e4d51e5421.slice/crio-91726598ea670bd18e6cab03da87e7f206f798a34929e6cd5d84374c060f01be WatchSource:0}: Error finding container 91726598ea670bd18e6cab03da87e7f206f798a34929e6cd5d84374c060f01be: Status 404 returned error can't find the container with id 91726598ea670bd18e6cab03da87e7f206f798a34929e6cd5d84374c060f01be Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.984305 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"403c10ff-88fa-4845-aaed-36ccc5cf9dd2\") " pod="openstack/glance-default-internal-api-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.984352 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/403c10ff-88fa-4845-aaed-36ccc5cf9dd2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"403c10ff-88fa-4845-aaed-36ccc5cf9dd2\") " pod="openstack/glance-default-internal-api-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.984380 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/403c10ff-88fa-4845-aaed-36ccc5cf9dd2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"403c10ff-88fa-4845-aaed-36ccc5cf9dd2\") " pod="openstack/glance-default-internal-api-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.984401 4826 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/403c10ff-88fa-4845-aaed-36ccc5cf9dd2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"403c10ff-88fa-4845-aaed-36ccc5cf9dd2\") " pod="openstack/glance-default-internal-api-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.984449 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/403c10ff-88fa-4845-aaed-36ccc5cf9dd2-logs\") pod \"glance-default-internal-api-0\" (UID: \"403c10ff-88fa-4845-aaed-36ccc5cf9dd2\") " pod="openstack/glance-default-internal-api-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.984472 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/403c10ff-88fa-4845-aaed-36ccc5cf9dd2-ceph\") pod \"glance-default-internal-api-0\" (UID: \"403c10ff-88fa-4845-aaed-36ccc5cf9dd2\") " pod="openstack/glance-default-internal-api-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.984500 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/403c10ff-88fa-4845-aaed-36ccc5cf9dd2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"403c10ff-88fa-4845-aaed-36ccc5cf9dd2\") " pod="openstack/glance-default-internal-api-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.984540 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b2rp\" (UniqueName: \"kubernetes.io/projected/403c10ff-88fa-4845-aaed-36ccc5cf9dd2-kube-api-access-8b2rp\") pod \"glance-default-internal-api-0\" (UID: \"403c10ff-88fa-4845-aaed-36ccc5cf9dd2\") " pod="openstack/glance-default-internal-api-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.984546 4826 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"403c10ff-88fa-4845-aaed-36ccc5cf9dd2\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.984561 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/403c10ff-88fa-4845-aaed-36ccc5cf9dd2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"403c10ff-88fa-4845-aaed-36ccc5cf9dd2\") " pod="openstack/glance-default-internal-api-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.985183 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/403c10ff-88fa-4845-aaed-36ccc5cf9dd2-logs\") pod \"glance-default-internal-api-0\" (UID: \"403c10ff-88fa-4845-aaed-36ccc5cf9dd2\") " pod="openstack/glance-default-internal-api-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.986188 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/403c10ff-88fa-4845-aaed-36ccc5cf9dd2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"403c10ff-88fa-4845-aaed-36ccc5cf9dd2\") " pod="openstack/glance-default-internal-api-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.989225 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/403c10ff-88fa-4845-aaed-36ccc5cf9dd2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"403c10ff-88fa-4845-aaed-36ccc5cf9dd2\") " pod="openstack/glance-default-internal-api-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.990199 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/403c10ff-88fa-4845-aaed-36ccc5cf9dd2-ceph\") pod \"glance-default-internal-api-0\" (UID: \"403c10ff-88fa-4845-aaed-36ccc5cf9dd2\") " pod="openstack/glance-default-internal-api-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.990315 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/403c10ff-88fa-4845-aaed-36ccc5cf9dd2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"403c10ff-88fa-4845-aaed-36ccc5cf9dd2\") " pod="openstack/glance-default-internal-api-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.990578 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/403c10ff-88fa-4845-aaed-36ccc5cf9dd2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"403c10ff-88fa-4845-aaed-36ccc5cf9dd2\") " pod="openstack/glance-default-internal-api-0" Jan 31 08:24:24 crc kubenswrapper[4826]: I0131 08:24:24.995023 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/403c10ff-88fa-4845-aaed-36ccc5cf9dd2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"403c10ff-88fa-4845-aaed-36ccc5cf9dd2\") " pod="openstack/glance-default-internal-api-0" Jan 31 08:24:25 crc kubenswrapper[4826]: I0131 08:24:25.002050 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b2rp\" (UniqueName: \"kubernetes.io/projected/403c10ff-88fa-4845-aaed-36ccc5cf9dd2-kube-api-access-8b2rp\") pod \"glance-default-internal-api-0\" (UID: \"403c10ff-88fa-4845-aaed-36ccc5cf9dd2\") " pod="openstack/glance-default-internal-api-0" Jan 31 08:24:25 crc kubenswrapper[4826]: I0131 08:24:25.020108 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"403c10ff-88fa-4845-aaed-36ccc5cf9dd2\") " pod="openstack/glance-default-internal-api-0" Jan 31 08:24:25 crc kubenswrapper[4826]: I0131 08:24:25.022579 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 08:24:25 crc kubenswrapper[4826]: I0131 08:24:25.141691 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 08:24:25 crc kubenswrapper[4826]: I0131 08:24:25.248044 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-6kb8c"] Jan 31 08:24:25 crc kubenswrapper[4826]: W0131 08:24:25.250297 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd22c6c41_3d9d_4e39_b13c_c95542716ed2.slice/crio-a3a8793f31a11ffa1092c63000a5a8e08f03c7f7c732cb6fd3ddb8634a34bfd2 WatchSource:0}: Error finding container a3a8793f31a11ffa1092c63000a5a8e08f03c7f7c732cb6fd3ddb8634a34bfd2: Status 404 returned error can't find the container with id a3a8793f31a11ffa1092c63000a5a8e08f03c7f7c732cb6fd3ddb8634a34bfd2 Jan 31 08:24:25 crc kubenswrapper[4826]: I0131 08:24:25.362758 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-1c25-account-create-update-xsw6h"] Jan 31 08:24:25 crc kubenswrapper[4826]: W0131 08:24:25.370887 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c6e6e62_07ea_463d_9b6e_b7980b0c51b1.slice/crio-3cd579993923ac3ad0e0a330339a7d0420ae044c2799eec1b2850e931752ffb4 WatchSource:0}: Error finding container 3cd579993923ac3ad0e0a330339a7d0420ae044c2799eec1b2850e931752ffb4: Status 404 returned error can't find the container with id 3cd579993923ac3ad0e0a330339a7d0420ae044c2799eec1b2850e931752ffb4 Jan 31 08:24:25 crc kubenswrapper[4826]: I0131 08:24:25.593252 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 08:24:25 crc kubenswrapper[4826]: W0131 08:24:25.619917 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36fba9c8_da36_4b64_91f2_ff747c20bee6.slice/crio-4ff7b5ffe60fcc185c856d5fbd0141480dc8c8cde4f73b3aa0d8e8e7f9e23f52 WatchSource:0}: Error finding container 4ff7b5ffe60fcc185c856d5fbd0141480dc8c8cde4f73b3aa0d8e8e7f9e23f52: Status 404 returned error can't find the container with id 4ff7b5ffe60fcc185c856d5fbd0141480dc8c8cde4f73b3aa0d8e8e7f9e23f52 Jan 31 08:24:25 crc kubenswrapper[4826]: I0131 08:24:25.716120 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421","Type":"ContainerStarted","Data":"91726598ea670bd18e6cab03da87e7f206f798a34929e6cd5d84374c060f01be"} Jan 31 08:24:25 crc kubenswrapper[4826]: I0131 08:24:25.719720 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"36fba9c8-da36-4b64-91f2-ff747c20bee6","Type":"ContainerStarted","Data":"4ff7b5ffe60fcc185c856d5fbd0141480dc8c8cde4f73b3aa0d8e8e7f9e23f52"} Jan 31 08:24:25 crc kubenswrapper[4826]: I0131 08:24:25.721670 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-1c25-account-create-update-xsw6h" event={"ID":"4c6e6e62-07ea-463d-9b6e-b7980b0c51b1","Type":"ContainerStarted","Data":"a95f95a05bb4a61e2d1b613e0f74dd18a238357187d4185928a9f8bd1c78e574"} Jan 31 08:24:25 crc kubenswrapper[4826]: I0131 08:24:25.721719 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-1c25-account-create-update-xsw6h" event={"ID":"4c6e6e62-07ea-463d-9b6e-b7980b0c51b1","Type":"ContainerStarted","Data":"3cd579993923ac3ad0e0a330339a7d0420ae044c2799eec1b2850e931752ffb4"} Jan 31 08:24:25 crc kubenswrapper[4826]: I0131 08:24:25.725360 4826 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"4319e2c7-04a1-4612-8efe-c656be3fd234","Type":"ContainerStarted","Data":"d247d22ef5c86d30735254b1fa711e515a30bf9cd2573e2df6fe52e3a4c60aee"} Jan 31 08:24:25 crc kubenswrapper[4826]: I0131 08:24:25.727390 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 08:24:25 crc kubenswrapper[4826]: I0131 08:24:25.731827 4826 generic.go:334] "Generic (PLEG): container finished" podID="d22c6c41-3d9d-4e39-b13c-c95542716ed2" containerID="20356d0e4ad5c77a600807d8cec552f292b2910997ef84f55c0742936f8d241c" exitCode=0 Jan 31 08:24:25 crc kubenswrapper[4826]: I0131 08:24:25.731988 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-6kb8c" event={"ID":"d22c6c41-3d9d-4e39-b13c-c95542716ed2","Type":"ContainerDied","Data":"20356d0e4ad5c77a600807d8cec552f292b2910997ef84f55c0742936f8d241c"} Jan 31 08:24:25 crc kubenswrapper[4826]: I0131 08:24:25.732040 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-6kb8c" event={"ID":"d22c6c41-3d9d-4e39-b13c-c95542716ed2","Type":"ContainerStarted","Data":"a3a8793f31a11ffa1092c63000a5a8e08f03c7f7c732cb6fd3ddb8634a34bfd2"} Jan 31 08:24:25 crc kubenswrapper[4826]: I0131 08:24:25.748958 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-1c25-account-create-update-xsw6h" podStartSLOduration=1.748934705 podStartE2EDuration="1.748934705s" podCreationTimestamp="2026-01-31 08:24:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 08:24:25.740000231 +0000 UTC m=+2897.593886600" watchObservedRunningTime="2026-01-31 08:24:25.748934705 +0000 UTC m=+2897.602821074" Jan 31 08:24:25 crc kubenswrapper[4826]: W0131 08:24:25.801007 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod403c10ff_88fa_4845_aaed_36ccc5cf9dd2.slice/crio-dedbfc921e86f4b980c066d81078de799475e5749a87ab13c8064faa5bf2af94 WatchSource:0}: Error finding container dedbfc921e86f4b980c066d81078de799475e5749a87ab13c8064faa5bf2af94: Status 404 returned error can't find the container with id dedbfc921e86f4b980c066d81078de799475e5749a87ab13c8064faa5bf2af94 Jan 31 08:24:26 crc kubenswrapper[4826]: I0131 08:24:26.748765 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421","Type":"ContainerStarted","Data":"688f49c0576eedd3e7fc03c57a781b333f0ad7bf27012449760efd5b5147c3e4"} Jan 31 08:24:26 crc kubenswrapper[4826]: I0131 08:24:26.749312 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421","Type":"ContainerStarted","Data":"1f607660004845c38e456a69834122f4a0cb2eff0b859f5f8793f515c0886ab5"} Jan 31 08:24:26 crc kubenswrapper[4826]: I0131 08:24:26.770832 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"36fba9c8-da36-4b64-91f2-ff747c20bee6","Type":"ContainerStarted","Data":"bb2ad763163fd822d142918d3bff9bfc7d69458a49a8c82abea3b75d51f0ab57"} Jan 31 08:24:26 crc kubenswrapper[4826]: I0131 08:24:26.778092 4826 generic.go:334] "Generic (PLEG): container finished" podID="4c6e6e62-07ea-463d-9b6e-b7980b0c51b1" containerID="a95f95a05bb4a61e2d1b613e0f74dd18a238357187d4185928a9f8bd1c78e574" 
exitCode=0 Jan 31 08:24:26 crc kubenswrapper[4826]: I0131 08:24:26.778204 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-1c25-account-create-update-xsw6h" event={"ID":"4c6e6e62-07ea-463d-9b6e-b7980b0c51b1","Type":"ContainerDied","Data":"a95f95a05bb4a61e2d1b613e0f74dd18a238357187d4185928a9f8bd1c78e574"} Jan 31 08:24:26 crc kubenswrapper[4826]: I0131 08:24:26.780174 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=2.86187057 podStartE2EDuration="3.780162315s" podCreationTimestamp="2026-01-31 08:24:23 +0000 UTC" firstStartedPulling="2026-01-31 08:24:24.974249836 +0000 UTC m=+2896.828136195" lastFinishedPulling="2026-01-31 08:24:25.892541591 +0000 UTC m=+2897.746427940" observedRunningTime="2026-01-31 08:24:26.77648418 +0000 UTC m=+2898.630370549" watchObservedRunningTime="2026-01-31 08:24:26.780162315 +0000 UTC m=+2898.634048674" Jan 31 08:24:26 crc kubenswrapper[4826]: I0131 08:24:26.814094 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"4319e2c7-04a1-4612-8efe-c656be3fd234","Type":"ContainerStarted","Data":"0b1aaab97b0a449cfa6d738f944d204647735b4e4101f2d64da945b064e3053c"} Jan 31 08:24:26 crc kubenswrapper[4826]: I0131 08:24:26.826195 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"4319e2c7-04a1-4612-8efe-c656be3fd234","Type":"ContainerStarted","Data":"b373e0e241f02ceecddea12ede18feb9399919591adad19e7668d8020e17c55b"} Jan 31 08:24:26 crc kubenswrapper[4826]: I0131 08:24:26.826322 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"403c10ff-88fa-4845-aaed-36ccc5cf9dd2","Type":"ContainerStarted","Data":"cbed22a32f94e6ce69861757405896eed606ef20fe9272a77e9729558535d519"} Jan 31 08:24:26 crc kubenswrapper[4826]: I0131 08:24:26.826343 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"403c10ff-88fa-4845-aaed-36ccc5cf9dd2","Type":"ContainerStarted","Data":"dedbfc921e86f4b980c066d81078de799475e5749a87ab13c8064faa5bf2af94"} Jan 31 08:24:26 crc kubenswrapper[4826]: I0131 08:24:26.848019 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=3.060550623 podStartE2EDuration="3.848003065s" podCreationTimestamp="2026-01-31 08:24:23 +0000 UTC" firstStartedPulling="2026-01-31 08:24:24.888581419 +0000 UTC m=+2896.742467778" lastFinishedPulling="2026-01-31 08:24:25.676033851 +0000 UTC m=+2897.529920220" observedRunningTime="2026-01-31 08:24:26.847190882 +0000 UTC m=+2898.701077261" watchObservedRunningTime="2026-01-31 08:24:26.848003065 +0000 UTC m=+2898.701889424" Jan 31 08:24:27 crc kubenswrapper[4826]: I0131 08:24:27.210630 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-6kb8c" Jan 31 08:24:27 crc kubenswrapper[4826]: I0131 08:24:27.240449 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d22c6c41-3d9d-4e39-b13c-c95542716ed2-operator-scripts\") pod \"d22c6c41-3d9d-4e39-b13c-c95542716ed2\" (UID: \"d22c6c41-3d9d-4e39-b13c-c95542716ed2\") " Jan 31 08:24:27 crc kubenswrapper[4826]: I0131 08:24:27.240621 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cwnc\" (UniqueName: \"kubernetes.io/projected/d22c6c41-3d9d-4e39-b13c-c95542716ed2-kube-api-access-9cwnc\") pod \"d22c6c41-3d9d-4e39-b13c-c95542716ed2\" (UID: \"d22c6c41-3d9d-4e39-b13c-c95542716ed2\") " Jan 31 08:24:27 crc kubenswrapper[4826]: I0131 08:24:27.242987 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d22c6c41-3d9d-4e39-b13c-c95542716ed2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d22c6c41-3d9d-4e39-b13c-c95542716ed2" (UID: "d22c6c41-3d9d-4e39-b13c-c95542716ed2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 08:24:27 crc kubenswrapper[4826]: I0131 08:24:27.254227 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d22c6c41-3d9d-4e39-b13c-c95542716ed2-kube-api-access-9cwnc" (OuterVolumeSpecName: "kube-api-access-9cwnc") pod "d22c6c41-3d9d-4e39-b13c-c95542716ed2" (UID: "d22c6c41-3d9d-4e39-b13c-c95542716ed2"). InnerVolumeSpecName "kube-api-access-9cwnc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:24:27 crc kubenswrapper[4826]: I0131 08:24:27.343291 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d22c6c41-3d9d-4e39-b13c-c95542716ed2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 08:24:27 crc kubenswrapper[4826]: I0131 08:24:27.343327 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cwnc\" (UniqueName: \"kubernetes.io/projected/d22c6c41-3d9d-4e39-b13c-c95542716ed2-kube-api-access-9cwnc\") on node \"crc\" DevicePath \"\"" Jan 31 08:24:27 crc kubenswrapper[4826]: I0131 08:24:27.826557 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"36fba9c8-da36-4b64-91f2-ff747c20bee6","Type":"ContainerStarted","Data":"9521f7d424824d6bf38c4f721f229375e35a5c1d1e84e639d18b2025edd3de4e"} Jan 31 08:24:27 crc kubenswrapper[4826]: I0131 08:24:27.828516 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-6kb8c" event={"ID":"d22c6c41-3d9d-4e39-b13c-c95542716ed2","Type":"ContainerDied","Data":"a3a8793f31a11ffa1092c63000a5a8e08f03c7f7c732cb6fd3ddb8634a34bfd2"} Jan 31 08:24:27 crc kubenswrapper[4826]: I0131 08:24:27.828580 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-6kb8c" Jan 31 08:24:27 crc kubenswrapper[4826]: I0131 08:24:27.828594 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3a8793f31a11ffa1092c63000a5a8e08f03c7f7c732cb6fd3ddb8634a34bfd2" Jan 31 08:24:27 crc kubenswrapper[4826]: I0131 08:24:27.830324 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"403c10ff-88fa-4845-aaed-36ccc5cf9dd2","Type":"ContainerStarted","Data":"fac91d179a1812acc85ce829220d8a16048237a9e183cf310ca5f3d52af20d41"} Jan 31 08:24:27 crc kubenswrapper[4826]: I0131 08:24:27.886352 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.886334687 podStartE2EDuration="4.886334687s" podCreationTimestamp="2026-01-31 08:24:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 08:24:27.856390535 +0000 UTC m=+2899.710276894" watchObservedRunningTime="2026-01-31 08:24:27.886334687 +0000 UTC m=+2899.740221046" Jan 31 08:24:27 crc kubenswrapper[4826]: I0131 08:24:27.888130 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.888118197 podStartE2EDuration="4.888118197s" podCreationTimestamp="2026-01-31 08:24:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 08:24:27.88541207 +0000 UTC m=+2899.739298429" watchObservedRunningTime="2026-01-31 08:24:27.888118197 +0000 UTC m=+2899.742004556" Jan 31 08:24:28 crc kubenswrapper[4826]: I0131 08:24:28.176115 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-1c25-account-create-update-xsw6h" Jan 31 08:24:28 crc kubenswrapper[4826]: I0131 08:24:28.277248 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwd9p\" (UniqueName: \"kubernetes.io/projected/4c6e6e62-07ea-463d-9b6e-b7980b0c51b1-kube-api-access-hwd9p\") pod \"4c6e6e62-07ea-463d-9b6e-b7980b0c51b1\" (UID: \"4c6e6e62-07ea-463d-9b6e-b7980b0c51b1\") " Jan 31 08:24:28 crc kubenswrapper[4826]: I0131 08:24:28.277462 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c6e6e62-07ea-463d-9b6e-b7980b0c51b1-operator-scripts\") pod \"4c6e6e62-07ea-463d-9b6e-b7980b0c51b1\" (UID: \"4c6e6e62-07ea-463d-9b6e-b7980b0c51b1\") " Jan 31 08:24:28 crc kubenswrapper[4826]: I0131 08:24:28.278481 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c6e6e62-07ea-463d-9b6e-b7980b0c51b1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4c6e6e62-07ea-463d-9b6e-b7980b0c51b1" (UID: "4c6e6e62-07ea-463d-9b6e-b7980b0c51b1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 08:24:28 crc kubenswrapper[4826]: I0131 08:24:28.288812 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c6e6e62-07ea-463d-9b6e-b7980b0c51b1-kube-api-access-hwd9p" (OuterVolumeSpecName: "kube-api-access-hwd9p") pod "4c6e6e62-07ea-463d-9b6e-b7980b0c51b1" (UID: "4c6e6e62-07ea-463d-9b6e-b7980b0c51b1"). InnerVolumeSpecName "kube-api-access-hwd9p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:24:28 crc kubenswrapper[4826]: I0131 08:24:28.378439 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c6e6e62-07ea-463d-9b6e-b7980b0c51b1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 08:24:28 crc kubenswrapper[4826]: I0131 08:24:28.378479 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwd9p\" (UniqueName: \"kubernetes.io/projected/4c6e6e62-07ea-463d-9b6e-b7980b0c51b1-kube-api-access-hwd9p\") on node \"crc\" DevicePath \"\"" Jan 31 08:24:28 crc kubenswrapper[4826]: I0131 08:24:28.840194 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-1c25-account-create-update-xsw6h" event={"ID":"4c6e6e62-07ea-463d-9b6e-b7980b0c51b1","Type":"ContainerDied","Data":"3cd579993923ac3ad0e0a330339a7d0420ae044c2799eec1b2850e931752ffb4"} Jan 31 08:24:28 crc kubenswrapper[4826]: I0131 08:24:28.840242 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cd579993923ac3ad0e0a330339a7d0420ae044c2799eec1b2850e931752ffb4" Jan 31 08:24:28 crc kubenswrapper[4826]: I0131 08:24:28.840258 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-1c25-account-create-update-xsw6h" Jan 31 08:24:29 crc kubenswrapper[4826]: I0131 08:24:29.158008 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Jan 31 08:24:29 crc kubenswrapper[4826]: I0131 08:24:29.218419 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Jan 31 08:24:29 crc kubenswrapper[4826]: I0131 08:24:29.697112 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-8br82"] Jan 31 08:24:29 crc kubenswrapper[4826]: E0131 08:24:29.697792 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d22c6c41-3d9d-4e39-b13c-c95542716ed2" containerName="mariadb-database-create" Jan 31 08:24:29 crc kubenswrapper[4826]: I0131 08:24:29.697819 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="d22c6c41-3d9d-4e39-b13c-c95542716ed2" containerName="mariadb-database-create" Jan 31 08:24:29 crc kubenswrapper[4826]: E0131 08:24:29.697850 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c6e6e62-07ea-463d-9b6e-b7980b0c51b1" containerName="mariadb-account-create-update" Jan 31 08:24:29 crc kubenswrapper[4826]: I0131 08:24:29.697862 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c6e6e62-07ea-463d-9b6e-b7980b0c51b1" containerName="mariadb-account-create-update" Jan 31 08:24:29 crc kubenswrapper[4826]: I0131 08:24:29.698135 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c6e6e62-07ea-463d-9b6e-b7980b0c51b1" containerName="mariadb-account-create-update" Jan 31 08:24:29 crc kubenswrapper[4826]: I0131 08:24:29.698172 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="d22c6c41-3d9d-4e39-b13c-c95542716ed2" containerName="mariadb-database-create" Jan 31 08:24:29 crc kubenswrapper[4826]: I0131 08:24:29.699157 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-8br82" Jan 31 08:24:29 crc kubenswrapper[4826]: I0131 08:24:29.702094 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-9gkk7" Jan 31 08:24:29 crc kubenswrapper[4826]: I0131 08:24:29.702190 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Jan 31 08:24:29 crc kubenswrapper[4826]: I0131 08:24:29.708846 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-8br82"] Jan 31 08:24:29 crc kubenswrapper[4826]: I0131 08:24:29.802701 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/28dc0f2a-20c1-4913-880c-dd9c2046e096-job-config-data\") pod \"manila-db-sync-8br82\" (UID: \"28dc0f2a-20c1-4913-880c-dd9c2046e096\") " pod="openstack/manila-db-sync-8br82" Jan 31 08:24:29 crc kubenswrapper[4826]: I0131 08:24:29.802763 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28dc0f2a-20c1-4913-880c-dd9c2046e096-combined-ca-bundle\") pod \"manila-db-sync-8br82\" (UID: \"28dc0f2a-20c1-4913-880c-dd9c2046e096\") " pod="openstack/manila-db-sync-8br82" Jan 31 08:24:29 crc kubenswrapper[4826]: I0131 08:24:29.803213 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68lfm\" (UniqueName: \"kubernetes.io/projected/28dc0f2a-20c1-4913-880c-dd9c2046e096-kube-api-access-68lfm\") pod \"manila-db-sync-8br82\" (UID: \"28dc0f2a-20c1-4913-880c-dd9c2046e096\") " pod="openstack/manila-db-sync-8br82" Jan 31 08:24:29 crc kubenswrapper[4826]: I0131 08:24:29.803296 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28dc0f2a-20c1-4913-880c-dd9c2046e096-config-data\") pod \"manila-db-sync-8br82\" (UID: \"28dc0f2a-20c1-4913-880c-dd9c2046e096\") " pod="openstack/manila-db-sync-8br82" Jan 31 08:24:29 crc kubenswrapper[4826]: I0131 08:24:29.905517 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68lfm\" (UniqueName: \"kubernetes.io/projected/28dc0f2a-20c1-4913-880c-dd9c2046e096-kube-api-access-68lfm\") pod \"manila-db-sync-8br82\" (UID: \"28dc0f2a-20c1-4913-880c-dd9c2046e096\") " pod="openstack/manila-db-sync-8br82" Jan 31 08:24:29 crc kubenswrapper[4826]: I0131 08:24:29.905587 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28dc0f2a-20c1-4913-880c-dd9c2046e096-config-data\") pod \"manila-db-sync-8br82\" (UID: \"28dc0f2a-20c1-4913-880c-dd9c2046e096\") " pod="openstack/manila-db-sync-8br82" Jan 31 08:24:29 crc kubenswrapper[4826]: I0131 08:24:29.905641 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/28dc0f2a-20c1-4913-880c-dd9c2046e096-job-config-data\") pod \"manila-db-sync-8br82\" (UID: \"28dc0f2a-20c1-4913-880c-dd9c2046e096\") " pod="openstack/manila-db-sync-8br82" Jan 31 08:24:29 crc kubenswrapper[4826]: I0131 08:24:29.905678 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28dc0f2a-20c1-4913-880c-dd9c2046e096-combined-ca-bundle\") pod \"manila-db-sync-8br82\" (UID: 
\"28dc0f2a-20c1-4913-880c-dd9c2046e096\") " pod="openstack/manila-db-sync-8br82" Jan 31 08:24:29 crc kubenswrapper[4826]: I0131 08:24:29.914615 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/28dc0f2a-20c1-4913-880c-dd9c2046e096-job-config-data\") pod \"manila-db-sync-8br82\" (UID: \"28dc0f2a-20c1-4913-880c-dd9c2046e096\") " pod="openstack/manila-db-sync-8br82" Jan 31 08:24:29 crc kubenswrapper[4826]: I0131 08:24:29.915950 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28dc0f2a-20c1-4913-880c-dd9c2046e096-config-data\") pod \"manila-db-sync-8br82\" (UID: \"28dc0f2a-20c1-4913-880c-dd9c2046e096\") " pod="openstack/manila-db-sync-8br82" Jan 31 08:24:29 crc kubenswrapper[4826]: I0131 08:24:29.917082 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28dc0f2a-20c1-4913-880c-dd9c2046e096-combined-ca-bundle\") pod \"manila-db-sync-8br82\" (UID: \"28dc0f2a-20c1-4913-880c-dd9c2046e096\") " pod="openstack/manila-db-sync-8br82" Jan 31 08:24:29 crc kubenswrapper[4826]: I0131 08:24:29.937158 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68lfm\" (UniqueName: \"kubernetes.io/projected/28dc0f2a-20c1-4913-880c-dd9c2046e096-kube-api-access-68lfm\") pod \"manila-db-sync-8br82\" (UID: \"28dc0f2a-20c1-4913-880c-dd9c2046e096\") " pod="openstack/manila-db-sync-8br82" Jan 31 08:24:30 crc kubenswrapper[4826]: I0131 08:24:30.030296 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-8br82" Jan 31 08:24:30 crc kubenswrapper[4826]: I0131 08:24:30.571015 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-8br82"] Jan 31 08:24:30 crc kubenswrapper[4826]: I0131 08:24:30.857485 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-8br82" event={"ID":"28dc0f2a-20c1-4913-880c-dd9c2046e096","Type":"ContainerStarted","Data":"58477fe001c2cc155f4a8ae6ff6dde27912939351e30701fb4842ef04f5581d5"} Jan 31 08:24:34 crc kubenswrapper[4826]: I0131 08:24:34.413584 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Jan 31 08:24:34 crc kubenswrapper[4826]: I0131 08:24:34.450779 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Jan 31 08:24:35 crc kubenswrapper[4826]: I0131 08:24:35.023878 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 31 08:24:35 crc kubenswrapper[4826]: I0131 08:24:35.023931 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 31 08:24:35 crc kubenswrapper[4826]: I0131 08:24:35.068213 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 31 08:24:35 crc kubenswrapper[4826]: I0131 08:24:35.075439 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 31 08:24:35 crc kubenswrapper[4826]: I0131 08:24:35.145188 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 31 08:24:35 crc kubenswrapper[4826]: I0131 08:24:35.145241 4826 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 31 08:24:35 crc kubenswrapper[4826]: I0131 08:24:35.177785 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 31 08:24:35 crc kubenswrapper[4826]: I0131 08:24:35.185161 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 31 08:24:35 crc kubenswrapper[4826]: I0131 08:24:35.902990 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-8br82" event={"ID":"28dc0f2a-20c1-4913-880c-dd9c2046e096","Type":"ContainerStarted","Data":"e9d5a2e49b1e4665b8404c72033ff7dbdd31bad6e9ec446a677837505ae10d9a"} Jan 31 08:24:35 crc kubenswrapper[4826]: I0131 08:24:35.904278 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 31 08:24:35 crc kubenswrapper[4826]: I0131 08:24:35.904313 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 31 08:24:35 crc kubenswrapper[4826]: I0131 08:24:35.904326 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 31 08:24:35 crc kubenswrapper[4826]: I0131 08:24:35.904339 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 31 08:24:35 crc kubenswrapper[4826]: I0131 08:24:35.924310 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-8br82" podStartSLOduration=2.118867653 podStartE2EDuration="6.924288341s" podCreationTimestamp="2026-01-31 08:24:29 +0000 UTC" firstStartedPulling="2026-01-31 08:24:30.577888103 +0000 UTC m=+2902.431774462" lastFinishedPulling="2026-01-31 08:24:35.383308791 +0000 UTC m=+2907.237195150" observedRunningTime="2026-01-31 08:24:35.917510459 +0000 UTC m=+2907.771396818" watchObservedRunningTime="2026-01-31 08:24:35.924288341 +0000 UTC m=+2907.778174710" Jan 31 08:24:41 crc kubenswrapper[4826]: I0131 08:24:41.095960 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 31 08:24:41 crc kubenswrapper[4826]: I0131 08:24:41.096707 4826 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 08:24:41 crc kubenswrapper[4826]: I0131 08:24:41.098403 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 31 08:24:41 crc kubenswrapper[4826]: I0131 08:24:41.100026 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 31 08:24:41 crc kubenswrapper[4826]: I0131 08:24:41.100135 4826 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 08:24:41 crc kubenswrapper[4826]: I0131 08:24:41.105046 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 31 08:25:01 crc kubenswrapper[4826]: I0131 08:25:01.125423 4826 generic.go:334] "Generic (PLEG): container finished" podID="28dc0f2a-20c1-4913-880c-dd9c2046e096" containerID="e9d5a2e49b1e4665b8404c72033ff7dbdd31bad6e9ec446a677837505ae10d9a" exitCode=0 Jan 31 08:25:01 crc kubenswrapper[4826]: I0131 08:25:01.125503 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-8br82" 
event={"ID":"28dc0f2a-20c1-4913-880c-dd9c2046e096","Type":"ContainerDied","Data":"e9d5a2e49b1e4665b8404c72033ff7dbdd31bad6e9ec446a677837505ae10d9a"} Jan 31 08:25:02 crc kubenswrapper[4826]: I0131 08:25:02.544949 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-8br82" Jan 31 08:25:02 crc kubenswrapper[4826]: I0131 08:25:02.582024 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28dc0f2a-20c1-4913-880c-dd9c2046e096-combined-ca-bundle\") pod \"28dc0f2a-20c1-4913-880c-dd9c2046e096\" (UID: \"28dc0f2a-20c1-4913-880c-dd9c2046e096\") " Jan 31 08:25:02 crc kubenswrapper[4826]: I0131 08:25:02.582165 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/28dc0f2a-20c1-4913-880c-dd9c2046e096-job-config-data\") pod \"28dc0f2a-20c1-4913-880c-dd9c2046e096\" (UID: \"28dc0f2a-20c1-4913-880c-dd9c2046e096\") " Jan 31 08:25:02 crc kubenswrapper[4826]: I0131 08:25:02.582246 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28dc0f2a-20c1-4913-880c-dd9c2046e096-config-data\") pod \"28dc0f2a-20c1-4913-880c-dd9c2046e096\" (UID: \"28dc0f2a-20c1-4913-880c-dd9c2046e096\") " Jan 31 08:25:02 crc kubenswrapper[4826]: I0131 08:25:02.582319 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68lfm\" (UniqueName: \"kubernetes.io/projected/28dc0f2a-20c1-4913-880c-dd9c2046e096-kube-api-access-68lfm\") pod \"28dc0f2a-20c1-4913-880c-dd9c2046e096\" (UID: \"28dc0f2a-20c1-4913-880c-dd9c2046e096\") " Jan 31 08:25:02 crc kubenswrapper[4826]: I0131 08:25:02.588214 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28dc0f2a-20c1-4913-880c-dd9c2046e096-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "28dc0f2a-20c1-4913-880c-dd9c2046e096" (UID: "28dc0f2a-20c1-4913-880c-dd9c2046e096"). InnerVolumeSpecName "job-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:25:02 crc kubenswrapper[4826]: I0131 08:25:02.591260 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28dc0f2a-20c1-4913-880c-dd9c2046e096-config-data" (OuterVolumeSpecName: "config-data") pod "28dc0f2a-20c1-4913-880c-dd9c2046e096" (UID: "28dc0f2a-20c1-4913-880c-dd9c2046e096"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:25:02 crc kubenswrapper[4826]: I0131 08:25:02.592065 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28dc0f2a-20c1-4913-880c-dd9c2046e096-kube-api-access-68lfm" (OuterVolumeSpecName: "kube-api-access-68lfm") pod "28dc0f2a-20c1-4913-880c-dd9c2046e096" (UID: "28dc0f2a-20c1-4913-880c-dd9c2046e096"). InnerVolumeSpecName "kube-api-access-68lfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:25:02 crc kubenswrapper[4826]: I0131 08:25:02.612327 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28dc0f2a-20c1-4913-880c-dd9c2046e096-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "28dc0f2a-20c1-4913-880c-dd9c2046e096" (UID: "28dc0f2a-20c1-4913-880c-dd9c2046e096"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:25:02 crc kubenswrapper[4826]: I0131 08:25:02.684609 4826 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/28dc0f2a-20c1-4913-880c-dd9c2046e096-job-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 08:25:02 crc kubenswrapper[4826]: I0131 08:25:02.684642 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28dc0f2a-20c1-4913-880c-dd9c2046e096-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 08:25:02 crc kubenswrapper[4826]: I0131 08:25:02.684651 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68lfm\" (UniqueName: \"kubernetes.io/projected/28dc0f2a-20c1-4913-880c-dd9c2046e096-kube-api-access-68lfm\") on node \"crc\" DevicePath \"\"" Jan 31 08:25:02 crc kubenswrapper[4826]: I0131 08:25:02.684661 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28dc0f2a-20c1-4913-880c-dd9c2046e096-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.151357 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-8br82" event={"ID":"28dc0f2a-20c1-4913-880c-dd9c2046e096","Type":"ContainerDied","Data":"58477fe001c2cc155f4a8ae6ff6dde27912939351e30701fb4842ef04f5581d5"} Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.151427 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58477fe001c2cc155f4a8ae6ff6dde27912939351e30701fb4842ef04f5581d5" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.151463 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-8br82" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.444201 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Jan 31 08:25:03 crc kubenswrapper[4826]: E0131 08:25:03.445317 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28dc0f2a-20c1-4913-880c-dd9c2046e096" containerName="manila-db-sync" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.445429 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="28dc0f2a-20c1-4913-880c-dd9c2046e096" containerName="manila-db-sync" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.445734 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="28dc0f2a-20c1-4913-880c-dd9c2046e096" containerName="manila-db-sync" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.446870 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.457068 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.462229 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.462230 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.462570 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-9gkk7" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.473730 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.475644 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.486989 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.505039 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.525450 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/646e0310-8e32-4f52-9441-e2e2ce66ed75-config-data\") pod \"manila-share-share1-0\" (UID: \"646e0310-8e32-4f52-9441-e2e2ce66ed75\") " pod="openstack/manila-share-share1-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.525568 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/646e0310-8e32-4f52-9441-e2e2ce66ed75-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"646e0310-8e32-4f52-9441-e2e2ce66ed75\") " pod="openstack/manila-share-share1-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.525616 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/646e0310-8e32-4f52-9441-e2e2ce66ed75-scripts\") pod \"manila-share-share1-0\" (UID: \"646e0310-8e32-4f52-9441-e2e2ce66ed75\") " pod="openstack/manila-share-share1-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.525658 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/584f29ed-f9cf-49fd-9743-8cca3fe557ce-scripts\") pod \"manila-scheduler-0\" (UID: \"584f29ed-f9cf-49fd-9743-8cca3fe557ce\") " pod="openstack/manila-scheduler-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.525718 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4hsb\" (UniqueName: \"kubernetes.io/projected/584f29ed-f9cf-49fd-9743-8cca3fe557ce-kube-api-access-c4hsb\") pod \"manila-scheduler-0\" (UID: \"584f29ed-f9cf-49fd-9743-8cca3fe557ce\") " pod="openstack/manila-scheduler-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.525757 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/646e0310-8e32-4f52-9441-e2e2ce66ed75-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"646e0310-8e32-4f52-9441-e2e2ce66ed75\") " pod="openstack/manila-share-share1-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.525784 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/646e0310-8e32-4f52-9441-e2e2ce66ed75-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"646e0310-8e32-4f52-9441-e2e2ce66ed75\") " pod="openstack/manila-share-share1-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.525836 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/584f29ed-f9cf-49fd-9743-8cca3fe557ce-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"584f29ed-f9cf-49fd-9743-8cca3fe557ce\") " pod="openstack/manila-scheduler-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.525907 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/584f29ed-f9cf-49fd-9743-8cca3fe557ce-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"584f29ed-f9cf-49fd-9743-8cca3fe557ce\") " pod="openstack/manila-scheduler-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.525947 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/584f29ed-f9cf-49fd-9743-8cca3fe557ce-config-data\") pod \"manila-scheduler-0\" (UID: \"584f29ed-f9cf-49fd-9743-8cca3fe557ce\") " pod="openstack/manila-scheduler-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.526046 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/646e0310-8e32-4f52-9441-e2e2ce66ed75-ceph\") pod \"manila-share-share1-0\" (UID: \"646e0310-8e32-4f52-9441-e2e2ce66ed75\") " pod="openstack/manila-share-share1-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.526093 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/584f29ed-f9cf-49fd-9743-8cca3fe557ce-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"584f29ed-f9cf-49fd-9743-8cca3fe557ce\") " pod="openstack/manila-scheduler-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.526114 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/646e0310-8e32-4f52-9441-e2e2ce66ed75-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"646e0310-8e32-4f52-9441-e2e2ce66ed75\") " pod="openstack/manila-share-share1-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.527988 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgshx\" (UniqueName: \"kubernetes.io/projected/646e0310-8e32-4f52-9441-e2e2ce66ed75-kube-api-access-fgshx\") pod \"manila-share-share1-0\" (UID: \"646e0310-8e32-4f52-9441-e2e2ce66ed75\") " pod="openstack/manila-share-share1-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.556988 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.615535 4826 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-69655fd4bf-th7rb"] Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.617265 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69655fd4bf-th7rb" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.627215 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69655fd4bf-th7rb"] Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.637098 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/584f29ed-f9cf-49fd-9743-8cca3fe557ce-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"584f29ed-f9cf-49fd-9743-8cca3fe557ce\") " pod="openstack/manila-scheduler-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.637245 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/584f29ed-f9cf-49fd-9743-8cca3fe557ce-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"584f29ed-f9cf-49fd-9743-8cca3fe557ce\") " pod="openstack/manila-scheduler-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.637476 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/584f29ed-f9cf-49fd-9743-8cca3fe557ce-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"584f29ed-f9cf-49fd-9743-8cca3fe557ce\") " pod="openstack/manila-scheduler-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.637593 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/584f29ed-f9cf-49fd-9743-8cca3fe557ce-config-data\") pod \"manila-scheduler-0\" (UID: \"584f29ed-f9cf-49fd-9743-8cca3fe557ce\") " pod="openstack/manila-scheduler-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.637712 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/646e0310-8e32-4f52-9441-e2e2ce66ed75-ceph\") pod \"manila-share-share1-0\" (UID: \"646e0310-8e32-4f52-9441-e2e2ce66ed75\") " pod="openstack/manila-share-share1-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.637809 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/584f29ed-f9cf-49fd-9743-8cca3fe557ce-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"584f29ed-f9cf-49fd-9743-8cca3fe557ce\") " pod="openstack/manila-scheduler-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.637910 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/646e0310-8e32-4f52-9441-e2e2ce66ed75-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"646e0310-8e32-4f52-9441-e2e2ce66ed75\") " pod="openstack/manila-share-share1-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.638071 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgshx\" (UniqueName: \"kubernetes.io/projected/646e0310-8e32-4f52-9441-e2e2ce66ed75-kube-api-access-fgshx\") pod \"manila-share-share1-0\" (UID: \"646e0310-8e32-4f52-9441-e2e2ce66ed75\") " pod="openstack/manila-share-share1-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.638239 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/646e0310-8e32-4f52-9441-e2e2ce66ed75-config-data\") pod \"manila-share-share1-0\" (UID: \"646e0310-8e32-4f52-9441-e2e2ce66ed75\") " pod="openstack/manila-share-share1-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.638390 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/646e0310-8e32-4f52-9441-e2e2ce66ed75-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"646e0310-8e32-4f52-9441-e2e2ce66ed75\") " pod="openstack/manila-share-share1-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.638499 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/646e0310-8e32-4f52-9441-e2e2ce66ed75-scripts\") pod \"manila-share-share1-0\" (UID: \"646e0310-8e32-4f52-9441-e2e2ce66ed75\") " pod="openstack/manila-share-share1-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.638610 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/584f29ed-f9cf-49fd-9743-8cca3fe557ce-scripts\") pod \"manila-scheduler-0\" (UID: \"584f29ed-f9cf-49fd-9743-8cca3fe557ce\") " pod="openstack/manila-scheduler-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.638728 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4hsb\" (UniqueName: \"kubernetes.io/projected/584f29ed-f9cf-49fd-9743-8cca3fe557ce-kube-api-access-c4hsb\") pod \"manila-scheduler-0\" (UID: \"584f29ed-f9cf-49fd-9743-8cca3fe557ce\") " pod="openstack/manila-scheduler-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.638827 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/646e0310-8e32-4f52-9441-e2e2ce66ed75-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"646e0310-8e32-4f52-9441-e2e2ce66ed75\") " pod="openstack/manila-share-share1-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.638932 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/646e0310-8e32-4f52-9441-e2e2ce66ed75-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"646e0310-8e32-4f52-9441-e2e2ce66ed75\") " pod="openstack/manila-share-share1-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.639262 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/646e0310-8e32-4f52-9441-e2e2ce66ed75-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"646e0310-8e32-4f52-9441-e2e2ce66ed75\") " pod="openstack/manila-share-share1-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.644077 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/584f29ed-f9cf-49fd-9743-8cca3fe557ce-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"584f29ed-f9cf-49fd-9743-8cca3fe557ce\") " pod="openstack/manila-scheduler-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.644431 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/646e0310-8e32-4f52-9441-e2e2ce66ed75-config-data\") pod \"manila-share-share1-0\" (UID: \"646e0310-8e32-4f52-9441-e2e2ce66ed75\") " pod="openstack/manila-share-share1-0" Jan 31 08:25:03 
crc kubenswrapper[4826]: I0131 08:25:03.648312 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/646e0310-8e32-4f52-9441-e2e2ce66ed75-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"646e0310-8e32-4f52-9441-e2e2ce66ed75\") " pod="openstack/manila-share-share1-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.650052 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/646e0310-8e32-4f52-9441-e2e2ce66ed75-ceph\") pod \"manila-share-share1-0\" (UID: \"646e0310-8e32-4f52-9441-e2e2ce66ed75\") " pod="openstack/manila-share-share1-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.650751 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/646e0310-8e32-4f52-9441-e2e2ce66ed75-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"646e0310-8e32-4f52-9441-e2e2ce66ed75\") " pod="openstack/manila-share-share1-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.652417 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/646e0310-8e32-4f52-9441-e2e2ce66ed75-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"646e0310-8e32-4f52-9441-e2e2ce66ed75\") " pod="openstack/manila-share-share1-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.653087 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/584f29ed-f9cf-49fd-9743-8cca3fe557ce-scripts\") pod \"manila-scheduler-0\" (UID: \"584f29ed-f9cf-49fd-9743-8cca3fe557ce\") " pod="openstack/manila-scheduler-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.655121 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/646e0310-8e32-4f52-9441-e2e2ce66ed75-scripts\") pod \"manila-share-share1-0\" (UID: \"646e0310-8e32-4f52-9441-e2e2ce66ed75\") " pod="openstack/manila-share-share1-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.655524 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/584f29ed-f9cf-49fd-9743-8cca3fe557ce-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"584f29ed-f9cf-49fd-9743-8cca3fe557ce\") " pod="openstack/manila-scheduler-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.669922 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/584f29ed-f9cf-49fd-9743-8cca3fe557ce-config-data\") pod \"manila-scheduler-0\" (UID: \"584f29ed-f9cf-49fd-9743-8cca3fe557ce\") " pod="openstack/manila-scheduler-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.676159 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgshx\" (UniqueName: \"kubernetes.io/projected/646e0310-8e32-4f52-9441-e2e2ce66ed75-kube-api-access-fgshx\") pod \"manila-share-share1-0\" (UID: \"646e0310-8e32-4f52-9441-e2e2ce66ed75\") " pod="openstack/manila-share-share1-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.680676 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4hsb\" (UniqueName: \"kubernetes.io/projected/584f29ed-f9cf-49fd-9743-8cca3fe557ce-kube-api-access-c4hsb\") pod \"manila-scheduler-0\" (UID: 
\"584f29ed-f9cf-49fd-9743-8cca3fe557ce\") " pod="openstack/manila-scheduler-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.723513 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.726251 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.728813 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.741570 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc40b194-0220-45aa-8ddb-cd77f5a0cafb-ovsdbserver-nb\") pod \"dnsmasq-dns-69655fd4bf-th7rb\" (UID: \"bc40b194-0220-45aa-8ddb-cd77f5a0cafb\") " pod="openstack/dnsmasq-dns-69655fd4bf-th7rb" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.741882 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc40b194-0220-45aa-8ddb-cd77f5a0cafb-config\") pod \"dnsmasq-dns-69655fd4bf-th7rb\" (UID: \"bc40b194-0220-45aa-8ddb-cd77f5a0cafb\") " pod="openstack/dnsmasq-dns-69655fd4bf-th7rb" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.742140 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/bc40b194-0220-45aa-8ddb-cd77f5a0cafb-openstack-edpm-ipam\") pod \"dnsmasq-dns-69655fd4bf-th7rb\" (UID: \"bc40b194-0220-45aa-8ddb-cd77f5a0cafb\") " pod="openstack/dnsmasq-dns-69655fd4bf-th7rb" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.742252 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc40b194-0220-45aa-8ddb-cd77f5a0cafb-dns-svc\") pod \"dnsmasq-dns-69655fd4bf-th7rb\" (UID: \"bc40b194-0220-45aa-8ddb-cd77f5a0cafb\") " pod="openstack/dnsmasq-dns-69655fd4bf-th7rb" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.742359 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc40b194-0220-45aa-8ddb-cd77f5a0cafb-ovsdbserver-sb\") pod \"dnsmasq-dns-69655fd4bf-th7rb\" (UID: \"bc40b194-0220-45aa-8ddb-cd77f5a0cafb\") " pod="openstack/dnsmasq-dns-69655fd4bf-th7rb" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.742456 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs6ls\" (UniqueName: \"kubernetes.io/projected/bc40b194-0220-45aa-8ddb-cd77f5a0cafb-kube-api-access-vs6ls\") pod \"dnsmasq-dns-69655fd4bf-th7rb\" (UID: \"bc40b194-0220-45aa-8ddb-cd77f5a0cafb\") " pod="openstack/dnsmasq-dns-69655fd4bf-th7rb" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.753996 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.778057 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.811207 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.845267 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc40b194-0220-45aa-8ddb-cd77f5a0cafb-dns-svc\") pod \"dnsmasq-dns-69655fd4bf-th7rb\" (UID: \"bc40b194-0220-45aa-8ddb-cd77f5a0cafb\") " pod="openstack/dnsmasq-dns-69655fd4bf-th7rb" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.845332 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc40b194-0220-45aa-8ddb-cd77f5a0cafb-ovsdbserver-sb\") pod \"dnsmasq-dns-69655fd4bf-th7rb\" (UID: \"bc40b194-0220-45aa-8ddb-cd77f5a0cafb\") " pod="openstack/dnsmasq-dns-69655fd4bf-th7rb" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.845367 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vs6ls\" (UniqueName: \"kubernetes.io/projected/bc40b194-0220-45aa-8ddb-cd77f5a0cafb-kube-api-access-vs6ls\") pod \"dnsmasq-dns-69655fd4bf-th7rb\" (UID: \"bc40b194-0220-45aa-8ddb-cd77f5a0cafb\") " pod="openstack/dnsmasq-dns-69655fd4bf-th7rb" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.845408 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e167025d-9e3a-4ae4-bf29-b549a230052d-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"e167025d-9e3a-4ae4-bf29-b549a230052d\") " pod="openstack/manila-api-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.845427 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48mh4\" (UniqueName: \"kubernetes.io/projected/e167025d-9e3a-4ae4-bf29-b549a230052d-kube-api-access-48mh4\") pod \"manila-api-0\" (UID: \"e167025d-9e3a-4ae4-bf29-b549a230052d\") " pod="openstack/manila-api-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.845490 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e167025d-9e3a-4ae4-bf29-b549a230052d-scripts\") pod \"manila-api-0\" (UID: \"e167025d-9e3a-4ae4-bf29-b549a230052d\") " pod="openstack/manila-api-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.845526 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc40b194-0220-45aa-8ddb-cd77f5a0cafb-ovsdbserver-nb\") pod \"dnsmasq-dns-69655fd4bf-th7rb\" (UID: \"bc40b194-0220-45aa-8ddb-cd77f5a0cafb\") " pod="openstack/dnsmasq-dns-69655fd4bf-th7rb" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.845549 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e167025d-9e3a-4ae4-bf29-b549a230052d-logs\") pod \"manila-api-0\" (UID: \"e167025d-9e3a-4ae4-bf29-b549a230052d\") " pod="openstack/manila-api-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.845589 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e167025d-9e3a-4ae4-bf29-b549a230052d-etc-machine-id\") pod \"manila-api-0\" (UID: \"e167025d-9e3a-4ae4-bf29-b549a230052d\") " pod="openstack/manila-api-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.845623 4826 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc40b194-0220-45aa-8ddb-cd77f5a0cafb-config\") pod \"dnsmasq-dns-69655fd4bf-th7rb\" (UID: \"bc40b194-0220-45aa-8ddb-cd77f5a0cafb\") " pod="openstack/dnsmasq-dns-69655fd4bf-th7rb" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.845661 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e167025d-9e3a-4ae4-bf29-b549a230052d-config-data\") pod \"manila-api-0\" (UID: \"e167025d-9e3a-4ae4-bf29-b549a230052d\") " pod="openstack/manila-api-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.845719 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e167025d-9e3a-4ae4-bf29-b549a230052d-config-data-custom\") pod \"manila-api-0\" (UID: \"e167025d-9e3a-4ae4-bf29-b549a230052d\") " pod="openstack/manila-api-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.845777 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/bc40b194-0220-45aa-8ddb-cd77f5a0cafb-openstack-edpm-ipam\") pod \"dnsmasq-dns-69655fd4bf-th7rb\" (UID: \"bc40b194-0220-45aa-8ddb-cd77f5a0cafb\") " pod="openstack/dnsmasq-dns-69655fd4bf-th7rb" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.846545 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/bc40b194-0220-45aa-8ddb-cd77f5a0cafb-openstack-edpm-ipam\") pod \"dnsmasq-dns-69655fd4bf-th7rb\" (UID: \"bc40b194-0220-45aa-8ddb-cd77f5a0cafb\") " pod="openstack/dnsmasq-dns-69655fd4bf-th7rb" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.846677 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc40b194-0220-45aa-8ddb-cd77f5a0cafb-dns-svc\") pod \"dnsmasq-dns-69655fd4bf-th7rb\" (UID: \"bc40b194-0220-45aa-8ddb-cd77f5a0cafb\") " pod="openstack/dnsmasq-dns-69655fd4bf-th7rb" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.847258 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc40b194-0220-45aa-8ddb-cd77f5a0cafb-ovsdbserver-sb\") pod \"dnsmasq-dns-69655fd4bf-th7rb\" (UID: \"bc40b194-0220-45aa-8ddb-cd77f5a0cafb\") " pod="openstack/dnsmasq-dns-69655fd4bf-th7rb" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.847722 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc40b194-0220-45aa-8ddb-cd77f5a0cafb-config\") pod \"dnsmasq-dns-69655fd4bf-th7rb\" (UID: \"bc40b194-0220-45aa-8ddb-cd77f5a0cafb\") " pod="openstack/dnsmasq-dns-69655fd4bf-th7rb" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.848618 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc40b194-0220-45aa-8ddb-cd77f5a0cafb-ovsdbserver-nb\") pod \"dnsmasq-dns-69655fd4bf-th7rb\" (UID: \"bc40b194-0220-45aa-8ddb-cd77f5a0cafb\") " pod="openstack/dnsmasq-dns-69655fd4bf-th7rb" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.865945 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vs6ls\" (UniqueName: 
\"kubernetes.io/projected/bc40b194-0220-45aa-8ddb-cd77f5a0cafb-kube-api-access-vs6ls\") pod \"dnsmasq-dns-69655fd4bf-th7rb\" (UID: \"bc40b194-0220-45aa-8ddb-cd77f5a0cafb\") " pod="openstack/dnsmasq-dns-69655fd4bf-th7rb" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.946670 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69655fd4bf-th7rb" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.947586 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e167025d-9e3a-4ae4-bf29-b549a230052d-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"e167025d-9e3a-4ae4-bf29-b549a230052d\") " pod="openstack/manila-api-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.947624 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48mh4\" (UniqueName: \"kubernetes.io/projected/e167025d-9e3a-4ae4-bf29-b549a230052d-kube-api-access-48mh4\") pod \"manila-api-0\" (UID: \"e167025d-9e3a-4ae4-bf29-b549a230052d\") " pod="openstack/manila-api-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.947693 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e167025d-9e3a-4ae4-bf29-b549a230052d-scripts\") pod \"manila-api-0\" (UID: \"e167025d-9e3a-4ae4-bf29-b549a230052d\") " pod="openstack/manila-api-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.956099 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e167025d-9e3a-4ae4-bf29-b549a230052d-logs\") pod \"manila-api-0\" (UID: \"e167025d-9e3a-4ae4-bf29-b549a230052d\") " pod="openstack/manila-api-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.956178 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e167025d-9e3a-4ae4-bf29-b549a230052d-etc-machine-id\") pod \"manila-api-0\" (UID: \"e167025d-9e3a-4ae4-bf29-b549a230052d\") " pod="openstack/manila-api-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.956287 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e167025d-9e3a-4ae4-bf29-b549a230052d-config-data\") pod \"manila-api-0\" (UID: \"e167025d-9e3a-4ae4-bf29-b549a230052d\") " pod="openstack/manila-api-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.956409 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e167025d-9e3a-4ae4-bf29-b549a230052d-config-data-custom\") pod \"manila-api-0\" (UID: \"e167025d-9e3a-4ae4-bf29-b549a230052d\") " pod="openstack/manila-api-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.958304 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e167025d-9e3a-4ae4-bf29-b549a230052d-etc-machine-id\") pod \"manila-api-0\" (UID: \"e167025d-9e3a-4ae4-bf29-b549a230052d\") " pod="openstack/manila-api-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.958561 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e167025d-9e3a-4ae4-bf29-b549a230052d-logs\") pod \"manila-api-0\" (UID: \"e167025d-9e3a-4ae4-bf29-b549a230052d\") " 
pod="openstack/manila-api-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.958688 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e167025d-9e3a-4ae4-bf29-b549a230052d-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"e167025d-9e3a-4ae4-bf29-b549a230052d\") " pod="openstack/manila-api-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.958944 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e167025d-9e3a-4ae4-bf29-b549a230052d-scripts\") pod \"manila-api-0\" (UID: \"e167025d-9e3a-4ae4-bf29-b549a230052d\") " pod="openstack/manila-api-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.960505 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e167025d-9e3a-4ae4-bf29-b549a230052d-config-data-custom\") pod \"manila-api-0\" (UID: \"e167025d-9e3a-4ae4-bf29-b549a230052d\") " pod="openstack/manila-api-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.961658 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e167025d-9e3a-4ae4-bf29-b549a230052d-config-data\") pod \"manila-api-0\" (UID: \"e167025d-9e3a-4ae4-bf29-b549a230052d\") " pod="openstack/manila-api-0" Jan 31 08:25:03 crc kubenswrapper[4826]: I0131 08:25:03.974040 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48mh4\" (UniqueName: \"kubernetes.io/projected/e167025d-9e3a-4ae4-bf29-b549a230052d-kube-api-access-48mh4\") pod \"manila-api-0\" (UID: \"e167025d-9e3a-4ae4-bf29-b549a230052d\") " pod="openstack/manila-api-0" Jan 31 08:25:04 crc kubenswrapper[4826]: I0131 08:25:04.063999 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Jan 31 08:25:04 crc kubenswrapper[4826]: I0131 08:25:04.260359 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 31 08:25:04 crc kubenswrapper[4826]: I0131 08:25:04.277287 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 31 08:25:04 crc kubenswrapper[4826]: W0131 08:25:04.287545 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod584f29ed_f9cf_49fd_9743_8cca3fe557ce.slice/crio-46856d2862c2b594acb2e333c6bf823c3d4b6bef66fd7bd24bcd7342bf856de8 WatchSource:0}: Error finding container 46856d2862c2b594acb2e333c6bf823c3d4b6bef66fd7bd24bcd7342bf856de8: Status 404 returned error can't find the container with id 46856d2862c2b594acb2e333c6bf823c3d4b6bef66fd7bd24bcd7342bf856de8 Jan 31 08:25:04 crc kubenswrapper[4826]: W0131 08:25:04.287827 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod646e0310_8e32_4f52_9441_e2e2ce66ed75.slice/crio-880bd9c763faeaddc3d5cf60817f6fcaa65e3a19f0fb7a3a2d582f50457bf4c3 WatchSource:0}: Error finding container 880bd9c763faeaddc3d5cf60817f6fcaa65e3a19f0fb7a3a2d582f50457bf4c3: Status 404 returned error can't find the container with id 880bd9c763faeaddc3d5cf60817f6fcaa65e3a19f0fb7a3a2d582f50457bf4c3 Jan 31 08:25:04 crc kubenswrapper[4826]: I0131 08:25:04.561340 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69655fd4bf-th7rb"] Jan 31 08:25:04 crc kubenswrapper[4826]: W0131 08:25:04.565180 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc40b194_0220_45aa_8ddb_cd77f5a0cafb.slice/crio-846e9d78d2ca6fae433537eade07b6fffd59025ade4dc4562a6b512d0df2ddd9 WatchSource:0}: Error finding container 846e9d78d2ca6fae433537eade07b6fffd59025ade4dc4562a6b512d0df2ddd9: Status 404 returned error can't find the container with id 846e9d78d2ca6fae433537eade07b6fffd59025ade4dc4562a6b512d0df2ddd9 Jan 31 08:25:04 crc kubenswrapper[4826]: I0131 08:25:04.833533 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 31 08:25:05 crc kubenswrapper[4826]: I0131 08:25:05.199378 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"e167025d-9e3a-4ae4-bf29-b549a230052d","Type":"ContainerStarted","Data":"926b1a25f43065375ff9c57afa324d65cf42574200efa1701de53a4ce29430e1"} Jan 31 08:25:05 crc kubenswrapper[4826]: I0131 08:25:05.206005 4826 generic.go:334] "Generic (PLEG): container finished" podID="bc40b194-0220-45aa-8ddb-cd77f5a0cafb" containerID="a011e48e2cb2aa1cb1fc1d4910d973d90086e00ac28133b90ffc3a401dced80c" exitCode=0 Jan 31 08:25:05 crc kubenswrapper[4826]: I0131 08:25:05.206088 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69655fd4bf-th7rb" event={"ID":"bc40b194-0220-45aa-8ddb-cd77f5a0cafb","Type":"ContainerDied","Data":"a011e48e2cb2aa1cb1fc1d4910d973d90086e00ac28133b90ffc3a401dced80c"} Jan 31 08:25:05 crc kubenswrapper[4826]: I0131 08:25:05.206125 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69655fd4bf-th7rb" event={"ID":"bc40b194-0220-45aa-8ddb-cd77f5a0cafb","Type":"ContainerStarted","Data":"846e9d78d2ca6fae433537eade07b6fffd59025ade4dc4562a6b512d0df2ddd9"} Jan 31 08:25:05 crc kubenswrapper[4826]: I0131 08:25:05.209389 4826 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"584f29ed-f9cf-49fd-9743-8cca3fe557ce","Type":"ContainerStarted","Data":"46856d2862c2b594acb2e333c6bf823c3d4b6bef66fd7bd24bcd7342bf856de8"} Jan 31 08:25:05 crc kubenswrapper[4826]: I0131 08:25:05.218208 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"646e0310-8e32-4f52-9441-e2e2ce66ed75","Type":"ContainerStarted","Data":"880bd9c763faeaddc3d5cf60817f6fcaa65e3a19f0fb7a3a2d582f50457bf4c3"} Jan 31 08:25:06 crc kubenswrapper[4826]: I0131 08:25:06.238643 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"e167025d-9e3a-4ae4-bf29-b549a230052d","Type":"ContainerStarted","Data":"29994aee654320a155380f4535f82d99f1afb880bf1ec5ea767f3703cb55e21e"} Jan 31 08:25:06 crc kubenswrapper[4826]: I0131 08:25:06.247259 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69655fd4bf-th7rb" event={"ID":"bc40b194-0220-45aa-8ddb-cd77f5a0cafb","Type":"ContainerStarted","Data":"672af1e14a92e12a400cdf2f2d4dc8e009119a3d775e091d685b97e22f962ba1"} Jan 31 08:25:06 crc kubenswrapper[4826]: I0131 08:25:06.248070 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-69655fd4bf-th7rb" Jan 31 08:25:06 crc kubenswrapper[4826]: I0131 08:25:06.318939 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-69655fd4bf-th7rb" podStartSLOduration=3.318920239 podStartE2EDuration="3.318920239s" podCreationTimestamp="2026-01-31 08:25:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 08:25:06.317089316 +0000 UTC m=+2938.170975685" watchObservedRunningTime="2026-01-31 08:25:06.318920239 +0000 UTC m=+2938.172806598" Jan 31 08:25:06 crc kubenswrapper[4826]: I0131 08:25:06.347218 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Jan 31 08:25:07 crc kubenswrapper[4826]: I0131 08:25:07.262507 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"e167025d-9e3a-4ae4-bf29-b549a230052d","Type":"ContainerStarted","Data":"06f0ba92ce2db09fb57ef0ae35a92f7fa6db1645c7b36c7be4afe16338227335"} Jan 31 08:25:07 crc kubenswrapper[4826]: I0131 08:25:07.262829 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="e167025d-9e3a-4ae4-bf29-b549a230052d" containerName="manila-api-log" containerID="cri-o://29994aee654320a155380f4535f82d99f1afb880bf1ec5ea767f3703cb55e21e" gracePeriod=30 Jan 31 08:25:07 crc kubenswrapper[4826]: I0131 08:25:07.263212 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Jan 31 08:25:07 crc kubenswrapper[4826]: I0131 08:25:07.263472 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="e167025d-9e3a-4ae4-bf29-b549a230052d" containerName="manila-api" containerID="cri-o://06f0ba92ce2db09fb57ef0ae35a92f7fa6db1645c7b36c7be4afe16338227335" gracePeriod=30 Jan 31 08:25:07 crc kubenswrapper[4826]: I0131 08:25:07.280041 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"584f29ed-f9cf-49fd-9743-8cca3fe557ce","Type":"ContainerStarted","Data":"b80e53fa342e5a70b50dc5bb1da0b9e612b5c46f6a4e28f92bf3a2dcdb7608a0"} Jan 31 08:25:07 crc kubenswrapper[4826]: I0131 
08:25:07.283024 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"584f29ed-f9cf-49fd-9743-8cca3fe557ce","Type":"ContainerStarted","Data":"59afa85e87c188041efbbaf237069d099f424f7a8fc222ed13e6bf2e2249ab7d"} Jan 31 08:25:07 crc kubenswrapper[4826]: I0131 08:25:07.298683 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=4.298657442 podStartE2EDuration="4.298657442s" podCreationTimestamp="2026-01-31 08:25:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 08:25:07.28324156 +0000 UTC m=+2939.137127919" watchObservedRunningTime="2026-01-31 08:25:07.298657442 +0000 UTC m=+2939.152543801" Jan 31 08:25:07 crc kubenswrapper[4826]: I0131 08:25:07.326410 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=2.989883802 podStartE2EDuration="4.326386028s" podCreationTimestamp="2026-01-31 08:25:03 +0000 UTC" firstStartedPulling="2026-01-31 08:25:04.29275601 +0000 UTC m=+2936.146642369" lastFinishedPulling="2026-01-31 08:25:05.629258236 +0000 UTC m=+2937.483144595" observedRunningTime="2026-01-31 08:25:07.320445327 +0000 UTC m=+2939.174331696" watchObservedRunningTime="2026-01-31 08:25:07.326386028 +0000 UTC m=+2939.180272387" Jan 31 08:25:07 crc kubenswrapper[4826]: E0131 08:25:07.465350 4826 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode167025d_9e3a_4ae4_bf29_b549a230052d.slice/crio-29994aee654320a155380f4535f82d99f1afb880bf1ec5ea767f3703cb55e21e.scope\": RecentStats: unable to find data in memory cache]" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.036636 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.164421 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e167025d-9e3a-4ae4-bf29-b549a230052d-etc-machine-id\") pod \"e167025d-9e3a-4ae4-bf29-b549a230052d\" (UID: \"e167025d-9e3a-4ae4-bf29-b549a230052d\") " Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.164524 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e167025d-9e3a-4ae4-bf29-b549a230052d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e167025d-9e3a-4ae4-bf29-b549a230052d" (UID: "e167025d-9e3a-4ae4-bf29-b549a230052d"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.164716 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e167025d-9e3a-4ae4-bf29-b549a230052d-scripts\") pod \"e167025d-9e3a-4ae4-bf29-b549a230052d\" (UID: \"e167025d-9e3a-4ae4-bf29-b549a230052d\") " Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.165830 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48mh4\" (UniqueName: \"kubernetes.io/projected/e167025d-9e3a-4ae4-bf29-b549a230052d-kube-api-access-48mh4\") pod \"e167025d-9e3a-4ae4-bf29-b549a230052d\" (UID: \"e167025d-9e3a-4ae4-bf29-b549a230052d\") " Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.165910 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e167025d-9e3a-4ae4-bf29-b549a230052d-combined-ca-bundle\") pod \"e167025d-9e3a-4ae4-bf29-b549a230052d\" (UID: \"e167025d-9e3a-4ae4-bf29-b549a230052d\") " Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.166129 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e167025d-9e3a-4ae4-bf29-b549a230052d-config-data\") pod \"e167025d-9e3a-4ae4-bf29-b549a230052d\" (UID: \"e167025d-9e3a-4ae4-bf29-b549a230052d\") " Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.166167 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e167025d-9e3a-4ae4-bf29-b549a230052d-logs\") pod \"e167025d-9e3a-4ae4-bf29-b549a230052d\" (UID: \"e167025d-9e3a-4ae4-bf29-b549a230052d\") " Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.166194 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e167025d-9e3a-4ae4-bf29-b549a230052d-config-data-custom\") pod \"e167025d-9e3a-4ae4-bf29-b549a230052d\" (UID: \"e167025d-9e3a-4ae4-bf29-b549a230052d\") " Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.166547 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e167025d-9e3a-4ae4-bf29-b549a230052d-logs" (OuterVolumeSpecName: "logs") pod "e167025d-9e3a-4ae4-bf29-b549a230052d" (UID: "e167025d-9e3a-4ae4-bf29-b549a230052d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.167290 4826 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e167025d-9e3a-4ae4-bf29-b549a230052d-logs\") on node \"crc\" DevicePath \"\"" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.167318 4826 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e167025d-9e3a-4ae4-bf29-b549a230052d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.172217 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e167025d-9e3a-4ae4-bf29-b549a230052d-scripts" (OuterVolumeSpecName: "scripts") pod "e167025d-9e3a-4ae4-bf29-b549a230052d" (UID: "e167025d-9e3a-4ae4-bf29-b549a230052d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.174604 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e167025d-9e3a-4ae4-bf29-b549a230052d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e167025d-9e3a-4ae4-bf29-b549a230052d" (UID: "e167025d-9e3a-4ae4-bf29-b549a230052d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.175129 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e167025d-9e3a-4ae4-bf29-b549a230052d-kube-api-access-48mh4" (OuterVolumeSpecName: "kube-api-access-48mh4") pod "e167025d-9e3a-4ae4-bf29-b549a230052d" (UID: "e167025d-9e3a-4ae4-bf29-b549a230052d"). InnerVolumeSpecName "kube-api-access-48mh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.201464 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e167025d-9e3a-4ae4-bf29-b549a230052d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e167025d-9e3a-4ae4-bf29-b549a230052d" (UID: "e167025d-9e3a-4ae4-bf29-b549a230052d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.244719 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e167025d-9e3a-4ae4-bf29-b549a230052d-config-data" (OuterVolumeSpecName: "config-data") pod "e167025d-9e3a-4ae4-bf29-b549a230052d" (UID: "e167025d-9e3a-4ae4-bf29-b549a230052d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.269300 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48mh4\" (UniqueName: \"kubernetes.io/projected/e167025d-9e3a-4ae4-bf29-b549a230052d-kube-api-access-48mh4\") on node \"crc\" DevicePath \"\"" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.269337 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e167025d-9e3a-4ae4-bf29-b549a230052d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.269346 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e167025d-9e3a-4ae4-bf29-b549a230052d-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.269357 4826 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e167025d-9e3a-4ae4-bf29-b549a230052d-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.269365 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e167025d-9e3a-4ae4-bf29-b549a230052d-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.291383 4826 generic.go:334] "Generic (PLEG): container finished" podID="e167025d-9e3a-4ae4-bf29-b549a230052d" containerID="06f0ba92ce2db09fb57ef0ae35a92f7fa6db1645c7b36c7be4afe16338227335" exitCode=0 Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.291416 4826 generic.go:334] "Generic (PLEG): container finished" podID="e167025d-9e3a-4ae4-bf29-b549a230052d" containerID="29994aee654320a155380f4535f82d99f1afb880bf1ec5ea767f3703cb55e21e" exitCode=143 Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.292393 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.296222 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"e167025d-9e3a-4ae4-bf29-b549a230052d","Type":"ContainerDied","Data":"06f0ba92ce2db09fb57ef0ae35a92f7fa6db1645c7b36c7be4afe16338227335"} Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.296292 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"e167025d-9e3a-4ae4-bf29-b549a230052d","Type":"ContainerDied","Data":"29994aee654320a155380f4535f82d99f1afb880bf1ec5ea767f3703cb55e21e"} Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.296309 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"e167025d-9e3a-4ae4-bf29-b549a230052d","Type":"ContainerDied","Data":"926b1a25f43065375ff9c57afa324d65cf42574200efa1701de53a4ce29430e1"} Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.296334 4826 scope.go:117] "RemoveContainer" containerID="06f0ba92ce2db09fb57ef0ae35a92f7fa6db1645c7b36c7be4afe16338227335" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.357400 4826 scope.go:117] "RemoveContainer" containerID="29994aee654320a155380f4535f82d99f1afb880bf1ec5ea767f3703cb55e21e" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.374480 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.388723 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-api-0"] Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.399372 4826 scope.go:117] "RemoveContainer" containerID="06f0ba92ce2db09fb57ef0ae35a92f7fa6db1645c7b36c7be4afe16338227335" Jan 31 08:25:08 crc kubenswrapper[4826]: E0131 08:25:08.399855 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06f0ba92ce2db09fb57ef0ae35a92f7fa6db1645c7b36c7be4afe16338227335\": container with ID starting with 06f0ba92ce2db09fb57ef0ae35a92f7fa6db1645c7b36c7be4afe16338227335 not found: ID does not exist" containerID="06f0ba92ce2db09fb57ef0ae35a92f7fa6db1645c7b36c7be4afe16338227335" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.399916 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06f0ba92ce2db09fb57ef0ae35a92f7fa6db1645c7b36c7be4afe16338227335"} err="failed to get container status \"06f0ba92ce2db09fb57ef0ae35a92f7fa6db1645c7b36c7be4afe16338227335\": rpc error: code = NotFound desc = could not find container \"06f0ba92ce2db09fb57ef0ae35a92f7fa6db1645c7b36c7be4afe16338227335\": container with ID starting with 06f0ba92ce2db09fb57ef0ae35a92f7fa6db1645c7b36c7be4afe16338227335 not found: ID does not exist" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.399954 4826 scope.go:117] "RemoveContainer" containerID="29994aee654320a155380f4535f82d99f1afb880bf1ec5ea767f3703cb55e21e" Jan 31 08:25:08 crc kubenswrapper[4826]: E0131 08:25:08.400214 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29994aee654320a155380f4535f82d99f1afb880bf1ec5ea767f3703cb55e21e\": container with ID starting with 29994aee654320a155380f4535f82d99f1afb880bf1ec5ea767f3703cb55e21e not found: ID does not exist" containerID="29994aee654320a155380f4535f82d99f1afb880bf1ec5ea767f3703cb55e21e" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.400264 4826 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29994aee654320a155380f4535f82d99f1afb880bf1ec5ea767f3703cb55e21e"} err="failed to get container status \"29994aee654320a155380f4535f82d99f1afb880bf1ec5ea767f3703cb55e21e\": rpc error: code = NotFound desc = could not find container \"29994aee654320a155380f4535f82d99f1afb880bf1ec5ea767f3703cb55e21e\": container with ID starting with 29994aee654320a155380f4535f82d99f1afb880bf1ec5ea767f3703cb55e21e not found: ID does not exist" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.400281 4826 scope.go:117] "RemoveContainer" containerID="06f0ba92ce2db09fb57ef0ae35a92f7fa6db1645c7b36c7be4afe16338227335" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.400503 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06f0ba92ce2db09fb57ef0ae35a92f7fa6db1645c7b36c7be4afe16338227335"} err="failed to get container status \"06f0ba92ce2db09fb57ef0ae35a92f7fa6db1645c7b36c7be4afe16338227335\": rpc error: code = NotFound desc = could not find container \"06f0ba92ce2db09fb57ef0ae35a92f7fa6db1645c7b36c7be4afe16338227335\": container with ID starting with 06f0ba92ce2db09fb57ef0ae35a92f7fa6db1645c7b36c7be4afe16338227335 not found: ID does not exist" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.400531 4826 scope.go:117] "RemoveContainer" containerID="29994aee654320a155380f4535f82d99f1afb880bf1ec5ea767f3703cb55e21e" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.401067 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29994aee654320a155380f4535f82d99f1afb880bf1ec5ea767f3703cb55e21e"} err="failed to get container status \"29994aee654320a155380f4535f82d99f1afb880bf1ec5ea767f3703cb55e21e\": rpc error: code = NotFound desc = could not find container \"29994aee654320a155380f4535f82d99f1afb880bf1ec5ea767f3703cb55e21e\": container with ID starting with 29994aee654320a155380f4535f82d99f1afb880bf1ec5ea767f3703cb55e21e not found: ID does not exist" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.405712 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Jan 31 08:25:08 crc kubenswrapper[4826]: E0131 08:25:08.406228 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e167025d-9e3a-4ae4-bf29-b549a230052d" containerName="manila-api-log" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.406254 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="e167025d-9e3a-4ae4-bf29-b549a230052d" containerName="manila-api-log" Jan 31 08:25:08 crc kubenswrapper[4826]: E0131 08:25:08.406276 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e167025d-9e3a-4ae4-bf29-b549a230052d" containerName="manila-api" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.406284 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="e167025d-9e3a-4ae4-bf29-b549a230052d" containerName="manila-api" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.406495 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="e167025d-9e3a-4ae4-bf29-b549a230052d" containerName="manila-api" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.406529 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="e167025d-9e3a-4ae4-bf29-b549a230052d" containerName="manila-api-log" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.407927 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.420920 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-internal-svc" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.421301 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.421510 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-public-svc" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.426227 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.473448 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b9ee77c-ee1a-48cf-973f-21437b0df988-internal-tls-certs\") pod \"manila-api-0\" (UID: \"0b9ee77c-ee1a-48cf-973f-21437b0df988\") " pod="openstack/manila-api-0" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.473512 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b9ee77c-ee1a-48cf-973f-21437b0df988-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"0b9ee77c-ee1a-48cf-973f-21437b0df988\") " pod="openstack/manila-api-0" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.473548 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b9ee77c-ee1a-48cf-973f-21437b0df988-logs\") pod \"manila-api-0\" (UID: \"0b9ee77c-ee1a-48cf-973f-21437b0df988\") " pod="openstack/manila-api-0" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.473695 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b9ee77c-ee1a-48cf-973f-21437b0df988-config-data\") pod \"manila-api-0\" (UID: \"0b9ee77c-ee1a-48cf-973f-21437b0df988\") " pod="openstack/manila-api-0" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.473728 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b9ee77c-ee1a-48cf-973f-21437b0df988-scripts\") pod \"manila-api-0\" (UID: \"0b9ee77c-ee1a-48cf-973f-21437b0df988\") " pod="openstack/manila-api-0" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.473845 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0b9ee77c-ee1a-48cf-973f-21437b0df988-etc-machine-id\") pod \"manila-api-0\" (UID: \"0b9ee77c-ee1a-48cf-973f-21437b0df988\") " pod="openstack/manila-api-0" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.473898 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b9ee77c-ee1a-48cf-973f-21437b0df988-public-tls-certs\") pod \"manila-api-0\" (UID: \"0b9ee77c-ee1a-48cf-973f-21437b0df988\") " pod="openstack/manila-api-0" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.473948 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/0b9ee77c-ee1a-48cf-973f-21437b0df988-config-data-custom\") pod \"manila-api-0\" (UID: \"0b9ee77c-ee1a-48cf-973f-21437b0df988\") " pod="openstack/manila-api-0" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.474041 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v66z\" (UniqueName: \"kubernetes.io/projected/0b9ee77c-ee1a-48cf-973f-21437b0df988-kube-api-access-8v66z\") pod \"manila-api-0\" (UID: \"0b9ee77c-ee1a-48cf-973f-21437b0df988\") " pod="openstack/manila-api-0" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.576095 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b9ee77c-ee1a-48cf-973f-21437b0df988-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"0b9ee77c-ee1a-48cf-973f-21437b0df988\") " pod="openstack/manila-api-0" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.576146 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b9ee77c-ee1a-48cf-973f-21437b0df988-logs\") pod \"manila-api-0\" (UID: \"0b9ee77c-ee1a-48cf-973f-21437b0df988\") " pod="openstack/manila-api-0" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.576256 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b9ee77c-ee1a-48cf-973f-21437b0df988-config-data\") pod \"manila-api-0\" (UID: \"0b9ee77c-ee1a-48cf-973f-21437b0df988\") " pod="openstack/manila-api-0" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.576294 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b9ee77c-ee1a-48cf-973f-21437b0df988-scripts\") pod \"manila-api-0\" (UID: \"0b9ee77c-ee1a-48cf-973f-21437b0df988\") " pod="openstack/manila-api-0" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.576361 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0b9ee77c-ee1a-48cf-973f-21437b0df988-etc-machine-id\") pod \"manila-api-0\" (UID: \"0b9ee77c-ee1a-48cf-973f-21437b0df988\") " pod="openstack/manila-api-0" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.576383 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b9ee77c-ee1a-48cf-973f-21437b0df988-public-tls-certs\") pod \"manila-api-0\" (UID: \"0b9ee77c-ee1a-48cf-973f-21437b0df988\") " pod="openstack/manila-api-0" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.576406 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0b9ee77c-ee1a-48cf-973f-21437b0df988-config-data-custom\") pod \"manila-api-0\" (UID: \"0b9ee77c-ee1a-48cf-973f-21437b0df988\") " pod="openstack/manila-api-0" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.576439 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8v66z\" (UniqueName: \"kubernetes.io/projected/0b9ee77c-ee1a-48cf-973f-21437b0df988-kube-api-access-8v66z\") pod \"manila-api-0\" (UID: \"0b9ee77c-ee1a-48cf-973f-21437b0df988\") " pod="openstack/manila-api-0" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.576473 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b9ee77c-ee1a-48cf-973f-21437b0df988-internal-tls-certs\") pod \"manila-api-0\" (UID: \"0b9ee77c-ee1a-48cf-973f-21437b0df988\") " pod="openstack/manila-api-0" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.576750 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b9ee77c-ee1a-48cf-973f-21437b0df988-logs\") pod \"manila-api-0\" (UID: \"0b9ee77c-ee1a-48cf-973f-21437b0df988\") " pod="openstack/manila-api-0" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.577011 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0b9ee77c-ee1a-48cf-973f-21437b0df988-etc-machine-id\") pod \"manila-api-0\" (UID: \"0b9ee77c-ee1a-48cf-973f-21437b0df988\") " pod="openstack/manila-api-0" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.585023 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b9ee77c-ee1a-48cf-973f-21437b0df988-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"0b9ee77c-ee1a-48cf-973f-21437b0df988\") " pod="openstack/manila-api-0" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.585193 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b9ee77c-ee1a-48cf-973f-21437b0df988-config-data\") pod \"manila-api-0\" (UID: \"0b9ee77c-ee1a-48cf-973f-21437b0df988\") " pod="openstack/manila-api-0" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.585466 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b9ee77c-ee1a-48cf-973f-21437b0df988-internal-tls-certs\") pod \"manila-api-0\" (UID: \"0b9ee77c-ee1a-48cf-973f-21437b0df988\") " pod="openstack/manila-api-0" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.585549 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b9ee77c-ee1a-48cf-973f-21437b0df988-public-tls-certs\") pod \"manila-api-0\" (UID: \"0b9ee77c-ee1a-48cf-973f-21437b0df988\") " pod="openstack/manila-api-0" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.586072 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b9ee77c-ee1a-48cf-973f-21437b0df988-scripts\") pod \"manila-api-0\" (UID: \"0b9ee77c-ee1a-48cf-973f-21437b0df988\") " pod="openstack/manila-api-0" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.587642 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0b9ee77c-ee1a-48cf-973f-21437b0df988-config-data-custom\") pod \"manila-api-0\" (UID: \"0b9ee77c-ee1a-48cf-973f-21437b0df988\") " pod="openstack/manila-api-0" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.597560 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8v66z\" (UniqueName: \"kubernetes.io/projected/0b9ee77c-ee1a-48cf-973f-21437b0df988-kube-api-access-8v66z\") pod \"manila-api-0\" (UID: \"0b9ee77c-ee1a-48cf-973f-21437b0df988\") " pod="openstack/manila-api-0" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.757403 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Jan 31 08:25:08 crc kubenswrapper[4826]: I0131 08:25:08.851167 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e167025d-9e3a-4ae4-bf29-b549a230052d" path="/var/lib/kubelet/pods/e167025d-9e3a-4ae4-bf29-b549a230052d/volumes" Jan 31 08:25:09 crc kubenswrapper[4826]: I0131 08:25:09.412821 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 08:25:09 crc kubenswrapper[4826]: I0131 08:25:09.414058 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="10c57bd7-46e8-437c-9e84-e77bcdf5561a" containerName="ceilometer-central-agent" containerID="cri-o://04a68fc804cfe2f5ebc5ae7a687cc835d7be7fca1f21bdae3ef4e6295bb0c590" gracePeriod=30 Jan 31 08:25:09 crc kubenswrapper[4826]: I0131 08:25:09.414100 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="10c57bd7-46e8-437c-9e84-e77bcdf5561a" containerName="proxy-httpd" containerID="cri-o://06cc6e78be576b99e8422a70db7c7aecebe23257abf3be3d2b3a03b0dad0cfaa" gracePeriod=30 Jan 31 08:25:09 crc kubenswrapper[4826]: I0131 08:25:09.414175 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="10c57bd7-46e8-437c-9e84-e77bcdf5561a" containerName="ceilometer-notification-agent" containerID="cri-o://ed0ca9100764a82c0b742628b7a28d36eccbc8ec9e8c60e380574ab46bd8b266" gracePeriod=30 Jan 31 08:25:09 crc kubenswrapper[4826]: I0131 08:25:09.414208 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="10c57bd7-46e8-437c-9e84-e77bcdf5561a" containerName="sg-core" containerID="cri-o://732a5f640290317f2bfe781c47445870be3dd7bd088e084530d5376769bfcd5d" gracePeriod=30 Jan 31 08:25:09 crc kubenswrapper[4826]: I0131 08:25:09.431939 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 31 08:25:10 crc kubenswrapper[4826]: I0131 08:25:10.318776 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"0b9ee77c-ee1a-48cf-973f-21437b0df988","Type":"ContainerStarted","Data":"9756133e4dc0f1e6942dcc061465a9a9c9ab62b453e32dc4439db0258b97f4b9"} Jan 31 08:25:10 crc kubenswrapper[4826]: I0131 08:25:10.319434 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"0b9ee77c-ee1a-48cf-973f-21437b0df988","Type":"ContainerStarted","Data":"f01c362a1e0d9b5b242a90fdcc410f57c56eb5f6d93e814cd238bbac230a69fe"} Jan 31 08:25:10 crc kubenswrapper[4826]: I0131 08:25:10.326809 4826 generic.go:334] "Generic (PLEG): container finished" podID="10c57bd7-46e8-437c-9e84-e77bcdf5561a" containerID="06cc6e78be576b99e8422a70db7c7aecebe23257abf3be3d2b3a03b0dad0cfaa" exitCode=0 Jan 31 08:25:10 crc kubenswrapper[4826]: I0131 08:25:10.326839 4826 generic.go:334] "Generic (PLEG): container finished" podID="10c57bd7-46e8-437c-9e84-e77bcdf5561a" containerID="732a5f640290317f2bfe781c47445870be3dd7bd088e084530d5376769bfcd5d" exitCode=2 Jan 31 08:25:10 crc kubenswrapper[4826]: I0131 08:25:10.326846 4826 generic.go:334] "Generic (PLEG): container finished" podID="10c57bd7-46e8-437c-9e84-e77bcdf5561a" containerID="04a68fc804cfe2f5ebc5ae7a687cc835d7be7fca1f21bdae3ef4e6295bb0c590" exitCode=0 Jan 31 08:25:10 crc kubenswrapper[4826]: I0131 08:25:10.326867 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"10c57bd7-46e8-437c-9e84-e77bcdf5561a","Type":"ContainerDied","Data":"06cc6e78be576b99e8422a70db7c7aecebe23257abf3be3d2b3a03b0dad0cfaa"} Jan 31 08:25:10 crc kubenswrapper[4826]: I0131 08:25:10.326895 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"10c57bd7-46e8-437c-9e84-e77bcdf5561a","Type":"ContainerDied","Data":"732a5f640290317f2bfe781c47445870be3dd7bd088e084530d5376769bfcd5d"} Jan 31 08:25:10 crc kubenswrapper[4826]: I0131 08:25:10.326905 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"10c57bd7-46e8-437c-9e84-e77bcdf5561a","Type":"ContainerDied","Data":"04a68fc804cfe2f5ebc5ae7a687cc835d7be7fca1f21bdae3ef4e6295bb0c590"} Jan 31 08:25:10 crc kubenswrapper[4826]: I0131 08:25:10.513888 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="10c57bd7-46e8-437c-9e84-e77bcdf5561a" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.190:3000/\": dial tcp 10.217.0.190:3000: connect: connection refused" Jan 31 08:25:11 crc kubenswrapper[4826]: I0131 08:25:11.344784 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"0b9ee77c-ee1a-48cf-973f-21437b0df988","Type":"ContainerStarted","Data":"a8b09be9004c2533488a532fea1866c78d513356a107120321fa5858f4722e30"} Jan 31 08:25:11 crc kubenswrapper[4826]: I0131 08:25:11.347063 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Jan 31 08:25:11 crc kubenswrapper[4826]: I0131 08:25:11.371059 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=3.371030995 podStartE2EDuration="3.371030995s" podCreationTimestamp="2026-01-31 08:25:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 08:25:11.364288672 +0000 UTC m=+2943.218175031" watchObservedRunningTime="2026-01-31 08:25:11.371030995 +0000 UTC m=+2943.224917354" Jan 31 08:25:13 crc kubenswrapper[4826]: I0131 08:25:13.811906 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Jan 31 08:25:13 crc kubenswrapper[4826]: I0131 08:25:13.949248 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-69655fd4bf-th7rb" Jan 31 08:25:14 crc kubenswrapper[4826]: I0131 08:25:14.017080 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-gx74t"] Jan 31 08:25:14 crc kubenswrapper[4826]: I0131 08:25:14.017343 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-fbc59fbb7-gx74t" podUID="a5c6247d-41c2-41c0-9cf4-17098b60970a" containerName="dnsmasq-dns" containerID="cri-o://a6499043cdb97e37d2148548f8ff6b35d301c622a028496dbb35f78191041d8a" gracePeriod=10 Jan 31 08:25:14 crc kubenswrapper[4826]: I0131 08:25:14.373829 4826 generic.go:334] "Generic (PLEG): container finished" podID="a5c6247d-41c2-41c0-9cf4-17098b60970a" containerID="a6499043cdb97e37d2148548f8ff6b35d301c622a028496dbb35f78191041d8a" exitCode=0 Jan 31 08:25:14 crc kubenswrapper[4826]: I0131 08:25:14.373879 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-gx74t" event={"ID":"a5c6247d-41c2-41c0-9cf4-17098b60970a","Type":"ContainerDied","Data":"a6499043cdb97e37d2148548f8ff6b35d301c622a028496dbb35f78191041d8a"} Jan 31 
08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.165022 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-gx74t" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.174553 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.220773 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a5c6247d-41c2-41c0-9cf4-17098b60970a-ovsdbserver-sb\") pod \"a5c6247d-41c2-41c0-9cf4-17098b60970a\" (UID: \"a5c6247d-41c2-41c0-9cf4-17098b60970a\") " Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.220887 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5c6247d-41c2-41c0-9cf4-17098b60970a-config\") pod \"a5c6247d-41c2-41c0-9cf4-17098b60970a\" (UID: \"a5c6247d-41c2-41c0-9cf4-17098b60970a\") " Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.220910 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a5c6247d-41c2-41c0-9cf4-17098b60970a-openstack-edpm-ipam\") pod \"a5c6247d-41c2-41c0-9cf4-17098b60970a\" (UID: \"a5c6247d-41c2-41c0-9cf4-17098b60970a\") " Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.221087 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ht7xc\" (UniqueName: \"kubernetes.io/projected/a5c6247d-41c2-41c0-9cf4-17098b60970a-kube-api-access-ht7xc\") pod \"a5c6247d-41c2-41c0-9cf4-17098b60970a\" (UID: \"a5c6247d-41c2-41c0-9cf4-17098b60970a\") " Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.221118 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5c6247d-41c2-41c0-9cf4-17098b60970a-dns-svc\") pod \"a5c6247d-41c2-41c0-9cf4-17098b60970a\" (UID: \"a5c6247d-41c2-41c0-9cf4-17098b60970a\") " Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.221132 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a5c6247d-41c2-41c0-9cf4-17098b60970a-ovsdbserver-nb\") pod \"a5c6247d-41c2-41c0-9cf4-17098b60970a\" (UID: \"a5c6247d-41c2-41c0-9cf4-17098b60970a\") " Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.244027 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5c6247d-41c2-41c0-9cf4-17098b60970a-kube-api-access-ht7xc" (OuterVolumeSpecName: "kube-api-access-ht7xc") pod "a5c6247d-41c2-41c0-9cf4-17098b60970a" (UID: "a5c6247d-41c2-41c0-9cf4-17098b60970a"). InnerVolumeSpecName "kube-api-access-ht7xc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.283568 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5c6247d-41c2-41c0-9cf4-17098b60970a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a5c6247d-41c2-41c0-9cf4-17098b60970a" (UID: "a5c6247d-41c2-41c0-9cf4-17098b60970a"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.298541 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5c6247d-41c2-41c0-9cf4-17098b60970a-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "a5c6247d-41c2-41c0-9cf4-17098b60970a" (UID: "a5c6247d-41c2-41c0-9cf4-17098b60970a"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.311409 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5c6247d-41c2-41c0-9cf4-17098b60970a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a5c6247d-41c2-41c0-9cf4-17098b60970a" (UID: "a5c6247d-41c2-41c0-9cf4-17098b60970a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.323113 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/10c57bd7-46e8-437c-9e84-e77bcdf5561a-run-httpd\") pod \"10c57bd7-46e8-437c-9e84-e77bcdf5561a\" (UID: \"10c57bd7-46e8-437c-9e84-e77bcdf5561a\") " Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.323171 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rf6ts\" (UniqueName: \"kubernetes.io/projected/10c57bd7-46e8-437c-9e84-e77bcdf5561a-kube-api-access-rf6ts\") pod \"10c57bd7-46e8-437c-9e84-e77bcdf5561a\" (UID: \"10c57bd7-46e8-437c-9e84-e77bcdf5561a\") " Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.323264 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/10c57bd7-46e8-437c-9e84-e77bcdf5561a-ceilometer-tls-certs\") pod \"10c57bd7-46e8-437c-9e84-e77bcdf5561a\" (UID: \"10c57bd7-46e8-437c-9e84-e77bcdf5561a\") " Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.323309 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10c57bd7-46e8-437c-9e84-e77bcdf5561a-scripts\") pod \"10c57bd7-46e8-437c-9e84-e77bcdf5561a\" (UID: \"10c57bd7-46e8-437c-9e84-e77bcdf5561a\") " Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.323493 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/10c57bd7-46e8-437c-9e84-e77bcdf5561a-sg-core-conf-yaml\") pod \"10c57bd7-46e8-437c-9e84-e77bcdf5561a\" (UID: \"10c57bd7-46e8-437c-9e84-e77bcdf5561a\") " Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.323611 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/10c57bd7-46e8-437c-9e84-e77bcdf5561a-log-httpd\") pod \"10c57bd7-46e8-437c-9e84-e77bcdf5561a\" (UID: \"10c57bd7-46e8-437c-9e84-e77bcdf5561a\") " Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.323671 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10c57bd7-46e8-437c-9e84-e77bcdf5561a-combined-ca-bundle\") pod \"10c57bd7-46e8-437c-9e84-e77bcdf5561a\" (UID: \"10c57bd7-46e8-437c-9e84-e77bcdf5561a\") " Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.323678 4826 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/10c57bd7-46e8-437c-9e84-e77bcdf5561a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "10c57bd7-46e8-437c-9e84-e77bcdf5561a" (UID: "10c57bd7-46e8-437c-9e84-e77bcdf5561a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.323690 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10c57bd7-46e8-437c-9e84-e77bcdf5561a-config-data\") pod \"10c57bd7-46e8-437c-9e84-e77bcdf5561a\" (UID: \"10c57bd7-46e8-437c-9e84-e77bcdf5561a\") " Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.324075 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10c57bd7-46e8-437c-9e84-e77bcdf5561a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "10c57bd7-46e8-437c-9e84-e77bcdf5561a" (UID: "10c57bd7-46e8-437c-9e84-e77bcdf5561a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.324756 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a5c6247d-41c2-41c0-9cf4-17098b60970a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.324771 4826 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/10c57bd7-46e8-437c-9e84-e77bcdf5561a-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.324781 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a5c6247d-41c2-41c0-9cf4-17098b60970a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.324790 4826 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a5c6247d-41c2-41c0-9cf4-17098b60970a-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.324799 4826 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/10c57bd7-46e8-437c-9e84-e77bcdf5561a-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.324807 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ht7xc\" (UniqueName: \"kubernetes.io/projected/a5c6247d-41c2-41c0-9cf4-17098b60970a-kube-api-access-ht7xc\") on node \"crc\" DevicePath \"\"" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.327526 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5c6247d-41c2-41c0-9cf4-17098b60970a-config" (OuterVolumeSpecName: "config") pod "a5c6247d-41c2-41c0-9cf4-17098b60970a" (UID: "a5c6247d-41c2-41c0-9cf4-17098b60970a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.328402 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10c57bd7-46e8-437c-9e84-e77bcdf5561a-kube-api-access-rf6ts" (OuterVolumeSpecName: "kube-api-access-rf6ts") pod "10c57bd7-46e8-437c-9e84-e77bcdf5561a" (UID: "10c57bd7-46e8-437c-9e84-e77bcdf5561a"). InnerVolumeSpecName "kube-api-access-rf6ts". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.329188 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10c57bd7-46e8-437c-9e84-e77bcdf5561a-scripts" (OuterVolumeSpecName: "scripts") pod "10c57bd7-46e8-437c-9e84-e77bcdf5561a" (UID: "10c57bd7-46e8-437c-9e84-e77bcdf5561a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.331338 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5c6247d-41c2-41c0-9cf4-17098b60970a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a5c6247d-41c2-41c0-9cf4-17098b60970a" (UID: "a5c6247d-41c2-41c0-9cf4-17098b60970a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.369405 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10c57bd7-46e8-437c-9e84-e77bcdf5561a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "10c57bd7-46e8-437c-9e84-e77bcdf5561a" (UID: "10c57bd7-46e8-437c-9e84-e77bcdf5561a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.386061 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10c57bd7-46e8-437c-9e84-e77bcdf5561a-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "10c57bd7-46e8-437c-9e84-e77bcdf5561a" (UID: "10c57bd7-46e8-437c-9e84-e77bcdf5561a"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.400667 4826 generic.go:334] "Generic (PLEG): container finished" podID="10c57bd7-46e8-437c-9e84-e77bcdf5561a" containerID="ed0ca9100764a82c0b742628b7a28d36eccbc8ec9e8c60e380574ab46bd8b266" exitCode=0 Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.400743 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"10c57bd7-46e8-437c-9e84-e77bcdf5561a","Type":"ContainerDied","Data":"ed0ca9100764a82c0b742628b7a28d36eccbc8ec9e8c60e380574ab46bd8b266"} Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.400773 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"10c57bd7-46e8-437c-9e84-e77bcdf5561a","Type":"ContainerDied","Data":"50948bf523f3d51e8d2e716dd6ce3e63f35026a8e8084e6b128c01f5627b17a5"} Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.400792 4826 scope.go:117] "RemoveContainer" containerID="06cc6e78be576b99e8422a70db7c7aecebe23257abf3be3d2b3a03b0dad0cfaa" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.401050 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.414162 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-gx74t" event={"ID":"a5c6247d-41c2-41c0-9cf4-17098b60970a","Type":"ContainerDied","Data":"11ff34db31af395cb78fc920aa9587f4376969fa2d87776cfa789e9908afc2eb"} Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.414224 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-gx74t" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.430544 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5c6247d-41c2-41c0-9cf4-17098b60970a-config\") on node \"crc\" DevicePath \"\"" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.430583 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rf6ts\" (UniqueName: \"kubernetes.io/projected/10c57bd7-46e8-437c-9e84-e77bcdf5561a-kube-api-access-rf6ts\") on node \"crc\" DevicePath \"\"" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.430597 4826 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/10c57bd7-46e8-437c-9e84-e77bcdf5561a-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.430610 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10c57bd7-46e8-437c-9e84-e77bcdf5561a-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.430620 4826 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/10c57bd7-46e8-437c-9e84-e77bcdf5561a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.430632 4826 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5c6247d-41c2-41c0-9cf4-17098b60970a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.446003 4826 scope.go:117] "RemoveContainer" containerID="732a5f640290317f2bfe781c47445870be3dd7bd088e084530d5376769bfcd5d" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.458321 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10c57bd7-46e8-437c-9e84-e77bcdf5561a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "10c57bd7-46e8-437c-9e84-e77bcdf5561a" (UID: "10c57bd7-46e8-437c-9e84-e77bcdf5561a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.463355 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-gx74t"] Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.469846 4826 scope.go:117] "RemoveContainer" containerID="ed0ca9100764a82c0b742628b7a28d36eccbc8ec9e8c60e380574ab46bd8b266" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.473146 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10c57bd7-46e8-437c-9e84-e77bcdf5561a-config-data" (OuterVolumeSpecName: "config-data") pod "10c57bd7-46e8-437c-9e84-e77bcdf5561a" (UID: "10c57bd7-46e8-437c-9e84-e77bcdf5561a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.477522 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-gx74t"] Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.489750 4826 scope.go:117] "RemoveContainer" containerID="04a68fc804cfe2f5ebc5ae7a687cc835d7be7fca1f21bdae3ef4e6295bb0c590" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.509782 4826 scope.go:117] "RemoveContainer" containerID="06cc6e78be576b99e8422a70db7c7aecebe23257abf3be3d2b3a03b0dad0cfaa" Jan 31 08:25:15 crc kubenswrapper[4826]: E0131 08:25:15.510475 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06cc6e78be576b99e8422a70db7c7aecebe23257abf3be3d2b3a03b0dad0cfaa\": container with ID starting with 06cc6e78be576b99e8422a70db7c7aecebe23257abf3be3d2b3a03b0dad0cfaa not found: ID does not exist" containerID="06cc6e78be576b99e8422a70db7c7aecebe23257abf3be3d2b3a03b0dad0cfaa" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.510521 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06cc6e78be576b99e8422a70db7c7aecebe23257abf3be3d2b3a03b0dad0cfaa"} err="failed to get container status \"06cc6e78be576b99e8422a70db7c7aecebe23257abf3be3d2b3a03b0dad0cfaa\": rpc error: code = NotFound desc = could not find container \"06cc6e78be576b99e8422a70db7c7aecebe23257abf3be3d2b3a03b0dad0cfaa\": container with ID starting with 06cc6e78be576b99e8422a70db7c7aecebe23257abf3be3d2b3a03b0dad0cfaa not found: ID does not exist" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.510547 4826 scope.go:117] "RemoveContainer" containerID="732a5f640290317f2bfe781c47445870be3dd7bd088e084530d5376769bfcd5d" Jan 31 08:25:15 crc kubenswrapper[4826]: E0131 08:25:15.510906 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"732a5f640290317f2bfe781c47445870be3dd7bd088e084530d5376769bfcd5d\": container with ID starting with 732a5f640290317f2bfe781c47445870be3dd7bd088e084530d5376769bfcd5d not found: ID does not exist" containerID="732a5f640290317f2bfe781c47445870be3dd7bd088e084530d5376769bfcd5d" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.510937 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"732a5f640290317f2bfe781c47445870be3dd7bd088e084530d5376769bfcd5d"} err="failed to get container status \"732a5f640290317f2bfe781c47445870be3dd7bd088e084530d5376769bfcd5d\": rpc error: code = NotFound desc = could not find container \"732a5f640290317f2bfe781c47445870be3dd7bd088e084530d5376769bfcd5d\": container with ID starting with 732a5f640290317f2bfe781c47445870be3dd7bd088e084530d5376769bfcd5d not found: ID does not exist" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.510954 4826 scope.go:117] "RemoveContainer" containerID="ed0ca9100764a82c0b742628b7a28d36eccbc8ec9e8c60e380574ab46bd8b266" Jan 31 08:25:15 crc kubenswrapper[4826]: E0131 08:25:15.511385 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed0ca9100764a82c0b742628b7a28d36eccbc8ec9e8c60e380574ab46bd8b266\": container with ID starting with ed0ca9100764a82c0b742628b7a28d36eccbc8ec9e8c60e380574ab46bd8b266 not found: ID does not exist" containerID="ed0ca9100764a82c0b742628b7a28d36eccbc8ec9e8c60e380574ab46bd8b266" Jan 31 08:25:15 crc kubenswrapper[4826]: 
I0131 08:25:15.511422 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed0ca9100764a82c0b742628b7a28d36eccbc8ec9e8c60e380574ab46bd8b266"} err="failed to get container status \"ed0ca9100764a82c0b742628b7a28d36eccbc8ec9e8c60e380574ab46bd8b266\": rpc error: code = NotFound desc = could not find container \"ed0ca9100764a82c0b742628b7a28d36eccbc8ec9e8c60e380574ab46bd8b266\": container with ID starting with ed0ca9100764a82c0b742628b7a28d36eccbc8ec9e8c60e380574ab46bd8b266 not found: ID does not exist" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.511445 4826 scope.go:117] "RemoveContainer" containerID="04a68fc804cfe2f5ebc5ae7a687cc835d7be7fca1f21bdae3ef4e6295bb0c590" Jan 31 08:25:15 crc kubenswrapper[4826]: E0131 08:25:15.511672 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04a68fc804cfe2f5ebc5ae7a687cc835d7be7fca1f21bdae3ef4e6295bb0c590\": container with ID starting with 04a68fc804cfe2f5ebc5ae7a687cc835d7be7fca1f21bdae3ef4e6295bb0c590 not found: ID does not exist" containerID="04a68fc804cfe2f5ebc5ae7a687cc835d7be7fca1f21bdae3ef4e6295bb0c590" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.511705 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04a68fc804cfe2f5ebc5ae7a687cc835d7be7fca1f21bdae3ef4e6295bb0c590"} err="failed to get container status \"04a68fc804cfe2f5ebc5ae7a687cc835d7be7fca1f21bdae3ef4e6295bb0c590\": rpc error: code = NotFound desc = could not find container \"04a68fc804cfe2f5ebc5ae7a687cc835d7be7fca1f21bdae3ef4e6295bb0c590\": container with ID starting with 04a68fc804cfe2f5ebc5ae7a687cc835d7be7fca1f21bdae3ef4e6295bb0c590 not found: ID does not exist" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.511723 4826 scope.go:117] "RemoveContainer" containerID="a6499043cdb97e37d2148548f8ff6b35d301c622a028496dbb35f78191041d8a" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.531911 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10c57bd7-46e8-437c-9e84-e77bcdf5561a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.531940 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10c57bd7-46e8-437c-9e84-e77bcdf5561a-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.539837 4826 scope.go:117] "RemoveContainer" containerID="f0420167db84f8f5dcd0d44e64e7bde51528e638bd33e15a0540f081c0e4d385" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.751094 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.770089 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.780561 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 08:25:15 crc kubenswrapper[4826]: E0131 08:25:15.781102 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10c57bd7-46e8-437c-9e84-e77bcdf5561a" containerName="proxy-httpd" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.781129 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="10c57bd7-46e8-437c-9e84-e77bcdf5561a" containerName="proxy-httpd" Jan 31 08:25:15 crc kubenswrapper[4826]: 
E0131 08:25:15.781149 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5c6247d-41c2-41c0-9cf4-17098b60970a" containerName="dnsmasq-dns" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.781160 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5c6247d-41c2-41c0-9cf4-17098b60970a" containerName="dnsmasq-dns" Jan 31 08:25:15 crc kubenswrapper[4826]: E0131 08:25:15.781174 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10c57bd7-46e8-437c-9e84-e77bcdf5561a" containerName="ceilometer-notification-agent" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.781183 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="10c57bd7-46e8-437c-9e84-e77bcdf5561a" containerName="ceilometer-notification-agent" Jan 31 08:25:15 crc kubenswrapper[4826]: E0131 08:25:15.781202 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10c57bd7-46e8-437c-9e84-e77bcdf5561a" containerName="sg-core" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.781209 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="10c57bd7-46e8-437c-9e84-e77bcdf5561a" containerName="sg-core" Jan 31 08:25:15 crc kubenswrapper[4826]: E0131 08:25:15.781237 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10c57bd7-46e8-437c-9e84-e77bcdf5561a" containerName="ceilometer-central-agent" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.781244 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="10c57bd7-46e8-437c-9e84-e77bcdf5561a" containerName="ceilometer-central-agent" Jan 31 08:25:15 crc kubenswrapper[4826]: E0131 08:25:15.781253 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5c6247d-41c2-41c0-9cf4-17098b60970a" containerName="init" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.781261 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5c6247d-41c2-41c0-9cf4-17098b60970a" containerName="init" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.781474 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="10c57bd7-46e8-437c-9e84-e77bcdf5561a" containerName="ceilometer-notification-agent" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.781495 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="10c57bd7-46e8-437c-9e84-e77bcdf5561a" containerName="sg-core" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.781521 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="10c57bd7-46e8-437c-9e84-e77bcdf5561a" containerName="ceilometer-central-agent" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.781532 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5c6247d-41c2-41c0-9cf4-17098b60970a" containerName="dnsmasq-dns" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.781553 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="10c57bd7-46e8-437c-9e84-e77bcdf5561a" containerName="proxy-httpd" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.783693 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.789339 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.789728 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.789948 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.798592 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.972884 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/81245600-bfa0-4e8f-b194-3298c15da964-run-httpd\") pod \"ceilometer-0\" (UID: \"81245600-bfa0-4e8f-b194-3298c15da964\") " pod="openstack/ceilometer-0" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.972930 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/81245600-bfa0-4e8f-b194-3298c15da964-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"81245600-bfa0-4e8f-b194-3298c15da964\") " pod="openstack/ceilometer-0" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.972990 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/81245600-bfa0-4e8f-b194-3298c15da964-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"81245600-bfa0-4e8f-b194-3298c15da964\") " pod="openstack/ceilometer-0" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.973007 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw8ws\" (UniqueName: \"kubernetes.io/projected/81245600-bfa0-4e8f-b194-3298c15da964-kube-api-access-pw8ws\") pod \"ceilometer-0\" (UID: \"81245600-bfa0-4e8f-b194-3298c15da964\") " pod="openstack/ceilometer-0" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.973061 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81245600-bfa0-4e8f-b194-3298c15da964-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"81245600-bfa0-4e8f-b194-3298c15da964\") " pod="openstack/ceilometer-0" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.973085 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81245600-bfa0-4e8f-b194-3298c15da964-scripts\") pod \"ceilometer-0\" (UID: \"81245600-bfa0-4e8f-b194-3298c15da964\") " pod="openstack/ceilometer-0" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.973112 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/81245600-bfa0-4e8f-b194-3298c15da964-log-httpd\") pod \"ceilometer-0\" (UID: \"81245600-bfa0-4e8f-b194-3298c15da964\") " pod="openstack/ceilometer-0" Jan 31 08:25:15 crc kubenswrapper[4826]: I0131 08:25:15.973148 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/81245600-bfa0-4e8f-b194-3298c15da964-config-data\") pod \"ceilometer-0\" (UID: \"81245600-bfa0-4e8f-b194-3298c15da964\") " pod="openstack/ceilometer-0" Jan 31 08:25:16 crc kubenswrapper[4826]: I0131 08:25:16.074581 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81245600-bfa0-4e8f-b194-3298c15da964-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"81245600-bfa0-4e8f-b194-3298c15da964\") " pod="openstack/ceilometer-0" Jan 31 08:25:16 crc kubenswrapper[4826]: I0131 08:25:16.076057 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81245600-bfa0-4e8f-b194-3298c15da964-scripts\") pod \"ceilometer-0\" (UID: \"81245600-bfa0-4e8f-b194-3298c15da964\") " pod="openstack/ceilometer-0" Jan 31 08:25:16 crc kubenswrapper[4826]: I0131 08:25:16.076232 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/81245600-bfa0-4e8f-b194-3298c15da964-log-httpd\") pod \"ceilometer-0\" (UID: \"81245600-bfa0-4e8f-b194-3298c15da964\") " pod="openstack/ceilometer-0" Jan 31 08:25:16 crc kubenswrapper[4826]: I0131 08:25:16.076394 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81245600-bfa0-4e8f-b194-3298c15da964-config-data\") pod \"ceilometer-0\" (UID: \"81245600-bfa0-4e8f-b194-3298c15da964\") " pod="openstack/ceilometer-0" Jan 31 08:25:16 crc kubenswrapper[4826]: I0131 08:25:16.076634 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/81245600-bfa0-4e8f-b194-3298c15da964-run-httpd\") pod \"ceilometer-0\" (UID: \"81245600-bfa0-4e8f-b194-3298c15da964\") " pod="openstack/ceilometer-0" Jan 31 08:25:16 crc kubenswrapper[4826]: I0131 08:25:16.076710 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/81245600-bfa0-4e8f-b194-3298c15da964-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"81245600-bfa0-4e8f-b194-3298c15da964\") " pod="openstack/ceilometer-0" Jan 31 08:25:16 crc kubenswrapper[4826]: I0131 08:25:16.076813 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/81245600-bfa0-4e8f-b194-3298c15da964-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"81245600-bfa0-4e8f-b194-3298c15da964\") " pod="openstack/ceilometer-0" Jan 31 08:25:16 crc kubenswrapper[4826]: I0131 08:25:16.076933 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/81245600-bfa0-4e8f-b194-3298c15da964-log-httpd\") pod \"ceilometer-0\" (UID: \"81245600-bfa0-4e8f-b194-3298c15da964\") " pod="openstack/ceilometer-0" Jan 31 08:25:16 crc kubenswrapper[4826]: I0131 08:25:16.076941 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pw8ws\" (UniqueName: \"kubernetes.io/projected/81245600-bfa0-4e8f-b194-3298c15da964-kube-api-access-pw8ws\") pod \"ceilometer-0\" (UID: \"81245600-bfa0-4e8f-b194-3298c15da964\") " pod="openstack/ceilometer-0" Jan 31 08:25:16 crc kubenswrapper[4826]: I0131 08:25:16.077254 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/81245600-bfa0-4e8f-b194-3298c15da964-run-httpd\") pod \"ceilometer-0\" (UID: \"81245600-bfa0-4e8f-b194-3298c15da964\") " pod="openstack/ceilometer-0" Jan 31 08:25:16 crc kubenswrapper[4826]: I0131 08:25:16.080703 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81245600-bfa0-4e8f-b194-3298c15da964-scripts\") pod \"ceilometer-0\" (UID: \"81245600-bfa0-4e8f-b194-3298c15da964\") " pod="openstack/ceilometer-0" Jan 31 08:25:16 crc kubenswrapper[4826]: I0131 08:25:16.081170 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81245600-bfa0-4e8f-b194-3298c15da964-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"81245600-bfa0-4e8f-b194-3298c15da964\") " pod="openstack/ceilometer-0" Jan 31 08:25:16 crc kubenswrapper[4826]: I0131 08:25:16.082347 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81245600-bfa0-4e8f-b194-3298c15da964-config-data\") pod \"ceilometer-0\" (UID: \"81245600-bfa0-4e8f-b194-3298c15da964\") " pod="openstack/ceilometer-0" Jan 31 08:25:16 crc kubenswrapper[4826]: I0131 08:25:16.082878 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/81245600-bfa0-4e8f-b194-3298c15da964-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"81245600-bfa0-4e8f-b194-3298c15da964\") " pod="openstack/ceilometer-0" Jan 31 08:25:16 crc kubenswrapper[4826]: I0131 08:25:16.096956 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/81245600-bfa0-4e8f-b194-3298c15da964-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"81245600-bfa0-4e8f-b194-3298c15da964\") " pod="openstack/ceilometer-0" Jan 31 08:25:16 crc kubenswrapper[4826]: I0131 08:25:16.098752 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pw8ws\" (UniqueName: \"kubernetes.io/projected/81245600-bfa0-4e8f-b194-3298c15da964-kube-api-access-pw8ws\") pod \"ceilometer-0\" (UID: \"81245600-bfa0-4e8f-b194-3298c15da964\") " pod="openstack/ceilometer-0" Jan 31 08:25:16 crc kubenswrapper[4826]: I0131 08:25:16.232287 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 08:25:16 crc kubenswrapper[4826]: I0131 08:25:16.434613 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"646e0310-8e32-4f52-9441-e2e2ce66ed75","Type":"ContainerStarted","Data":"ee94eab61baf1e6efc6f642a297eb185b906695f30a5d2f800c39767553f00c6"} Jan 31 08:25:16 crc kubenswrapper[4826]: I0131 08:25:16.434984 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"646e0310-8e32-4f52-9441-e2e2ce66ed75","Type":"ContainerStarted","Data":"25cbdacba464a48688a030bf4e669d3497bb69d3d245c72e6a26f4e70c20452c"} Jan 31 08:25:16 crc kubenswrapper[4826]: I0131 08:25:16.462634 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=2.902156814 podStartE2EDuration="13.462609753s" podCreationTimestamp="2026-01-31 08:25:03 +0000 UTC" firstStartedPulling="2026-01-31 08:25:04.295468048 +0000 UTC m=+2936.149354407" lastFinishedPulling="2026-01-31 08:25:14.855920987 +0000 UTC m=+2946.709807346" observedRunningTime="2026-01-31 08:25:16.459641758 +0000 UTC m=+2948.313528127" watchObservedRunningTime="2026-01-31 08:25:16.462609753 +0000 UTC m=+2948.316496122" Jan 31 08:25:16 crc kubenswrapper[4826]: I0131 08:25:16.756910 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 08:25:16 crc kubenswrapper[4826]: I0131 08:25:16.820802 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10c57bd7-46e8-437c-9e84-e77bcdf5561a" path="/var/lib/kubelet/pods/10c57bd7-46e8-437c-9e84-e77bcdf5561a/volumes" Jan 31 08:25:16 crc kubenswrapper[4826]: I0131 08:25:16.821721 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5c6247d-41c2-41c0-9cf4-17098b60970a" path="/var/lib/kubelet/pods/a5c6247d-41c2-41c0-9cf4-17098b60970a/volumes" Jan 31 08:25:17 crc kubenswrapper[4826]: I0131 08:25:17.454017 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"81245600-bfa0-4e8f-b194-3298c15da964","Type":"ContainerStarted","Data":"27d547a1a4e0e2f871be88500d1493a237b89dc684c673f55db67d84c2888a61"} Jan 31 08:25:17 crc kubenswrapper[4826]: I0131 08:25:17.540431 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 08:25:18 crc kubenswrapper[4826]: I0131 08:25:18.466493 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"81245600-bfa0-4e8f-b194-3298c15da964","Type":"ContainerStarted","Data":"1cba850a700822a54594303cf3a41f0b058f3dccae25178acf4d66969847e988"} Jan 31 08:25:18 crc kubenswrapper[4826]: I0131 08:25:18.466824 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"81245600-bfa0-4e8f-b194-3298c15da964","Type":"ContainerStarted","Data":"eb566b28c6f334e6dca83102e0b19c82207e8b1dd338073c20dec142561d53af"} Jan 31 08:25:19 crc kubenswrapper[4826]: I0131 08:25:19.476931 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"81245600-bfa0-4e8f-b194-3298c15da964","Type":"ContainerStarted","Data":"c768242230483134ff4ddddf3925a33ecd0e6910ee2fccd4025b4e347fd3f738"} Jan 31 08:25:22 crc kubenswrapper[4826]: I0131 08:25:22.503055 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"81245600-bfa0-4e8f-b194-3298c15da964","Type":"ContainerStarted","Data":"d05338449d1f5c8c7a7cc0d8bd1f014fa99466c0679774db6d3256019151f785"} Jan 31 08:25:22 crc kubenswrapper[4826]: I0131 08:25:22.503518 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 08:25:22 crc kubenswrapper[4826]: I0131 08:25:22.503216 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="81245600-bfa0-4e8f-b194-3298c15da964" containerName="ceilometer-central-agent" containerID="cri-o://eb566b28c6f334e6dca83102e0b19c82207e8b1dd338073c20dec142561d53af" gracePeriod=30 Jan 31 08:25:22 crc kubenswrapper[4826]: I0131 08:25:22.503621 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="81245600-bfa0-4e8f-b194-3298c15da964" containerName="proxy-httpd" containerID="cri-o://d05338449d1f5c8c7a7cc0d8bd1f014fa99466c0679774db6d3256019151f785" gracePeriod=30 Jan 31 08:25:22 crc kubenswrapper[4826]: I0131 08:25:22.503680 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="81245600-bfa0-4e8f-b194-3298c15da964" containerName="sg-core" containerID="cri-o://c768242230483134ff4ddddf3925a33ecd0e6910ee2fccd4025b4e347fd3f738" gracePeriod=30 Jan 31 08:25:22 crc kubenswrapper[4826]: I0131 08:25:22.503821 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="81245600-bfa0-4e8f-b194-3298c15da964" containerName="ceilometer-notification-agent" containerID="cri-o://1cba850a700822a54594303cf3a41f0b058f3dccae25178acf4d66969847e988" gracePeriod=30 Jan 31 08:25:22 crc kubenswrapper[4826]: I0131 08:25:22.533109 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.384153967 podStartE2EDuration="7.53308873s" podCreationTimestamp="2026-01-31 08:25:15 +0000 UTC" firstStartedPulling="2026-01-31 08:25:16.754817615 +0000 UTC m=+2948.608703974" lastFinishedPulling="2026-01-31 08:25:21.903752378 +0000 UTC m=+2953.757638737" observedRunningTime="2026-01-31 08:25:22.525406719 +0000 UTC m=+2954.379293078" watchObservedRunningTime="2026-01-31 08:25:22.53308873 +0000 UTC m=+2954.386975089" Jan 31 08:25:23 crc kubenswrapper[4826]: I0131 08:25:23.516344 4826 generic.go:334] "Generic (PLEG): container finished" podID="81245600-bfa0-4e8f-b194-3298c15da964" containerID="d05338449d1f5c8c7a7cc0d8bd1f014fa99466c0679774db6d3256019151f785" exitCode=0 Jan 31 08:25:23 crc kubenswrapper[4826]: I0131 08:25:23.516675 4826 generic.go:334] "Generic (PLEG): container finished" podID="81245600-bfa0-4e8f-b194-3298c15da964" containerID="c768242230483134ff4ddddf3925a33ecd0e6910ee2fccd4025b4e347fd3f738" exitCode=2 Jan 31 08:25:23 crc kubenswrapper[4826]: I0131 08:25:23.516689 4826 generic.go:334] "Generic (PLEG): container finished" podID="81245600-bfa0-4e8f-b194-3298c15da964" containerID="1cba850a700822a54594303cf3a41f0b058f3dccae25178acf4d66969847e988" exitCode=0 Jan 31 08:25:23 crc kubenswrapper[4826]: I0131 08:25:23.516702 4826 generic.go:334] "Generic (PLEG): container finished" podID="81245600-bfa0-4e8f-b194-3298c15da964" containerID="eb566b28c6f334e6dca83102e0b19c82207e8b1dd338073c20dec142561d53af" exitCode=0 Jan 31 08:25:23 crc kubenswrapper[4826]: I0131 08:25:23.516739 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"81245600-bfa0-4e8f-b194-3298c15da964","Type":"ContainerDied","Data":"d05338449d1f5c8c7a7cc0d8bd1f014fa99466c0679774db6d3256019151f785"} Jan 31 08:25:23 crc kubenswrapper[4826]: I0131 08:25:23.516772 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"81245600-bfa0-4e8f-b194-3298c15da964","Type":"ContainerDied","Data":"c768242230483134ff4ddddf3925a33ecd0e6910ee2fccd4025b4e347fd3f738"} Jan 31 08:25:23 crc kubenswrapper[4826]: I0131 08:25:23.516784 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"81245600-bfa0-4e8f-b194-3298c15da964","Type":"ContainerDied","Data":"1cba850a700822a54594303cf3a41f0b058f3dccae25178acf4d66969847e988"} Jan 31 08:25:23 crc kubenswrapper[4826]: I0131 08:25:23.516796 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"81245600-bfa0-4e8f-b194-3298c15da964","Type":"ContainerDied","Data":"eb566b28c6f334e6dca83102e0b19c82207e8b1dd338073c20dec142561d53af"} Jan 31 08:25:23 crc kubenswrapper[4826]: I0131 08:25:23.778931 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Jan 31 08:25:23 crc kubenswrapper[4826]: I0131 08:25:23.805301 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 08:25:23 crc kubenswrapper[4826]: I0131 08:25:23.844736 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/81245600-bfa0-4e8f-b194-3298c15da964-sg-core-conf-yaml\") pod \"81245600-bfa0-4e8f-b194-3298c15da964\" (UID: \"81245600-bfa0-4e8f-b194-3298c15da964\") " Jan 31 08:25:23 crc kubenswrapper[4826]: I0131 08:25:23.845154 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81245600-bfa0-4e8f-b194-3298c15da964-config-data\") pod \"81245600-bfa0-4e8f-b194-3298c15da964\" (UID: \"81245600-bfa0-4e8f-b194-3298c15da964\") " Jan 31 08:25:23 crc kubenswrapper[4826]: I0131 08:25:23.845375 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pw8ws\" (UniqueName: \"kubernetes.io/projected/81245600-bfa0-4e8f-b194-3298c15da964-kube-api-access-pw8ws\") pod \"81245600-bfa0-4e8f-b194-3298c15da964\" (UID: \"81245600-bfa0-4e8f-b194-3298c15da964\") " Jan 31 08:25:23 crc kubenswrapper[4826]: I0131 08:25:23.845590 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81245600-bfa0-4e8f-b194-3298c15da964-combined-ca-bundle\") pod \"81245600-bfa0-4e8f-b194-3298c15da964\" (UID: \"81245600-bfa0-4e8f-b194-3298c15da964\") " Jan 31 08:25:23 crc kubenswrapper[4826]: I0131 08:25:23.845742 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/81245600-bfa0-4e8f-b194-3298c15da964-run-httpd\") pod \"81245600-bfa0-4e8f-b194-3298c15da964\" (UID: \"81245600-bfa0-4e8f-b194-3298c15da964\") " Jan 31 08:25:23 crc kubenswrapper[4826]: I0131 08:25:23.845855 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/81245600-bfa0-4e8f-b194-3298c15da964-log-httpd\") pod \"81245600-bfa0-4e8f-b194-3298c15da964\" (UID: \"81245600-bfa0-4e8f-b194-3298c15da964\") " Jan 31 08:25:23 crc kubenswrapper[4826]: I0131 
08:25:23.845983 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81245600-bfa0-4e8f-b194-3298c15da964-scripts\") pod \"81245600-bfa0-4e8f-b194-3298c15da964\" (UID: \"81245600-bfa0-4e8f-b194-3298c15da964\") " Jan 31 08:25:23 crc kubenswrapper[4826]: I0131 08:25:23.846147 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/81245600-bfa0-4e8f-b194-3298c15da964-ceilometer-tls-certs\") pod \"81245600-bfa0-4e8f-b194-3298c15da964\" (UID: \"81245600-bfa0-4e8f-b194-3298c15da964\") " Jan 31 08:25:23 crc kubenswrapper[4826]: I0131 08:25:23.846327 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81245600-bfa0-4e8f-b194-3298c15da964-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "81245600-bfa0-4e8f-b194-3298c15da964" (UID: "81245600-bfa0-4e8f-b194-3298c15da964"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:25:23 crc kubenswrapper[4826]: I0131 08:25:23.846495 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81245600-bfa0-4e8f-b194-3298c15da964-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "81245600-bfa0-4e8f-b194-3298c15da964" (UID: "81245600-bfa0-4e8f-b194-3298c15da964"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:25:23 crc kubenswrapper[4826]: I0131 08:25:23.847128 4826 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/81245600-bfa0-4e8f-b194-3298c15da964-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 08:25:23 crc kubenswrapper[4826]: I0131 08:25:23.847245 4826 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/81245600-bfa0-4e8f-b194-3298c15da964-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 08:25:23 crc kubenswrapper[4826]: I0131 08:25:23.853944 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81245600-bfa0-4e8f-b194-3298c15da964-kube-api-access-pw8ws" (OuterVolumeSpecName: "kube-api-access-pw8ws") pod "81245600-bfa0-4e8f-b194-3298c15da964" (UID: "81245600-bfa0-4e8f-b194-3298c15da964"). InnerVolumeSpecName "kube-api-access-pw8ws". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:25:23 crc kubenswrapper[4826]: I0131 08:25:23.857088 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81245600-bfa0-4e8f-b194-3298c15da964-scripts" (OuterVolumeSpecName: "scripts") pod "81245600-bfa0-4e8f-b194-3298c15da964" (UID: "81245600-bfa0-4e8f-b194-3298c15da964"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:25:23 crc kubenswrapper[4826]: I0131 08:25:23.878723 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81245600-bfa0-4e8f-b194-3298c15da964-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "81245600-bfa0-4e8f-b194-3298c15da964" (UID: "81245600-bfa0-4e8f-b194-3298c15da964"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:25:23 crc kubenswrapper[4826]: I0131 08:25:23.921213 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81245600-bfa0-4e8f-b194-3298c15da964-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "81245600-bfa0-4e8f-b194-3298c15da964" (UID: "81245600-bfa0-4e8f-b194-3298c15da964"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:25:23 crc kubenswrapper[4826]: I0131 08:25:23.924625 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81245600-bfa0-4e8f-b194-3298c15da964-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "81245600-bfa0-4e8f-b194-3298c15da964" (UID: "81245600-bfa0-4e8f-b194-3298c15da964"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:25:23 crc kubenswrapper[4826]: I0131 08:25:23.951102 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81245600-bfa0-4e8f-b194-3298c15da964-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 08:25:23 crc kubenswrapper[4826]: I0131 08:25:23.951186 4826 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/81245600-bfa0-4e8f-b194-3298c15da964-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 08:25:23 crc kubenswrapper[4826]: I0131 08:25:23.951284 4826 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/81245600-bfa0-4e8f-b194-3298c15da964-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 08:25:23 crc kubenswrapper[4826]: I0131 08:25:23.951306 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pw8ws\" (UniqueName: \"kubernetes.io/projected/81245600-bfa0-4e8f-b194-3298c15da964-kube-api-access-pw8ws\") on node \"crc\" DevicePath \"\"" Jan 31 08:25:23 crc kubenswrapper[4826]: I0131 08:25:23.951323 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81245600-bfa0-4e8f-b194-3298c15da964-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 08:25:23 crc kubenswrapper[4826]: I0131 08:25:23.980040 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81245600-bfa0-4e8f-b194-3298c15da964-config-data" (OuterVolumeSpecName: "config-data") pod "81245600-bfa0-4e8f-b194-3298c15da964" (UID: "81245600-bfa0-4e8f-b194-3298c15da964"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:25:24 crc kubenswrapper[4826]: I0131 08:25:24.052067 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81245600-bfa0-4e8f-b194-3298c15da964-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 08:25:24 crc kubenswrapper[4826]: I0131 08:25:24.527746 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"81245600-bfa0-4e8f-b194-3298c15da964","Type":"ContainerDied","Data":"27d547a1a4e0e2f871be88500d1493a237b89dc684c673f55db67d84c2888a61"} Jan 31 08:25:24 crc kubenswrapper[4826]: I0131 08:25:24.528091 4826 scope.go:117] "RemoveContainer" containerID="d05338449d1f5c8c7a7cc0d8bd1f014fa99466c0679774db6d3256019151f785" Jan 31 08:25:24 crc kubenswrapper[4826]: I0131 08:25:24.527798 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 08:25:24 crc kubenswrapper[4826]: I0131 08:25:24.553433 4826 scope.go:117] "RemoveContainer" containerID="c768242230483134ff4ddddf3925a33ecd0e6910ee2fccd4025b4e347fd3f738" Jan 31 08:25:24 crc kubenswrapper[4826]: I0131 08:25:24.566837 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 08:25:24 crc kubenswrapper[4826]: I0131 08:25:24.578488 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 08:25:24 crc kubenswrapper[4826]: I0131 08:25:24.588853 4826 scope.go:117] "RemoveContainer" containerID="1cba850a700822a54594303cf3a41f0b058f3dccae25178acf4d66969847e988" Jan 31 08:25:24 crc kubenswrapper[4826]: I0131 08:25:24.607938 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 08:25:24 crc kubenswrapper[4826]: E0131 08:25:24.609037 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81245600-bfa0-4e8f-b194-3298c15da964" containerName="ceilometer-central-agent" Jan 31 08:25:24 crc kubenswrapper[4826]: I0131 08:25:24.609125 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="81245600-bfa0-4e8f-b194-3298c15da964" containerName="ceilometer-central-agent" Jan 31 08:25:24 crc kubenswrapper[4826]: E0131 08:25:24.609215 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81245600-bfa0-4e8f-b194-3298c15da964" containerName="proxy-httpd" Jan 31 08:25:24 crc kubenswrapper[4826]: I0131 08:25:24.609268 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="81245600-bfa0-4e8f-b194-3298c15da964" containerName="proxy-httpd" Jan 31 08:25:24 crc kubenswrapper[4826]: E0131 08:25:24.609334 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81245600-bfa0-4e8f-b194-3298c15da964" containerName="sg-core" Jan 31 08:25:24 crc kubenswrapper[4826]: I0131 08:25:24.609386 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="81245600-bfa0-4e8f-b194-3298c15da964" containerName="sg-core" Jan 31 08:25:24 crc kubenswrapper[4826]: E0131 08:25:24.609466 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81245600-bfa0-4e8f-b194-3298c15da964" containerName="ceilometer-notification-agent" Jan 31 08:25:24 crc kubenswrapper[4826]: I0131 08:25:24.609520 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="81245600-bfa0-4e8f-b194-3298c15da964" containerName="ceilometer-notification-agent" Jan 31 08:25:24 crc kubenswrapper[4826]: I0131 08:25:24.609802 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="81245600-bfa0-4e8f-b194-3298c15da964" 
containerName="proxy-httpd" Jan 31 08:25:24 crc kubenswrapper[4826]: I0131 08:25:24.609878 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="81245600-bfa0-4e8f-b194-3298c15da964" containerName="sg-core" Jan 31 08:25:24 crc kubenswrapper[4826]: I0131 08:25:24.609940 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="81245600-bfa0-4e8f-b194-3298c15da964" containerName="ceilometer-central-agent" Jan 31 08:25:24 crc kubenswrapper[4826]: I0131 08:25:24.610038 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="81245600-bfa0-4e8f-b194-3298c15da964" containerName="ceilometer-notification-agent" Jan 31 08:25:24 crc kubenswrapper[4826]: I0131 08:25:24.612311 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 08:25:24 crc kubenswrapper[4826]: I0131 08:25:24.617701 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 08:25:24 crc kubenswrapper[4826]: I0131 08:25:24.631183 4826 scope.go:117] "RemoveContainer" containerID="eb566b28c6f334e6dca83102e0b19c82207e8b1dd338073c20dec142561d53af" Jan 31 08:25:24 crc kubenswrapper[4826]: I0131 08:25:24.631810 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 08:25:24 crc kubenswrapper[4826]: I0131 08:25:24.632262 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 31 08:25:24 crc kubenswrapper[4826]: I0131 08:25:24.633232 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 08:25:24 crc kubenswrapper[4826]: I0131 08:25:24.662739 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/460b39a1-e8da-444b-b92c-fb9acec1dd12-config-data\") pod \"ceilometer-0\" (UID: \"460b39a1-e8da-444b-b92c-fb9acec1dd12\") " pod="openstack/ceilometer-0" Jan 31 08:25:24 crc kubenswrapper[4826]: I0131 08:25:24.663112 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/460b39a1-e8da-444b-b92c-fb9acec1dd12-run-httpd\") pod \"ceilometer-0\" (UID: \"460b39a1-e8da-444b-b92c-fb9acec1dd12\") " pod="openstack/ceilometer-0" Jan 31 08:25:24 crc kubenswrapper[4826]: I0131 08:25:24.663287 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/460b39a1-e8da-444b-b92c-fb9acec1dd12-scripts\") pod \"ceilometer-0\" (UID: \"460b39a1-e8da-444b-b92c-fb9acec1dd12\") " pod="openstack/ceilometer-0" Jan 31 08:25:24 crc kubenswrapper[4826]: I0131 08:25:24.663449 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/460b39a1-e8da-444b-b92c-fb9acec1dd12-log-httpd\") pod \"ceilometer-0\" (UID: \"460b39a1-e8da-444b-b92c-fb9acec1dd12\") " pod="openstack/ceilometer-0" Jan 31 08:25:24 crc kubenswrapper[4826]: I0131 08:25:24.663665 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/460b39a1-e8da-444b-b92c-fb9acec1dd12-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"460b39a1-e8da-444b-b92c-fb9acec1dd12\") " pod="openstack/ceilometer-0" Jan 31 08:25:24 crc kubenswrapper[4826]: I0131 
08:25:24.663750 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/460b39a1-e8da-444b-b92c-fb9acec1dd12-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"460b39a1-e8da-444b-b92c-fb9acec1dd12\") " pod="openstack/ceilometer-0" Jan 31 08:25:24 crc kubenswrapper[4826]: I0131 08:25:24.663821 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42fqp\" (UniqueName: \"kubernetes.io/projected/460b39a1-e8da-444b-b92c-fb9acec1dd12-kube-api-access-42fqp\") pod \"ceilometer-0\" (UID: \"460b39a1-e8da-444b-b92c-fb9acec1dd12\") " pod="openstack/ceilometer-0" Jan 31 08:25:24 crc kubenswrapper[4826]: I0131 08:25:24.663867 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/460b39a1-e8da-444b-b92c-fb9acec1dd12-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"460b39a1-e8da-444b-b92c-fb9acec1dd12\") " pod="openstack/ceilometer-0" Jan 31 08:25:24 crc kubenswrapper[4826]: I0131 08:25:24.765885 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/460b39a1-e8da-444b-b92c-fb9acec1dd12-log-httpd\") pod \"ceilometer-0\" (UID: \"460b39a1-e8da-444b-b92c-fb9acec1dd12\") " pod="openstack/ceilometer-0" Jan 31 08:25:24 crc kubenswrapper[4826]: I0131 08:25:24.765991 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/460b39a1-e8da-444b-b92c-fb9acec1dd12-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"460b39a1-e8da-444b-b92c-fb9acec1dd12\") " pod="openstack/ceilometer-0" Jan 31 08:25:24 crc kubenswrapper[4826]: I0131 08:25:24.766038 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/460b39a1-e8da-444b-b92c-fb9acec1dd12-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"460b39a1-e8da-444b-b92c-fb9acec1dd12\") " pod="openstack/ceilometer-0" Jan 31 08:25:24 crc kubenswrapper[4826]: I0131 08:25:24.766084 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42fqp\" (UniqueName: \"kubernetes.io/projected/460b39a1-e8da-444b-b92c-fb9acec1dd12-kube-api-access-42fqp\") pod \"ceilometer-0\" (UID: \"460b39a1-e8da-444b-b92c-fb9acec1dd12\") " pod="openstack/ceilometer-0" Jan 31 08:25:24 crc kubenswrapper[4826]: I0131 08:25:24.766116 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/460b39a1-e8da-444b-b92c-fb9acec1dd12-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"460b39a1-e8da-444b-b92c-fb9acec1dd12\") " pod="openstack/ceilometer-0" Jan 31 08:25:24 crc kubenswrapper[4826]: I0131 08:25:24.766144 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/460b39a1-e8da-444b-b92c-fb9acec1dd12-config-data\") pod \"ceilometer-0\" (UID: \"460b39a1-e8da-444b-b92c-fb9acec1dd12\") " pod="openstack/ceilometer-0" Jan 31 08:25:24 crc kubenswrapper[4826]: I0131 08:25:24.766182 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/460b39a1-e8da-444b-b92c-fb9acec1dd12-run-httpd\") pod \"ceilometer-0\" (UID: 
\"460b39a1-e8da-444b-b92c-fb9acec1dd12\") " pod="openstack/ceilometer-0" Jan 31 08:25:24 crc kubenswrapper[4826]: I0131 08:25:24.766253 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/460b39a1-e8da-444b-b92c-fb9acec1dd12-scripts\") pod \"ceilometer-0\" (UID: \"460b39a1-e8da-444b-b92c-fb9acec1dd12\") " pod="openstack/ceilometer-0" Jan 31 08:25:24 crc kubenswrapper[4826]: I0131 08:25:24.766470 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/460b39a1-e8da-444b-b92c-fb9acec1dd12-log-httpd\") pod \"ceilometer-0\" (UID: \"460b39a1-e8da-444b-b92c-fb9acec1dd12\") " pod="openstack/ceilometer-0" Jan 31 08:25:24 crc kubenswrapper[4826]: I0131 08:25:24.766925 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/460b39a1-e8da-444b-b92c-fb9acec1dd12-run-httpd\") pod \"ceilometer-0\" (UID: \"460b39a1-e8da-444b-b92c-fb9acec1dd12\") " pod="openstack/ceilometer-0" Jan 31 08:25:24 crc kubenswrapper[4826]: I0131 08:25:24.771632 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/460b39a1-e8da-444b-b92c-fb9acec1dd12-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"460b39a1-e8da-444b-b92c-fb9acec1dd12\") " pod="openstack/ceilometer-0" Jan 31 08:25:24 crc kubenswrapper[4826]: I0131 08:25:24.771644 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/460b39a1-e8da-444b-b92c-fb9acec1dd12-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"460b39a1-e8da-444b-b92c-fb9acec1dd12\") " pod="openstack/ceilometer-0" Jan 31 08:25:24 crc kubenswrapper[4826]: I0131 08:25:24.772995 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/460b39a1-e8da-444b-b92c-fb9acec1dd12-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"460b39a1-e8da-444b-b92c-fb9acec1dd12\") " pod="openstack/ceilometer-0" Jan 31 08:25:24 crc kubenswrapper[4826]: I0131 08:25:24.780471 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/460b39a1-e8da-444b-b92c-fb9acec1dd12-config-data\") pod \"ceilometer-0\" (UID: \"460b39a1-e8da-444b-b92c-fb9acec1dd12\") " pod="openstack/ceilometer-0" Jan 31 08:25:24 crc kubenswrapper[4826]: I0131 08:25:24.783302 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/460b39a1-e8da-444b-b92c-fb9acec1dd12-scripts\") pod \"ceilometer-0\" (UID: \"460b39a1-e8da-444b-b92c-fb9acec1dd12\") " pod="openstack/ceilometer-0" Jan 31 08:25:24 crc kubenswrapper[4826]: I0131 08:25:24.786537 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42fqp\" (UniqueName: \"kubernetes.io/projected/460b39a1-e8da-444b-b92c-fb9acec1dd12-kube-api-access-42fqp\") pod \"ceilometer-0\" (UID: \"460b39a1-e8da-444b-b92c-fb9acec1dd12\") " pod="openstack/ceilometer-0" Jan 31 08:25:24 crc kubenswrapper[4826]: I0131 08:25:24.820594 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81245600-bfa0-4e8f-b194-3298c15da964" path="/var/lib/kubelet/pods/81245600-bfa0-4e8f-b194-3298c15da964/volumes" Jan 31 08:25:24 crc kubenswrapper[4826]: I0131 08:25:24.938264 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 08:25:25 crc kubenswrapper[4826]: I0131 08:25:25.448955 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 08:25:25 crc kubenswrapper[4826]: I0131 08:25:25.499531 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Jan 31 08:25:25 crc kubenswrapper[4826]: I0131 08:25:25.542141 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Jan 31 08:25:25 crc kubenswrapper[4826]: I0131 08:25:25.554790 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"460b39a1-e8da-444b-b92c-fb9acec1dd12","Type":"ContainerStarted","Data":"f4b4d62c0731121c33d38007245b64ec5ef0997e6ecc0fb13d85aebde85ac8a1"} Jan 31 08:25:25 crc kubenswrapper[4826]: I0131 08:25:25.556067 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="584f29ed-f9cf-49fd-9743-8cca3fe557ce" containerName="probe" containerID="cri-o://b80e53fa342e5a70b50dc5bb1da0b9e612b5c46f6a4e28f92bf3a2dcdb7608a0" gracePeriod=30 Jan 31 08:25:25 crc kubenswrapper[4826]: I0131 08:25:25.556176 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="584f29ed-f9cf-49fd-9743-8cca3fe557ce" containerName="manila-scheduler" containerID="cri-o://59afa85e87c188041efbbaf237069d099f424f7a8fc222ed13e6bf2e2249ab7d" gracePeriod=30 Jan 31 08:25:26 crc kubenswrapper[4826]: I0131 08:25:26.568300 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"460b39a1-e8da-444b-b92c-fb9acec1dd12","Type":"ContainerStarted","Data":"d89e6d8cc706aeb16a1056c1ebc13f12dc71f5391a11e6b50b446c0704ea9626"} Jan 31 08:25:26 crc kubenswrapper[4826]: I0131 08:25:26.572960 4826 generic.go:334] "Generic (PLEG): container finished" podID="584f29ed-f9cf-49fd-9743-8cca3fe557ce" containerID="b80e53fa342e5a70b50dc5bb1da0b9e612b5c46f6a4e28f92bf3a2dcdb7608a0" exitCode=0 Jan 31 08:25:26 crc kubenswrapper[4826]: I0131 08:25:26.573082 4826 generic.go:334] "Generic (PLEG): container finished" podID="584f29ed-f9cf-49fd-9743-8cca3fe557ce" containerID="59afa85e87c188041efbbaf237069d099f424f7a8fc222ed13e6bf2e2249ab7d" exitCode=0 Jan 31 08:25:26 crc kubenswrapper[4826]: I0131 08:25:26.573082 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"584f29ed-f9cf-49fd-9743-8cca3fe557ce","Type":"ContainerDied","Data":"b80e53fa342e5a70b50dc5bb1da0b9e612b5c46f6a4e28f92bf3a2dcdb7608a0"} Jan 31 08:25:26 crc kubenswrapper[4826]: I0131 08:25:26.573162 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"584f29ed-f9cf-49fd-9743-8cca3fe557ce","Type":"ContainerDied","Data":"59afa85e87c188041efbbaf237069d099f424f7a8fc222ed13e6bf2e2249ab7d"} Jan 31 08:25:26 crc kubenswrapper[4826]: I0131 08:25:26.728366 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Jan 31 08:25:26 crc kubenswrapper[4826]: I0131 08:25:26.923013 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/584f29ed-f9cf-49fd-9743-8cca3fe557ce-config-data-custom\") pod \"584f29ed-f9cf-49fd-9743-8cca3fe557ce\" (UID: \"584f29ed-f9cf-49fd-9743-8cca3fe557ce\") " Jan 31 08:25:26 crc kubenswrapper[4826]: I0131 08:25:26.923106 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/584f29ed-f9cf-49fd-9743-8cca3fe557ce-combined-ca-bundle\") pod \"584f29ed-f9cf-49fd-9743-8cca3fe557ce\" (UID: \"584f29ed-f9cf-49fd-9743-8cca3fe557ce\") " Jan 31 08:25:26 crc kubenswrapper[4826]: I0131 08:25:26.923391 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/584f29ed-f9cf-49fd-9743-8cca3fe557ce-config-data\") pod \"584f29ed-f9cf-49fd-9743-8cca3fe557ce\" (UID: \"584f29ed-f9cf-49fd-9743-8cca3fe557ce\") " Jan 31 08:25:26 crc kubenswrapper[4826]: I0131 08:25:26.923437 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/584f29ed-f9cf-49fd-9743-8cca3fe557ce-etc-machine-id\") pod \"584f29ed-f9cf-49fd-9743-8cca3fe557ce\" (UID: \"584f29ed-f9cf-49fd-9743-8cca3fe557ce\") " Jan 31 08:25:26 crc kubenswrapper[4826]: I0131 08:25:26.923467 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4hsb\" (UniqueName: \"kubernetes.io/projected/584f29ed-f9cf-49fd-9743-8cca3fe557ce-kube-api-access-c4hsb\") pod \"584f29ed-f9cf-49fd-9743-8cca3fe557ce\" (UID: \"584f29ed-f9cf-49fd-9743-8cca3fe557ce\") " Jan 31 08:25:26 crc kubenswrapper[4826]: I0131 08:25:26.923579 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/584f29ed-f9cf-49fd-9743-8cca3fe557ce-scripts\") pod \"584f29ed-f9cf-49fd-9743-8cca3fe557ce\" (UID: \"584f29ed-f9cf-49fd-9743-8cca3fe557ce\") " Jan 31 08:25:26 crc kubenswrapper[4826]: I0131 08:25:26.923923 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/584f29ed-f9cf-49fd-9743-8cca3fe557ce-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "584f29ed-f9cf-49fd-9743-8cca3fe557ce" (UID: "584f29ed-f9cf-49fd-9743-8cca3fe557ce"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 08:25:26 crc kubenswrapper[4826]: I0131 08:25:26.925983 4826 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/584f29ed-f9cf-49fd-9743-8cca3fe557ce-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 31 08:25:26 crc kubenswrapper[4826]: I0131 08:25:26.927408 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/584f29ed-f9cf-49fd-9743-8cca3fe557ce-scripts" (OuterVolumeSpecName: "scripts") pod "584f29ed-f9cf-49fd-9743-8cca3fe557ce" (UID: "584f29ed-f9cf-49fd-9743-8cca3fe557ce"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:25:26 crc kubenswrapper[4826]: I0131 08:25:26.929070 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/584f29ed-f9cf-49fd-9743-8cca3fe557ce-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "584f29ed-f9cf-49fd-9743-8cca3fe557ce" (UID: "584f29ed-f9cf-49fd-9743-8cca3fe557ce"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:25:26 crc kubenswrapper[4826]: I0131 08:25:26.934241 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584f29ed-f9cf-49fd-9743-8cca3fe557ce-kube-api-access-c4hsb" (OuterVolumeSpecName: "kube-api-access-c4hsb") pod "584f29ed-f9cf-49fd-9743-8cca3fe557ce" (UID: "584f29ed-f9cf-49fd-9743-8cca3fe557ce"). InnerVolumeSpecName "kube-api-access-c4hsb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:25:26 crc kubenswrapper[4826]: I0131 08:25:26.977899 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/584f29ed-f9cf-49fd-9743-8cca3fe557ce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "584f29ed-f9cf-49fd-9743-8cca3fe557ce" (UID: "584f29ed-f9cf-49fd-9743-8cca3fe557ce"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:25:27 crc kubenswrapper[4826]: I0131 08:25:27.022311 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/584f29ed-f9cf-49fd-9743-8cca3fe557ce-config-data" (OuterVolumeSpecName: "config-data") pod "584f29ed-f9cf-49fd-9743-8cca3fe557ce" (UID: "584f29ed-f9cf-49fd-9743-8cca3fe557ce"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:25:27 crc kubenswrapper[4826]: I0131 08:25:27.027770 4826 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/584f29ed-f9cf-49fd-9743-8cca3fe557ce-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 31 08:25:27 crc kubenswrapper[4826]: I0131 08:25:27.027837 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/584f29ed-f9cf-49fd-9743-8cca3fe557ce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 08:25:27 crc kubenswrapper[4826]: I0131 08:25:27.027851 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/584f29ed-f9cf-49fd-9743-8cca3fe557ce-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 08:25:27 crc kubenswrapper[4826]: I0131 08:25:27.027863 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4hsb\" (UniqueName: \"kubernetes.io/projected/584f29ed-f9cf-49fd-9743-8cca3fe557ce-kube-api-access-c4hsb\") on node \"crc\" DevicePath \"\"" Jan 31 08:25:27 crc kubenswrapper[4826]: I0131 08:25:27.027876 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/584f29ed-f9cf-49fd-9743-8cca3fe557ce-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 08:25:27 crc kubenswrapper[4826]: I0131 08:25:27.377074 4826 patch_prober.go:28] interesting pod/machine-config-daemon-8v6ng container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 08:25:27 crc kubenswrapper[4826]: I0131 08:25:27.377416 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 08:25:27 crc kubenswrapper[4826]: I0131 08:25:27.583184 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Jan 31 08:25:27 crc kubenswrapper[4826]: I0131 08:25:27.583176 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"584f29ed-f9cf-49fd-9743-8cca3fe557ce","Type":"ContainerDied","Data":"46856d2862c2b594acb2e333c6bf823c3d4b6bef66fd7bd24bcd7342bf856de8"} Jan 31 08:25:27 crc kubenswrapper[4826]: I0131 08:25:27.583737 4826 scope.go:117] "RemoveContainer" containerID="b80e53fa342e5a70b50dc5bb1da0b9e612b5c46f6a4e28f92bf3a2dcdb7608a0" Jan 31 08:25:27 crc kubenswrapper[4826]: I0131 08:25:27.593611 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"460b39a1-e8da-444b-b92c-fb9acec1dd12","Type":"ContainerStarted","Data":"ba926cb5020360cf3a88bc15afa1f4c9a6636cef82a1aad7fa0eaae27b661f47"} Jan 31 08:25:27 crc kubenswrapper[4826]: I0131 08:25:27.691853 4826 scope.go:117] "RemoveContainer" containerID="59afa85e87c188041efbbaf237069d099f424f7a8fc222ed13e6bf2e2249ab7d" Jan 31 08:25:27 crc kubenswrapper[4826]: I0131 08:25:27.720120 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Jan 31 08:25:27 crc kubenswrapper[4826]: I0131 08:25:27.746839 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-scheduler-0"] Jan 31 08:25:27 crc kubenswrapper[4826]: I0131 08:25:27.775894 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Jan 31 08:25:27 crc kubenswrapper[4826]: E0131 08:25:27.777642 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="584f29ed-f9cf-49fd-9743-8cca3fe557ce" containerName="manila-scheduler" Jan 31 08:25:27 crc kubenswrapper[4826]: I0131 08:25:27.777674 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="584f29ed-f9cf-49fd-9743-8cca3fe557ce" containerName="manila-scheduler" Jan 31 08:25:27 crc kubenswrapper[4826]: E0131 08:25:27.777719 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="584f29ed-f9cf-49fd-9743-8cca3fe557ce" containerName="probe" Jan 31 08:25:27 crc kubenswrapper[4826]: I0131 08:25:27.777727 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="584f29ed-f9cf-49fd-9743-8cca3fe557ce" containerName="probe" Jan 31 08:25:27 crc kubenswrapper[4826]: I0131 08:25:27.777915 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="584f29ed-f9cf-49fd-9743-8cca3fe557ce" containerName="probe" Jan 31 08:25:27 crc kubenswrapper[4826]: I0131 08:25:27.777926 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="584f29ed-f9cf-49fd-9743-8cca3fe557ce" containerName="manila-scheduler" Jan 31 08:25:27 crc kubenswrapper[4826]: I0131 08:25:27.779461 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Jan 31 08:25:27 crc kubenswrapper[4826]: I0131 08:25:27.781792 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Jan 31 08:25:27 crc kubenswrapper[4826]: I0131 08:25:27.783995 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 31 08:25:27 crc kubenswrapper[4826]: I0131 08:25:27.949456 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnld8\" (UniqueName: \"kubernetes.io/projected/232b622b-129f-47af-895a-667ef009ae88-kube-api-access-wnld8\") pod \"manila-scheduler-0\" (UID: \"232b622b-129f-47af-895a-667ef009ae88\") " pod="openstack/manila-scheduler-0" Jan 31 08:25:27 crc kubenswrapper[4826]: I0131 08:25:27.949596 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/232b622b-129f-47af-895a-667ef009ae88-scripts\") pod \"manila-scheduler-0\" (UID: \"232b622b-129f-47af-895a-667ef009ae88\") " pod="openstack/manila-scheduler-0" Jan 31 08:25:27 crc kubenswrapper[4826]: I0131 08:25:27.949644 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/232b622b-129f-47af-895a-667ef009ae88-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"232b622b-129f-47af-895a-667ef009ae88\") " pod="openstack/manila-scheduler-0" Jan 31 08:25:27 crc kubenswrapper[4826]: I0131 08:25:27.950220 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/232b622b-129f-47af-895a-667ef009ae88-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"232b622b-129f-47af-895a-667ef009ae88\") " pod="openstack/manila-scheduler-0" Jan 31 08:25:27 crc kubenswrapper[4826]: I0131 08:25:27.950268 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/232b622b-129f-47af-895a-667ef009ae88-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"232b622b-129f-47af-895a-667ef009ae88\") " pod="openstack/manila-scheduler-0" Jan 31 08:25:27 crc kubenswrapper[4826]: I0131 08:25:27.950362 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/232b622b-129f-47af-895a-667ef009ae88-config-data\") pod \"manila-scheduler-0\" (UID: \"232b622b-129f-47af-895a-667ef009ae88\") " pod="openstack/manila-scheduler-0" Jan 31 08:25:28 crc kubenswrapper[4826]: I0131 08:25:28.052161 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnld8\" (UniqueName: \"kubernetes.io/projected/232b622b-129f-47af-895a-667ef009ae88-kube-api-access-wnld8\") pod \"manila-scheduler-0\" (UID: \"232b622b-129f-47af-895a-667ef009ae88\") " pod="openstack/manila-scheduler-0" Jan 31 08:25:28 crc kubenswrapper[4826]: I0131 08:25:28.052611 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/232b622b-129f-47af-895a-667ef009ae88-scripts\") pod \"manila-scheduler-0\" (UID: \"232b622b-129f-47af-895a-667ef009ae88\") " pod="openstack/manila-scheduler-0" Jan 31 08:25:28 crc kubenswrapper[4826]: I0131 08:25:28.052652 4826 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/232b622b-129f-47af-895a-667ef009ae88-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"232b622b-129f-47af-895a-667ef009ae88\") " pod="openstack/manila-scheduler-0" Jan 31 08:25:28 crc kubenswrapper[4826]: I0131 08:25:28.052723 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/232b622b-129f-47af-895a-667ef009ae88-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"232b622b-129f-47af-895a-667ef009ae88\") " pod="openstack/manila-scheduler-0" Jan 31 08:25:28 crc kubenswrapper[4826]: I0131 08:25:28.052757 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/232b622b-129f-47af-895a-667ef009ae88-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"232b622b-129f-47af-895a-667ef009ae88\") " pod="openstack/manila-scheduler-0" Jan 31 08:25:28 crc kubenswrapper[4826]: I0131 08:25:28.052831 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/232b622b-129f-47af-895a-667ef009ae88-config-data\") pod \"manila-scheduler-0\" (UID: \"232b622b-129f-47af-895a-667ef009ae88\") " pod="openstack/manila-scheduler-0" Jan 31 08:25:28 crc kubenswrapper[4826]: I0131 08:25:28.053096 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/232b622b-129f-47af-895a-667ef009ae88-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"232b622b-129f-47af-895a-667ef009ae88\") " pod="openstack/manila-scheduler-0" Jan 31 08:25:28 crc kubenswrapper[4826]: I0131 08:25:28.059541 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/232b622b-129f-47af-895a-667ef009ae88-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"232b622b-129f-47af-895a-667ef009ae88\") " pod="openstack/manila-scheduler-0" Jan 31 08:25:28 crc kubenswrapper[4826]: I0131 08:25:28.059563 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/232b622b-129f-47af-895a-667ef009ae88-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"232b622b-129f-47af-895a-667ef009ae88\") " pod="openstack/manila-scheduler-0" Jan 31 08:25:28 crc kubenswrapper[4826]: I0131 08:25:28.059753 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/232b622b-129f-47af-895a-667ef009ae88-config-data\") pod \"manila-scheduler-0\" (UID: \"232b622b-129f-47af-895a-667ef009ae88\") " pod="openstack/manila-scheduler-0" Jan 31 08:25:28 crc kubenswrapper[4826]: I0131 08:25:28.064487 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/232b622b-129f-47af-895a-667ef009ae88-scripts\") pod \"manila-scheduler-0\" (UID: \"232b622b-129f-47af-895a-667ef009ae88\") " pod="openstack/manila-scheduler-0" Jan 31 08:25:28 crc kubenswrapper[4826]: I0131 08:25:28.071617 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnld8\" (UniqueName: \"kubernetes.io/projected/232b622b-129f-47af-895a-667ef009ae88-kube-api-access-wnld8\") pod \"manila-scheduler-0\" (UID: \"232b622b-129f-47af-895a-667ef009ae88\") " pod="openstack/manila-scheduler-0" Jan 31 
08:25:28 crc kubenswrapper[4826]: I0131 08:25:28.098223 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Jan 31 08:25:28 crc kubenswrapper[4826]: I0131 08:25:28.604581 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"460b39a1-e8da-444b-b92c-fb9acec1dd12","Type":"ContainerStarted","Data":"a1cd0cde52726f23390f7e429c09faa4d45ff9d7fc4b400607afc05382854f6c"} Jan 31 08:25:28 crc kubenswrapper[4826]: I0131 08:25:28.759002 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 31 08:25:28 crc kubenswrapper[4826]: I0131 08:25:28.835669 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584f29ed-f9cf-49fd-9743-8cca3fe557ce" path="/var/lib/kubelet/pods/584f29ed-f9cf-49fd-9743-8cca3fe557ce/volumes" Jan 31 08:25:29 crc kubenswrapper[4826]: I0131 08:25:29.618544 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"232b622b-129f-47af-895a-667ef009ae88","Type":"ContainerStarted","Data":"bffd0ef301c7438ba5c5df8e47afa1479a44f9fa556bbc88183908c6751b1ace"} Jan 31 08:25:29 crc kubenswrapper[4826]: I0131 08:25:29.618804 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"232b622b-129f-47af-895a-667ef009ae88","Type":"ContainerStarted","Data":"260e25d5d4650b51f2fdd9897199df94550a84a9a5f9dc3397e31f22f58d4add"} Jan 31 08:25:30 crc kubenswrapper[4826]: I0131 08:25:30.499016 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0" Jan 31 08:25:30 crc kubenswrapper[4826]: I0131 08:25:30.647693 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"232b622b-129f-47af-895a-667ef009ae88","Type":"ContainerStarted","Data":"77bc8c281611d88df9264731ad7de74b775d1d6dadf5705c37e68e3151f4ad9b"} Jan 31 08:25:30 crc kubenswrapper[4826]: I0131 08:25:30.676596 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=3.67657695 podStartE2EDuration="3.67657695s" podCreationTimestamp="2026-01-31 08:25:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 08:25:30.671250117 +0000 UTC m=+2962.525136496" watchObservedRunningTime="2026-01-31 08:25:30.67657695 +0000 UTC m=+2962.530463309" Jan 31 08:25:31 crc kubenswrapper[4826]: I0131 08:25:31.661365 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"460b39a1-e8da-444b-b92c-fb9acec1dd12","Type":"ContainerStarted","Data":"576101596ee0200cc23c84905569a3813ddc41e4cab6c5cfe4d9a4905fe262cb"} Jan 31 08:25:31 crc kubenswrapper[4826]: I0131 08:25:31.661942 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 08:25:31 crc kubenswrapper[4826]: I0131 08:25:31.700432 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.4192007110000002 podStartE2EDuration="7.700408638s" podCreationTimestamp="2026-01-31 08:25:24 +0000 UTC" firstStartedPulling="2026-01-31 08:25:25.452917073 +0000 UTC m=+2957.306803432" lastFinishedPulling="2026-01-31 08:25:30.734125 +0000 UTC m=+2962.588011359" observedRunningTime="2026-01-31 08:25:31.682947817 +0000 UTC m=+2963.536834176" watchObservedRunningTime="2026-01-31 08:25:31.700408638 
+0000 UTC m=+2963.554294997" Jan 31 08:25:35 crc kubenswrapper[4826]: I0131 08:25:35.498842 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Jan 31 08:25:35 crc kubenswrapper[4826]: I0131 08:25:35.573572 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Jan 31 08:25:35 crc kubenswrapper[4826]: I0131 08:25:35.696590 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="646e0310-8e32-4f52-9441-e2e2ce66ed75" containerName="manila-share" containerID="cri-o://25cbdacba464a48688a030bf4e669d3497bb69d3d245c72e6a26f4e70c20452c" gracePeriod=30 Jan 31 08:25:35 crc kubenswrapper[4826]: I0131 08:25:35.696674 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="646e0310-8e32-4f52-9441-e2e2ce66ed75" containerName="probe" containerID="cri-o://ee94eab61baf1e6efc6f642a297eb185b906695f30a5d2f800c39767553f00c6" gracePeriod=30 Jan 31 08:25:36 crc kubenswrapper[4826]: I0131 08:25:36.707262 4826 generic.go:334] "Generic (PLEG): container finished" podID="646e0310-8e32-4f52-9441-e2e2ce66ed75" containerID="ee94eab61baf1e6efc6f642a297eb185b906695f30a5d2f800c39767553f00c6" exitCode=0 Jan 31 08:25:36 crc kubenswrapper[4826]: I0131 08:25:36.707553 4826 generic.go:334] "Generic (PLEG): container finished" podID="646e0310-8e32-4f52-9441-e2e2ce66ed75" containerID="25cbdacba464a48688a030bf4e669d3497bb69d3d245c72e6a26f4e70c20452c" exitCode=1 Jan 31 08:25:36 crc kubenswrapper[4826]: I0131 08:25:36.707349 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"646e0310-8e32-4f52-9441-e2e2ce66ed75","Type":"ContainerDied","Data":"ee94eab61baf1e6efc6f642a297eb185b906695f30a5d2f800c39767553f00c6"} Jan 31 08:25:36 crc kubenswrapper[4826]: I0131 08:25:36.707598 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"646e0310-8e32-4f52-9441-e2e2ce66ed75","Type":"ContainerDied","Data":"25cbdacba464a48688a030bf4e669d3497bb69d3d245c72e6a26f4e70c20452c"} Jan 31 08:25:37 crc kubenswrapper[4826]: I0131 08:25:37.175075 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 31 08:25:37 crc kubenswrapper[4826]: I0131 08:25:37.295273 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/646e0310-8e32-4f52-9441-e2e2ce66ed75-etc-machine-id\") pod \"646e0310-8e32-4f52-9441-e2e2ce66ed75\" (UID: \"646e0310-8e32-4f52-9441-e2e2ce66ed75\") " Jan 31 08:25:37 crc kubenswrapper[4826]: I0131 08:25:37.295341 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/646e0310-8e32-4f52-9441-e2e2ce66ed75-combined-ca-bundle\") pod \"646e0310-8e32-4f52-9441-e2e2ce66ed75\" (UID: \"646e0310-8e32-4f52-9441-e2e2ce66ed75\") " Jan 31 08:25:37 crc kubenswrapper[4826]: I0131 08:25:37.295406 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/646e0310-8e32-4f52-9441-e2e2ce66ed75-ceph\") pod \"646e0310-8e32-4f52-9441-e2e2ce66ed75\" (UID: \"646e0310-8e32-4f52-9441-e2e2ce66ed75\") " Jan 31 08:25:37 crc kubenswrapper[4826]: I0131 08:25:37.295446 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/646e0310-8e32-4f52-9441-e2e2ce66ed75-config-data-custom\") pod \"646e0310-8e32-4f52-9441-e2e2ce66ed75\" (UID: \"646e0310-8e32-4f52-9441-e2e2ce66ed75\") " Jan 31 08:25:37 crc kubenswrapper[4826]: I0131 08:25:37.295438 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/646e0310-8e32-4f52-9441-e2e2ce66ed75-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "646e0310-8e32-4f52-9441-e2e2ce66ed75" (UID: "646e0310-8e32-4f52-9441-e2e2ce66ed75"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 08:25:37 crc kubenswrapper[4826]: I0131 08:25:37.295569 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/646e0310-8e32-4f52-9441-e2e2ce66ed75-var-lib-manila\") pod \"646e0310-8e32-4f52-9441-e2e2ce66ed75\" (UID: \"646e0310-8e32-4f52-9441-e2e2ce66ed75\") " Jan 31 08:25:37 crc kubenswrapper[4826]: I0131 08:25:37.295661 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/646e0310-8e32-4f52-9441-e2e2ce66ed75-var-lib-manila" (OuterVolumeSpecName: "var-lib-manila") pod "646e0310-8e32-4f52-9441-e2e2ce66ed75" (UID: "646e0310-8e32-4f52-9441-e2e2ce66ed75"). InnerVolumeSpecName "var-lib-manila". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 08:25:37 crc kubenswrapper[4826]: I0131 08:25:37.295665 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgshx\" (UniqueName: \"kubernetes.io/projected/646e0310-8e32-4f52-9441-e2e2ce66ed75-kube-api-access-fgshx\") pod \"646e0310-8e32-4f52-9441-e2e2ce66ed75\" (UID: \"646e0310-8e32-4f52-9441-e2e2ce66ed75\") " Jan 31 08:25:37 crc kubenswrapper[4826]: I0131 08:25:37.295781 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/646e0310-8e32-4f52-9441-e2e2ce66ed75-config-data\") pod \"646e0310-8e32-4f52-9441-e2e2ce66ed75\" (UID: \"646e0310-8e32-4f52-9441-e2e2ce66ed75\") " Jan 31 08:25:37 crc kubenswrapper[4826]: I0131 08:25:37.295821 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/646e0310-8e32-4f52-9441-e2e2ce66ed75-scripts\") pod \"646e0310-8e32-4f52-9441-e2e2ce66ed75\" (UID: \"646e0310-8e32-4f52-9441-e2e2ce66ed75\") " Jan 31 08:25:37 crc kubenswrapper[4826]: I0131 08:25:37.296281 4826 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/646e0310-8e32-4f52-9441-e2e2ce66ed75-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 31 08:25:37 crc kubenswrapper[4826]: I0131 08:25:37.296299 4826 reconciler_common.go:293] "Volume detached for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/646e0310-8e32-4f52-9441-e2e2ce66ed75-var-lib-manila\") on node \"crc\" DevicePath \"\"" Jan 31 08:25:37 crc kubenswrapper[4826]: I0131 08:25:37.301318 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/646e0310-8e32-4f52-9441-e2e2ce66ed75-kube-api-access-fgshx" (OuterVolumeSpecName: "kube-api-access-fgshx") pod "646e0310-8e32-4f52-9441-e2e2ce66ed75" (UID: "646e0310-8e32-4f52-9441-e2e2ce66ed75"). InnerVolumeSpecName "kube-api-access-fgshx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:25:37 crc kubenswrapper[4826]: I0131 08:25:37.301461 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/646e0310-8e32-4f52-9441-e2e2ce66ed75-ceph" (OuterVolumeSpecName: "ceph") pod "646e0310-8e32-4f52-9441-e2e2ce66ed75" (UID: "646e0310-8e32-4f52-9441-e2e2ce66ed75"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:25:37 crc kubenswrapper[4826]: I0131 08:25:37.302024 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/646e0310-8e32-4f52-9441-e2e2ce66ed75-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "646e0310-8e32-4f52-9441-e2e2ce66ed75" (UID: "646e0310-8e32-4f52-9441-e2e2ce66ed75"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:25:37 crc kubenswrapper[4826]: I0131 08:25:37.307161 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/646e0310-8e32-4f52-9441-e2e2ce66ed75-scripts" (OuterVolumeSpecName: "scripts") pod "646e0310-8e32-4f52-9441-e2e2ce66ed75" (UID: "646e0310-8e32-4f52-9441-e2e2ce66ed75"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:25:37 crc kubenswrapper[4826]: I0131 08:25:37.365245 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/646e0310-8e32-4f52-9441-e2e2ce66ed75-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "646e0310-8e32-4f52-9441-e2e2ce66ed75" (UID: "646e0310-8e32-4f52-9441-e2e2ce66ed75"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:25:37 crc kubenswrapper[4826]: I0131 08:25:37.392930 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/646e0310-8e32-4f52-9441-e2e2ce66ed75-config-data" (OuterVolumeSpecName: "config-data") pod "646e0310-8e32-4f52-9441-e2e2ce66ed75" (UID: "646e0310-8e32-4f52-9441-e2e2ce66ed75"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:25:37 crc kubenswrapper[4826]: I0131 08:25:37.397853 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/646e0310-8e32-4f52-9441-e2e2ce66ed75-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 08:25:37 crc kubenswrapper[4826]: I0131 08:25:37.397885 4826 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/646e0310-8e32-4f52-9441-e2e2ce66ed75-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 08:25:37 crc kubenswrapper[4826]: I0131 08:25:37.397897 4826 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/646e0310-8e32-4f52-9441-e2e2ce66ed75-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 31 08:25:37 crc kubenswrapper[4826]: I0131 08:25:37.397911 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgshx\" (UniqueName: \"kubernetes.io/projected/646e0310-8e32-4f52-9441-e2e2ce66ed75-kube-api-access-fgshx\") on node \"crc\" DevicePath \"\"" Jan 31 08:25:37 crc kubenswrapper[4826]: I0131 08:25:37.397921 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/646e0310-8e32-4f52-9441-e2e2ce66ed75-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 08:25:37 crc kubenswrapper[4826]: I0131 08:25:37.397930 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/646e0310-8e32-4f52-9441-e2e2ce66ed75-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 08:25:37 crc kubenswrapper[4826]: I0131 08:25:37.727333 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"646e0310-8e32-4f52-9441-e2e2ce66ed75","Type":"ContainerDied","Data":"880bd9c763faeaddc3d5cf60817f6fcaa65e3a19f0fb7a3a2d582f50457bf4c3"} Jan 31 08:25:37 crc kubenswrapper[4826]: I0131 08:25:37.727411 4826 scope.go:117] "RemoveContainer" containerID="ee94eab61baf1e6efc6f642a297eb185b906695f30a5d2f800c39767553f00c6" Jan 31 08:25:37 crc kubenswrapper[4826]: I0131 08:25:37.727627 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 31 08:25:37 crc kubenswrapper[4826]: I0131 08:25:37.766928 4826 scope.go:117] "RemoveContainer" containerID="25cbdacba464a48688a030bf4e669d3497bb69d3d245c72e6a26f4e70c20452c" Jan 31 08:25:37 crc kubenswrapper[4826]: I0131 08:25:37.771795 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Jan 31 08:25:37 crc kubenswrapper[4826]: I0131 08:25:37.777787 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-share-share1-0"] Jan 31 08:25:37 crc kubenswrapper[4826]: I0131 08:25:37.805645 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Jan 31 08:25:37 crc kubenswrapper[4826]: E0131 08:25:37.806039 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="646e0310-8e32-4f52-9441-e2e2ce66ed75" containerName="manila-share" Jan 31 08:25:37 crc kubenswrapper[4826]: I0131 08:25:37.806060 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="646e0310-8e32-4f52-9441-e2e2ce66ed75" containerName="manila-share" Jan 31 08:25:37 crc kubenswrapper[4826]: E0131 08:25:37.806124 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="646e0310-8e32-4f52-9441-e2e2ce66ed75" containerName="probe" Jan 31 08:25:37 crc kubenswrapper[4826]: I0131 08:25:37.806134 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="646e0310-8e32-4f52-9441-e2e2ce66ed75" containerName="probe" Jan 31 08:25:37 crc kubenswrapper[4826]: I0131 08:25:37.806315 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="646e0310-8e32-4f52-9441-e2e2ce66ed75" containerName="manila-share" Jan 31 08:25:37 crc kubenswrapper[4826]: I0131 08:25:37.806332 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="646e0310-8e32-4f52-9441-e2e2ce66ed75" containerName="probe" Jan 31 08:25:37 crc kubenswrapper[4826]: I0131 08:25:37.807457 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 31 08:25:37 crc kubenswrapper[4826]: I0131 08:25:37.811282 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Jan 31 08:25:37 crc kubenswrapper[4826]: I0131 08:25:37.813486 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 31 08:25:38 crc kubenswrapper[4826]: I0131 08:25:38.010059 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/62702499-49e3-4ea5-b2da-a4bae827517d-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"62702499-49e3-4ea5-b2da-a4bae827517d\") " pod="openstack/manila-share-share1-0" Jan 31 08:25:38 crc kubenswrapper[4826]: I0131 08:25:38.010132 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/62702499-49e3-4ea5-b2da-a4bae827517d-ceph\") pod \"manila-share-share1-0\" (UID: \"62702499-49e3-4ea5-b2da-a4bae827517d\") " pod="openstack/manila-share-share1-0" Jan 31 08:25:38 crc kubenswrapper[4826]: I0131 08:25:38.010494 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62702499-49e3-4ea5-b2da-a4bae827517d-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"62702499-49e3-4ea5-b2da-a4bae827517d\") " pod="openstack/manila-share-share1-0" Jan 31 08:25:38 crc kubenswrapper[4826]: I0131 08:25:38.010539 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62702499-49e3-4ea5-b2da-a4bae827517d-scripts\") pod \"manila-share-share1-0\" (UID: \"62702499-49e3-4ea5-b2da-a4bae827517d\") " pod="openstack/manila-share-share1-0" Jan 31 08:25:38 crc kubenswrapper[4826]: I0131 08:25:38.010558 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62702499-49e3-4ea5-b2da-a4bae827517d-config-data\") pod \"manila-share-share1-0\" (UID: \"62702499-49e3-4ea5-b2da-a4bae827517d\") " pod="openstack/manila-share-share1-0" Jan 31 08:25:38 crc kubenswrapper[4826]: I0131 08:25:38.010607 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jgtz\" (UniqueName: \"kubernetes.io/projected/62702499-49e3-4ea5-b2da-a4bae827517d-kube-api-access-5jgtz\") pod \"manila-share-share1-0\" (UID: \"62702499-49e3-4ea5-b2da-a4bae827517d\") " pod="openstack/manila-share-share1-0" Jan 31 08:25:38 crc kubenswrapper[4826]: I0131 08:25:38.010627 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/62702499-49e3-4ea5-b2da-a4bae827517d-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"62702499-49e3-4ea5-b2da-a4bae827517d\") " pod="openstack/manila-share-share1-0" Jan 31 08:25:38 crc kubenswrapper[4826]: I0131 08:25:38.010640 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/62702499-49e3-4ea5-b2da-a4bae827517d-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"62702499-49e3-4ea5-b2da-a4bae827517d\") " pod="openstack/manila-share-share1-0" Jan 31 08:25:38 crc 
kubenswrapper[4826]: I0131 08:25:38.098452 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Jan 31 08:25:38 crc kubenswrapper[4826]: I0131 08:25:38.111991 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/62702499-49e3-4ea5-b2da-a4bae827517d-ceph\") pod \"manila-share-share1-0\" (UID: \"62702499-49e3-4ea5-b2da-a4bae827517d\") " pod="openstack/manila-share-share1-0" Jan 31 08:25:38 crc kubenswrapper[4826]: I0131 08:25:38.112221 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62702499-49e3-4ea5-b2da-a4bae827517d-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"62702499-49e3-4ea5-b2da-a4bae827517d\") " pod="openstack/manila-share-share1-0" Jan 31 08:25:38 crc kubenswrapper[4826]: I0131 08:25:38.112283 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62702499-49e3-4ea5-b2da-a4bae827517d-scripts\") pod \"manila-share-share1-0\" (UID: \"62702499-49e3-4ea5-b2da-a4bae827517d\") " pod="openstack/manila-share-share1-0" Jan 31 08:25:38 crc kubenswrapper[4826]: I0131 08:25:38.112317 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62702499-49e3-4ea5-b2da-a4bae827517d-config-data\") pod \"manila-share-share1-0\" (UID: \"62702499-49e3-4ea5-b2da-a4bae827517d\") " pod="openstack/manila-share-share1-0" Jan 31 08:25:38 crc kubenswrapper[4826]: I0131 08:25:38.112366 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jgtz\" (UniqueName: \"kubernetes.io/projected/62702499-49e3-4ea5-b2da-a4bae827517d-kube-api-access-5jgtz\") pod \"manila-share-share1-0\" (UID: \"62702499-49e3-4ea5-b2da-a4bae827517d\") " pod="openstack/manila-share-share1-0" Jan 31 08:25:38 crc kubenswrapper[4826]: I0131 08:25:38.112388 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/62702499-49e3-4ea5-b2da-a4bae827517d-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"62702499-49e3-4ea5-b2da-a4bae827517d\") " pod="openstack/manila-share-share1-0" Jan 31 08:25:38 crc kubenswrapper[4826]: I0131 08:25:38.112410 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/62702499-49e3-4ea5-b2da-a4bae827517d-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"62702499-49e3-4ea5-b2da-a4bae827517d\") " pod="openstack/manila-share-share1-0" Jan 31 08:25:38 crc kubenswrapper[4826]: I0131 08:25:38.112488 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/62702499-49e3-4ea5-b2da-a4bae827517d-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"62702499-49e3-4ea5-b2da-a4bae827517d\") " pod="openstack/manila-share-share1-0" Jan 31 08:25:38 crc kubenswrapper[4826]: I0131 08:25:38.113053 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/62702499-49e3-4ea5-b2da-a4bae827517d-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"62702499-49e3-4ea5-b2da-a4bae827517d\") " pod="openstack/manila-share-share1-0" Jan 31 08:25:38 crc kubenswrapper[4826]: I0131 
08:25:38.114040 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/62702499-49e3-4ea5-b2da-a4bae827517d-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"62702499-49e3-4ea5-b2da-a4bae827517d\") " pod="openstack/manila-share-share1-0" Jan 31 08:25:38 crc kubenswrapper[4826]: I0131 08:25:38.117333 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62702499-49e3-4ea5-b2da-a4bae827517d-scripts\") pod \"manila-share-share1-0\" (UID: \"62702499-49e3-4ea5-b2da-a4bae827517d\") " pod="openstack/manila-share-share1-0" Jan 31 08:25:38 crc kubenswrapper[4826]: I0131 08:25:38.117366 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62702499-49e3-4ea5-b2da-a4bae827517d-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"62702499-49e3-4ea5-b2da-a4bae827517d\") " pod="openstack/manila-share-share1-0" Jan 31 08:25:38 crc kubenswrapper[4826]: I0131 08:25:38.126834 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/62702499-49e3-4ea5-b2da-a4bae827517d-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"62702499-49e3-4ea5-b2da-a4bae827517d\") " pod="openstack/manila-share-share1-0" Jan 31 08:25:38 crc kubenswrapper[4826]: I0131 08:25:38.129694 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jgtz\" (UniqueName: \"kubernetes.io/projected/62702499-49e3-4ea5-b2da-a4bae827517d-kube-api-access-5jgtz\") pod \"manila-share-share1-0\" (UID: \"62702499-49e3-4ea5-b2da-a4bae827517d\") " pod="openstack/manila-share-share1-0" Jan 31 08:25:38 crc kubenswrapper[4826]: I0131 08:25:38.132828 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62702499-49e3-4ea5-b2da-a4bae827517d-config-data\") pod \"manila-share-share1-0\" (UID: \"62702499-49e3-4ea5-b2da-a4bae827517d\") " pod="openstack/manila-share-share1-0" Jan 31 08:25:38 crc kubenswrapper[4826]: I0131 08:25:38.133718 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/62702499-49e3-4ea5-b2da-a4bae827517d-ceph\") pod \"manila-share-share1-0\" (UID: \"62702499-49e3-4ea5-b2da-a4bae827517d\") " pod="openstack/manila-share-share1-0" Jan 31 08:25:38 crc kubenswrapper[4826]: I0131 08:25:38.428176 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 31 08:25:38 crc kubenswrapper[4826]: I0131 08:25:38.828869 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="646e0310-8e32-4f52-9441-e2e2ce66ed75" path="/var/lib/kubelet/pods/646e0310-8e32-4f52-9441-e2e2ce66ed75/volumes" Jan 31 08:25:39 crc kubenswrapper[4826]: I0131 08:25:39.004350 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 31 08:25:39 crc kubenswrapper[4826]: W0131 08:25:39.009140 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62702499_49e3_4ea5_b2da_a4bae827517d.slice/crio-e52d2395e121863845722b9c0ceeb70e20e1578638ac35e41262ca156bd7583a WatchSource:0}: Error finding container e52d2395e121863845722b9c0ceeb70e20e1578638ac35e41262ca156bd7583a: Status 404 returned error can't find the container with id e52d2395e121863845722b9c0ceeb70e20e1578638ac35e41262ca156bd7583a Jan 31 08:25:39 crc kubenswrapper[4826]: I0131 08:25:39.761052 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"62702499-49e3-4ea5-b2da-a4bae827517d","Type":"ContainerStarted","Data":"fcaff5182972186ef3ace3efced5a16a7608ed9f57272a96cf0a7a9321e4fec2"} Jan 31 08:25:39 crc kubenswrapper[4826]: I0131 08:25:39.761715 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"62702499-49e3-4ea5-b2da-a4bae827517d","Type":"ContainerStarted","Data":"e52d2395e121863845722b9c0ceeb70e20e1578638ac35e41262ca156bd7583a"} Jan 31 08:25:40 crc kubenswrapper[4826]: I0131 08:25:40.769363 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"62702499-49e3-4ea5-b2da-a4bae827517d","Type":"ContainerStarted","Data":"1e9e668a02a7659b16ff1d7973db25598181348177d0cf04f4abdc86cba94d89"} Jan 31 08:25:40 crc kubenswrapper[4826]: I0131 08:25:40.797147 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=3.79712621 podStartE2EDuration="3.79712621s" podCreationTimestamp="2026-01-31 08:25:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 08:25:40.791888409 +0000 UTC m=+2972.645774768" watchObservedRunningTime="2026-01-31 08:25:40.79712621 +0000 UTC m=+2972.651012589" Jan 31 08:25:48 crc kubenswrapper[4826]: I0131 08:25:48.429991 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Jan 31 08:25:49 crc kubenswrapper[4826]: I0131 08:25:49.695418 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Jan 31 08:25:54 crc kubenswrapper[4826]: I0131 08:25:54.946747 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 31 08:25:57 crc kubenswrapper[4826]: I0131 08:25:57.376697 4826 patch_prober.go:28] interesting pod/machine-config-daemon-8v6ng container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 08:25:57 crc kubenswrapper[4826]: I0131 08:25:57.377121 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" 
podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 08:26:00 crc kubenswrapper[4826]: I0131 08:26:00.123074 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Jan 31 08:26:22 crc kubenswrapper[4826]: E0131 08:26:22.773374 4826 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.13:38922->38.102.83.13:46067: write tcp 38.102.83.13:38922->38.102.83.13:46067: write: broken pipe Jan 31 08:26:27 crc kubenswrapper[4826]: I0131 08:26:27.377372 4826 patch_prober.go:28] interesting pod/machine-config-daemon-8v6ng container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 08:26:27 crc kubenswrapper[4826]: I0131 08:26:27.378031 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 08:26:27 crc kubenswrapper[4826]: I0131 08:26:27.378095 4826 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" Jan 31 08:26:27 crc kubenswrapper[4826]: I0131 08:26:27.378945 4826 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"97a91a043dac6fca9c2206d6ce242261d9f98e6fce522f81be863d2f1501deea"} pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 08:26:27 crc kubenswrapper[4826]: I0131 08:26:27.379037 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" containerID="cri-o://97a91a043dac6fca9c2206d6ce242261d9f98e6fce522f81be863d2f1501deea" gracePeriod=600 Jan 31 08:26:27 crc kubenswrapper[4826]: E0131 08:26:27.513624 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:26:28 crc kubenswrapper[4826]: I0131 08:26:28.213611 4826 generic.go:334] "Generic (PLEG): container finished" podID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerID="97a91a043dac6fca9c2206d6ce242261d9f98e6fce522f81be863d2f1501deea" exitCode=0 Jan 31 08:26:28 crc kubenswrapper[4826]: I0131 08:26:28.213660 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" event={"ID":"ed10f53b-565a-4d14-a1d8-feabc15f08ea","Type":"ContainerDied","Data":"97a91a043dac6fca9c2206d6ce242261d9f98e6fce522f81be863d2f1501deea"} Jan 31 08:26:28 crc kubenswrapper[4826]: I0131 
08:26:28.213706 4826 scope.go:117] "RemoveContainer" containerID="de646e7eb20e44a926cca255169dcb2595a5e6b359b5609320726326f8a069e5" Jan 31 08:26:28 crc kubenswrapper[4826]: I0131 08:26:28.214578 4826 scope.go:117] "RemoveContainer" containerID="97a91a043dac6fca9c2206d6ce242261d9f98e6fce522f81be863d2f1501deea" Jan 31 08:26:28 crc kubenswrapper[4826]: E0131 08:26:28.215051 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:26:42 crc kubenswrapper[4826]: I0131 08:26:42.810091 4826 scope.go:117] "RemoveContainer" containerID="97a91a043dac6fca9c2206d6ce242261d9f98e6fce522f81be863d2f1501deea" Jan 31 08:26:42 crc kubenswrapper[4826]: E0131 08:26:42.811213 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:26:49 crc kubenswrapper[4826]: I0131 08:26:49.069411 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest-s00-full"] Jan 31 08:26:49 crc kubenswrapper[4826]: I0131 08:26:49.071550 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:26:49 crc kubenswrapper[4826]: I0131 08:26:49.081587 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 31 08:26:49 crc kubenswrapper[4826]: I0131 08:26:49.083175 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 31 08:26:49 crc kubenswrapper[4826]: I0131 08:26:49.083764 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-v4wfk" Jan 31 08:26:49 crc kubenswrapper[4826]: I0131 08:26:49.086529 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 31 08:26:49 crc kubenswrapper[4826]: I0131 08:26:49.097406 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s00-full"] Jan 31 08:26:49 crc kubenswrapper[4826]: I0131 08:26:49.200003 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e068e101-7fa0-42fa-b34b-fb9ba93466aa-openstack-config-secret\") pod \"tempest-tests-tempest-s00-full\" (UID: \"e068e101-7fa0-42fa-b34b-fb9ba93466aa\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:26:49 crc kubenswrapper[4826]: I0131 08:26:49.200070 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e068e101-7fa0-42fa-b34b-fb9ba93466aa-config-data\") pod \"tempest-tests-tempest-s00-full\" (UID: \"e068e101-7fa0-42fa-b34b-fb9ba93466aa\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:26:49 crc 
kubenswrapper[4826]: I0131 08:26:49.200126 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e068e101-7fa0-42fa-b34b-fb9ba93466aa-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-full\" (UID: \"e068e101-7fa0-42fa-b34b-fb9ba93466aa\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:26:49 crc kubenswrapper[4826]: I0131 08:26:49.200167 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgnlw\" (UniqueName: \"kubernetes.io/projected/e068e101-7fa0-42fa-b34b-fb9ba93466aa-kube-api-access-pgnlw\") pod \"tempest-tests-tempest-s00-full\" (UID: \"e068e101-7fa0-42fa-b34b-fb9ba93466aa\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:26:49 crc kubenswrapper[4826]: I0131 08:26:49.200182 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e068e101-7fa0-42fa-b34b-fb9ba93466aa-openstack-config\") pod \"tempest-tests-tempest-s00-full\" (UID: \"e068e101-7fa0-42fa-b34b-fb9ba93466aa\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:26:49 crc kubenswrapper[4826]: I0131 08:26:49.200231 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e068e101-7fa0-42fa-b34b-fb9ba93466aa-ssh-key\") pod \"tempest-tests-tempest-s00-full\" (UID: \"e068e101-7fa0-42fa-b34b-fb9ba93466aa\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:26:49 crc kubenswrapper[4826]: I0131 08:26:49.200276 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e068e101-7fa0-42fa-b34b-fb9ba93466aa-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-full\" (UID: \"e068e101-7fa0-42fa-b34b-fb9ba93466aa\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:26:49 crc kubenswrapper[4826]: I0131 08:26:49.200308 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e068e101-7fa0-42fa-b34b-fb9ba93466aa-ceph\") pod \"tempest-tests-tempest-s00-full\" (UID: \"e068e101-7fa0-42fa-b34b-fb9ba93466aa\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:26:49 crc kubenswrapper[4826]: I0131 08:26:49.200415 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e068e101-7fa0-42fa-b34b-fb9ba93466aa-ca-certs\") pod \"tempest-tests-tempest-s00-full\" (UID: \"e068e101-7fa0-42fa-b34b-fb9ba93466aa\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:26:49 crc kubenswrapper[4826]: I0131 08:26:49.200626 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest-s00-full\" (UID: \"e068e101-7fa0-42fa-b34b-fb9ba93466aa\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:26:49 crc kubenswrapper[4826]: I0131 08:26:49.303640 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: 
\"kubernetes.io/empty-dir/e068e101-7fa0-42fa-b34b-fb9ba93466aa-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-full\" (UID: \"e068e101-7fa0-42fa-b34b-fb9ba93466aa\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:26:49 crc kubenswrapper[4826]: I0131 08:26:49.303707 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgnlw\" (UniqueName: \"kubernetes.io/projected/e068e101-7fa0-42fa-b34b-fb9ba93466aa-kube-api-access-pgnlw\") pod \"tempest-tests-tempest-s00-full\" (UID: \"e068e101-7fa0-42fa-b34b-fb9ba93466aa\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:26:49 crc kubenswrapper[4826]: I0131 08:26:49.303746 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e068e101-7fa0-42fa-b34b-fb9ba93466aa-openstack-config\") pod \"tempest-tests-tempest-s00-full\" (UID: \"e068e101-7fa0-42fa-b34b-fb9ba93466aa\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:26:49 crc kubenswrapper[4826]: I0131 08:26:49.303771 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e068e101-7fa0-42fa-b34b-fb9ba93466aa-ssh-key\") pod \"tempest-tests-tempest-s00-full\" (UID: \"e068e101-7fa0-42fa-b34b-fb9ba93466aa\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:26:49 crc kubenswrapper[4826]: I0131 08:26:49.304100 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e068e101-7fa0-42fa-b34b-fb9ba93466aa-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-full\" (UID: \"e068e101-7fa0-42fa-b34b-fb9ba93466aa\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:26:49 crc kubenswrapper[4826]: I0131 08:26:49.304135 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e068e101-7fa0-42fa-b34b-fb9ba93466aa-ceph\") pod \"tempest-tests-tempest-s00-full\" (UID: \"e068e101-7fa0-42fa-b34b-fb9ba93466aa\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:26:49 crc kubenswrapper[4826]: I0131 08:26:49.304203 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e068e101-7fa0-42fa-b34b-fb9ba93466aa-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-full\" (UID: \"e068e101-7fa0-42fa-b34b-fb9ba93466aa\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:26:49 crc kubenswrapper[4826]: I0131 08:26:49.305645 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e068e101-7fa0-42fa-b34b-fb9ba93466aa-openstack-config\") pod \"tempest-tests-tempest-s00-full\" (UID: \"e068e101-7fa0-42fa-b34b-fb9ba93466aa\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:26:49 crc kubenswrapper[4826]: I0131 08:26:49.306883 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e068e101-7fa0-42fa-b34b-fb9ba93466aa-ca-certs\") pod \"tempest-tests-tempest-s00-full\" (UID: \"e068e101-7fa0-42fa-b34b-fb9ba93466aa\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:26:49 crc kubenswrapper[4826]: I0131 08:26:49.307611 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest-s00-full\" (UID: \"e068e101-7fa0-42fa-b34b-fb9ba93466aa\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:26:49 crc kubenswrapper[4826]: I0131 08:26:49.307696 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e068e101-7fa0-42fa-b34b-fb9ba93466aa-openstack-config-secret\") pod \"tempest-tests-tempest-s00-full\" (UID: \"e068e101-7fa0-42fa-b34b-fb9ba93466aa\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:26:49 crc kubenswrapper[4826]: I0131 08:26:49.307767 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e068e101-7fa0-42fa-b34b-fb9ba93466aa-config-data\") pod \"tempest-tests-tempest-s00-full\" (UID: \"e068e101-7fa0-42fa-b34b-fb9ba93466aa\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:26:49 crc kubenswrapper[4826]: I0131 08:26:49.309420 4826 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest-s00-full\" (UID: \"e068e101-7fa0-42fa-b34b-fb9ba93466aa\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:26:49 crc kubenswrapper[4826]: I0131 08:26:49.310017 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e068e101-7fa0-42fa-b34b-fb9ba93466aa-ceph\") pod \"tempest-tests-tempest-s00-full\" (UID: \"e068e101-7fa0-42fa-b34b-fb9ba93466aa\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:26:49 crc kubenswrapper[4826]: I0131 08:26:49.310531 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e068e101-7fa0-42fa-b34b-fb9ba93466aa-ca-certs\") pod \"tempest-tests-tempest-s00-full\" (UID: \"e068e101-7fa0-42fa-b34b-fb9ba93466aa\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:26:49 crc kubenswrapper[4826]: I0131 08:26:49.310594 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e068e101-7fa0-42fa-b34b-fb9ba93466aa-ssh-key\") pod \"tempest-tests-tempest-s00-full\" (UID: \"e068e101-7fa0-42fa-b34b-fb9ba93466aa\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:26:49 crc kubenswrapper[4826]: I0131 08:26:49.310868 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e068e101-7fa0-42fa-b34b-fb9ba93466aa-config-data\") pod \"tempest-tests-tempest-s00-full\" (UID: \"e068e101-7fa0-42fa-b34b-fb9ba93466aa\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:26:49 crc kubenswrapper[4826]: I0131 08:26:49.313888 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e068e101-7fa0-42fa-b34b-fb9ba93466aa-openstack-config-secret\") pod \"tempest-tests-tempest-s00-full\" (UID: \"e068e101-7fa0-42fa-b34b-fb9ba93466aa\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:26:49 crc kubenswrapper[4826]: I0131 08:26:49.316386 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/e068e101-7fa0-42fa-b34b-fb9ba93466aa-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-full\" (UID: \"e068e101-7fa0-42fa-b34b-fb9ba93466aa\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:26:49 crc kubenswrapper[4826]: I0131 08:26:49.319305 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgnlw\" (UniqueName: \"kubernetes.io/projected/e068e101-7fa0-42fa-b34b-fb9ba93466aa-kube-api-access-pgnlw\") pod \"tempest-tests-tempest-s00-full\" (UID: \"e068e101-7fa0-42fa-b34b-fb9ba93466aa\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:26:49 crc kubenswrapper[4826]: I0131 08:26:49.337777 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest-s00-full\" (UID: \"e068e101-7fa0-42fa-b34b-fb9ba93466aa\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:26:49 crc kubenswrapper[4826]: I0131 08:26:49.390681 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:26:49 crc kubenswrapper[4826]: I0131 08:26:49.933239 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s00-full"] Jan 31 08:26:49 crc kubenswrapper[4826]: W0131 08:26:49.940651 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode068e101_7fa0_42fa_b34b_fb9ba93466aa.slice/crio-f0499a2af22dcccd7d569529082e693ecd452c59a85bc6712b9cadd1e93dd382 WatchSource:0}: Error finding container f0499a2af22dcccd7d569529082e693ecd452c59a85bc6712b9cadd1e93dd382: Status 404 returned error can't find the container with id f0499a2af22dcccd7d569529082e693ecd452c59a85bc6712b9cadd1e93dd382 Jan 31 08:26:50 crc kubenswrapper[4826]: I0131 08:26:50.398289 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-full" event={"ID":"e068e101-7fa0-42fa-b34b-fb9ba93466aa","Type":"ContainerStarted","Data":"f0499a2af22dcccd7d569529082e693ecd452c59a85bc6712b9cadd1e93dd382"} Jan 31 08:26:57 crc kubenswrapper[4826]: I0131 08:26:57.809005 4826 scope.go:117] "RemoveContainer" containerID="97a91a043dac6fca9c2206d6ce242261d9f98e6fce522f81be863d2f1501deea" Jan 31 08:26:57 crc kubenswrapper[4826]: E0131 08:26:57.809894 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:27:09 crc kubenswrapper[4826]: I0131 08:27:09.809500 4826 scope.go:117] "RemoveContainer" containerID="97a91a043dac6fca9c2206d6ce242261d9f98e6fce522f81be863d2f1501deea" Jan 31 08:27:09 crc kubenswrapper[4826]: E0131 08:27:09.810541 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 
08:27:24 crc kubenswrapper[4826]: I0131 08:27:24.808740 4826 scope.go:117] "RemoveContainer" containerID="97a91a043dac6fca9c2206d6ce242261d9f98e6fce522f81be863d2f1501deea" Jan 31 08:27:24 crc kubenswrapper[4826]: E0131 08:27:24.809816 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:27:29 crc kubenswrapper[4826]: E0131 08:27:29.372302 4826 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Jan 31 08:27:29 crc kubenswrapper[4826]: E0131 08:27:29.374198 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ceph,ReadOnly:true,MountPath:/etc/ceph,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pgnlw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&
SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest-s00-full_openstack(e068e101-7fa0-42fa-b34b-fb9ba93466aa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 08:27:29 crc kubenswrapper[4826]: E0131 08:27:29.375548 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest-s00-full" podUID="e068e101-7fa0-42fa-b34b-fb9ba93466aa" Jan 31 08:27:29 crc kubenswrapper[4826]: E0131 08:27:29.782345 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest-s00-full" podUID="e068e101-7fa0-42fa-b34b-fb9ba93466aa" Jan 31 08:27:36 crc kubenswrapper[4826]: I0131 08:27:36.808905 4826 scope.go:117] "RemoveContainer" containerID="97a91a043dac6fca9c2206d6ce242261d9f98e6fce522f81be863d2f1501deea" Jan 31 08:27:36 crc kubenswrapper[4826]: E0131 08:27:36.809784 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:27:41 crc kubenswrapper[4826]: I0131 08:27:41.509419 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 31 08:27:42 crc kubenswrapper[4826]: I0131 08:27:42.890772 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-full" event={"ID":"e068e101-7fa0-42fa-b34b-fb9ba93466aa","Type":"ContainerStarted","Data":"45145f14ded6d40a4b73423be9c97d59626c50b7fe2b67a67b1ab090ce17e78e"} Jan 31 08:27:42 crc kubenswrapper[4826]: I0131 08:27:42.918459 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest-s00-full" podStartSLOduration=3.360370829 podStartE2EDuration="54.91843576s" podCreationTimestamp="2026-01-31 08:26:48 +0000 UTC" firstStartedPulling="2026-01-31 08:26:49.947297616 +0000 UTC m=+3041.801183975" lastFinishedPulling="2026-01-31 08:27:41.505362547 +0000 UTC m=+3093.359248906" observedRunningTime="2026-01-31 08:27:42.913879139 +0000 UTC m=+3094.767765498" watchObservedRunningTime="2026-01-31 08:27:42.91843576 +0000 UTC m=+3094.772322119" Jan 31 08:27:51 crc kubenswrapper[4826]: I0131 08:27:51.809336 4826 scope.go:117] "RemoveContainer" 
containerID="97a91a043dac6fca9c2206d6ce242261d9f98e6fce522f81be863d2f1501deea" Jan 31 08:27:51 crc kubenswrapper[4826]: E0131 08:27:51.810350 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:28:03 crc kubenswrapper[4826]: I0131 08:28:03.810067 4826 scope.go:117] "RemoveContainer" containerID="97a91a043dac6fca9c2206d6ce242261d9f98e6fce522f81be863d2f1501deea" Jan 31 08:28:03 crc kubenswrapper[4826]: E0131 08:28:03.811545 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:28:16 crc kubenswrapper[4826]: I0131 08:28:16.809431 4826 scope.go:117] "RemoveContainer" containerID="97a91a043dac6fca9c2206d6ce242261d9f98e6fce522f81be863d2f1501deea" Jan 31 08:28:16 crc kubenswrapper[4826]: E0131 08:28:16.810776 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:28:30 crc kubenswrapper[4826]: I0131 08:28:30.809834 4826 scope.go:117] "RemoveContainer" containerID="97a91a043dac6fca9c2206d6ce242261d9f98e6fce522f81be863d2f1501deea" Jan 31 08:28:30 crc kubenswrapper[4826]: E0131 08:28:30.810514 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:28:43 crc kubenswrapper[4826]: I0131 08:28:43.810463 4826 scope.go:117] "RemoveContainer" containerID="97a91a043dac6fca9c2206d6ce242261d9f98e6fce522f81be863d2f1501deea" Jan 31 08:28:43 crc kubenswrapper[4826]: E0131 08:28:43.811854 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:28:58 crc kubenswrapper[4826]: I0131 08:28:58.817833 4826 scope.go:117] "RemoveContainer" containerID="97a91a043dac6fca9c2206d6ce242261d9f98e6fce522f81be863d2f1501deea" Jan 31 08:28:58 crc kubenswrapper[4826]: E0131 08:28:58.819056 4826 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:28:59 crc kubenswrapper[4826]: I0131 08:28:59.872839 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-27nz8"] Jan 31 08:28:59 crc kubenswrapper[4826]: I0131 08:28:59.876143 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-27nz8" Jan 31 08:28:59 crc kubenswrapper[4826]: I0131 08:28:59.910034 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-27nz8"] Jan 31 08:29:00 crc kubenswrapper[4826]: I0131 08:29:00.055343 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p52mk\" (UniqueName: \"kubernetes.io/projected/e082f5fb-302c-424f-ab3e-e53d2e524f65-kube-api-access-p52mk\") pod \"certified-operators-27nz8\" (UID: \"e082f5fb-302c-424f-ab3e-e53d2e524f65\") " pod="openshift-marketplace/certified-operators-27nz8" Jan 31 08:29:00 crc kubenswrapper[4826]: I0131 08:29:00.055477 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e082f5fb-302c-424f-ab3e-e53d2e524f65-utilities\") pod \"certified-operators-27nz8\" (UID: \"e082f5fb-302c-424f-ab3e-e53d2e524f65\") " pod="openshift-marketplace/certified-operators-27nz8" Jan 31 08:29:00 crc kubenswrapper[4826]: I0131 08:29:00.055535 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e082f5fb-302c-424f-ab3e-e53d2e524f65-catalog-content\") pod \"certified-operators-27nz8\" (UID: \"e082f5fb-302c-424f-ab3e-e53d2e524f65\") " pod="openshift-marketplace/certified-operators-27nz8" Jan 31 08:29:00 crc kubenswrapper[4826]: I0131 08:29:00.157336 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e082f5fb-302c-424f-ab3e-e53d2e524f65-utilities\") pod \"certified-operators-27nz8\" (UID: \"e082f5fb-302c-424f-ab3e-e53d2e524f65\") " pod="openshift-marketplace/certified-operators-27nz8" Jan 31 08:29:00 crc kubenswrapper[4826]: I0131 08:29:00.157423 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e082f5fb-302c-424f-ab3e-e53d2e524f65-catalog-content\") pod \"certified-operators-27nz8\" (UID: \"e082f5fb-302c-424f-ab3e-e53d2e524f65\") " pod="openshift-marketplace/certified-operators-27nz8" Jan 31 08:29:00 crc kubenswrapper[4826]: I0131 08:29:00.157550 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p52mk\" (UniqueName: \"kubernetes.io/projected/e082f5fb-302c-424f-ab3e-e53d2e524f65-kube-api-access-p52mk\") pod \"certified-operators-27nz8\" (UID: \"e082f5fb-302c-424f-ab3e-e53d2e524f65\") " pod="openshift-marketplace/certified-operators-27nz8" Jan 31 08:29:00 crc kubenswrapper[4826]: I0131 08:29:00.157959 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/e082f5fb-302c-424f-ab3e-e53d2e524f65-utilities\") pod \"certified-operators-27nz8\" (UID: \"e082f5fb-302c-424f-ab3e-e53d2e524f65\") " pod="openshift-marketplace/certified-operators-27nz8" Jan 31 08:29:00 crc kubenswrapper[4826]: I0131 08:29:00.158334 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e082f5fb-302c-424f-ab3e-e53d2e524f65-catalog-content\") pod \"certified-operators-27nz8\" (UID: \"e082f5fb-302c-424f-ab3e-e53d2e524f65\") " pod="openshift-marketplace/certified-operators-27nz8" Jan 31 08:29:00 crc kubenswrapper[4826]: I0131 08:29:00.186063 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p52mk\" (UniqueName: \"kubernetes.io/projected/e082f5fb-302c-424f-ab3e-e53d2e524f65-kube-api-access-p52mk\") pod \"certified-operators-27nz8\" (UID: \"e082f5fb-302c-424f-ab3e-e53d2e524f65\") " pod="openshift-marketplace/certified-operators-27nz8" Jan 31 08:29:00 crc kubenswrapper[4826]: I0131 08:29:00.211916 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-27nz8" Jan 31 08:29:01 crc kubenswrapper[4826]: I0131 08:29:01.918147 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-27nz8"] Jan 31 08:29:02 crc kubenswrapper[4826]: I0131 08:29:02.572582 4826 generic.go:334] "Generic (PLEG): container finished" podID="e082f5fb-302c-424f-ab3e-e53d2e524f65" containerID="80e10b7bed40fd3162079d6bdd3707be661bed34354a992a93acd71ddff20944" exitCode=0 Jan 31 08:29:02 crc kubenswrapper[4826]: I0131 08:29:02.572771 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-27nz8" event={"ID":"e082f5fb-302c-424f-ab3e-e53d2e524f65","Type":"ContainerDied","Data":"80e10b7bed40fd3162079d6bdd3707be661bed34354a992a93acd71ddff20944"} Jan 31 08:29:02 crc kubenswrapper[4826]: I0131 08:29:02.572888 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-27nz8" event={"ID":"e082f5fb-302c-424f-ab3e-e53d2e524f65","Type":"ContainerStarted","Data":"0126b571663cb2ab3db8c9fce200a4885b7607eafd2d2de1611cae3f0bd72ec7"} Jan 31 08:29:05 crc kubenswrapper[4826]: I0131 08:29:05.603429 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-27nz8" event={"ID":"e082f5fb-302c-424f-ab3e-e53d2e524f65","Type":"ContainerStarted","Data":"5461e3cb06a0f5778dd87d3d69051f3bc6c6a13256ca92b39ea72fe8fd5c9080"} Jan 31 08:29:07 crc kubenswrapper[4826]: I0131 08:29:07.624087 4826 generic.go:334] "Generic (PLEG): container finished" podID="e082f5fb-302c-424f-ab3e-e53d2e524f65" containerID="5461e3cb06a0f5778dd87d3d69051f3bc6c6a13256ca92b39ea72fe8fd5c9080" exitCode=0 Jan 31 08:29:07 crc kubenswrapper[4826]: I0131 08:29:07.624148 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-27nz8" event={"ID":"e082f5fb-302c-424f-ab3e-e53d2e524f65","Type":"ContainerDied","Data":"5461e3cb06a0f5778dd87d3d69051f3bc6c6a13256ca92b39ea72fe8fd5c9080"} Jan 31 08:29:08 crc kubenswrapper[4826]: I0131 08:29:08.636604 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-27nz8" event={"ID":"e082f5fb-302c-424f-ab3e-e53d2e524f65","Type":"ContainerStarted","Data":"d5ea36f2c60cc4e8c04d47f8caac46183d38601b939e865bb6972d528aa05075"} Jan 31 08:29:08 crc kubenswrapper[4826]: I0131 
08:29:08.661680 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-27nz8" podStartSLOduration=3.872447957 podStartE2EDuration="9.661659576s" podCreationTimestamp="2026-01-31 08:28:59 +0000 UTC" firstStartedPulling="2026-01-31 08:29:02.575238602 +0000 UTC m=+3174.429124961" lastFinishedPulling="2026-01-31 08:29:08.364450221 +0000 UTC m=+3180.218336580" observedRunningTime="2026-01-31 08:29:08.652897295 +0000 UTC m=+3180.506783654" watchObservedRunningTime="2026-01-31 08:29:08.661659576 +0000 UTC m=+3180.515545935" Jan 31 08:29:10 crc kubenswrapper[4826]: I0131 08:29:10.212149 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-27nz8" Jan 31 08:29:10 crc kubenswrapper[4826]: I0131 08:29:10.212507 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-27nz8" Jan 31 08:29:10 crc kubenswrapper[4826]: I0131 08:29:10.264701 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-27nz8" Jan 31 08:29:10 crc kubenswrapper[4826]: I0131 08:29:10.809653 4826 scope.go:117] "RemoveContainer" containerID="97a91a043dac6fca9c2206d6ce242261d9f98e6fce522f81be863d2f1501deea" Jan 31 08:29:10 crc kubenswrapper[4826]: E0131 08:29:10.810365 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:29:20 crc kubenswrapper[4826]: I0131 08:29:20.284596 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-27nz8" Jan 31 08:29:20 crc kubenswrapper[4826]: I0131 08:29:20.341291 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-27nz8"] Jan 31 08:29:20 crc kubenswrapper[4826]: I0131 08:29:20.742901 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-27nz8" podUID="e082f5fb-302c-424f-ab3e-e53d2e524f65" containerName="registry-server" containerID="cri-o://d5ea36f2c60cc4e8c04d47f8caac46183d38601b939e865bb6972d528aa05075" gracePeriod=2 Jan 31 08:29:21 crc kubenswrapper[4826]: I0131 08:29:21.172658 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-27nz8" Jan 31 08:29:21 crc kubenswrapper[4826]: I0131 08:29:21.285933 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e082f5fb-302c-424f-ab3e-e53d2e524f65-catalog-content\") pod \"e082f5fb-302c-424f-ab3e-e53d2e524f65\" (UID: \"e082f5fb-302c-424f-ab3e-e53d2e524f65\") " Jan 31 08:29:21 crc kubenswrapper[4826]: I0131 08:29:21.286209 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e082f5fb-302c-424f-ab3e-e53d2e524f65-utilities\") pod \"e082f5fb-302c-424f-ab3e-e53d2e524f65\" (UID: \"e082f5fb-302c-424f-ab3e-e53d2e524f65\") " Jan 31 08:29:21 crc kubenswrapper[4826]: I0131 08:29:21.286476 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p52mk\" (UniqueName: \"kubernetes.io/projected/e082f5fb-302c-424f-ab3e-e53d2e524f65-kube-api-access-p52mk\") pod \"e082f5fb-302c-424f-ab3e-e53d2e524f65\" (UID: \"e082f5fb-302c-424f-ab3e-e53d2e524f65\") " Jan 31 08:29:21 crc kubenswrapper[4826]: I0131 08:29:21.287307 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e082f5fb-302c-424f-ab3e-e53d2e524f65-utilities" (OuterVolumeSpecName: "utilities") pod "e082f5fb-302c-424f-ab3e-e53d2e524f65" (UID: "e082f5fb-302c-424f-ab3e-e53d2e524f65"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:29:21 crc kubenswrapper[4826]: I0131 08:29:21.293039 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e082f5fb-302c-424f-ab3e-e53d2e524f65-kube-api-access-p52mk" (OuterVolumeSpecName: "kube-api-access-p52mk") pod "e082f5fb-302c-424f-ab3e-e53d2e524f65" (UID: "e082f5fb-302c-424f-ab3e-e53d2e524f65"). InnerVolumeSpecName "kube-api-access-p52mk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:29:21 crc kubenswrapper[4826]: I0131 08:29:21.341644 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e082f5fb-302c-424f-ab3e-e53d2e524f65-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e082f5fb-302c-424f-ab3e-e53d2e524f65" (UID: "e082f5fb-302c-424f-ab3e-e53d2e524f65"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:29:21 crc kubenswrapper[4826]: I0131 08:29:21.389515 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e082f5fb-302c-424f-ab3e-e53d2e524f65-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 08:29:21 crc kubenswrapper[4826]: I0131 08:29:21.389567 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e082f5fb-302c-424f-ab3e-e53d2e524f65-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 08:29:21 crc kubenswrapper[4826]: I0131 08:29:21.389577 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p52mk\" (UniqueName: \"kubernetes.io/projected/e082f5fb-302c-424f-ab3e-e53d2e524f65-kube-api-access-p52mk\") on node \"crc\" DevicePath \"\"" Jan 31 08:29:21 crc kubenswrapper[4826]: I0131 08:29:21.794671 4826 generic.go:334] "Generic (PLEG): container finished" podID="e082f5fb-302c-424f-ab3e-e53d2e524f65" containerID="d5ea36f2c60cc4e8c04d47f8caac46183d38601b939e865bb6972d528aa05075" exitCode=0 Jan 31 08:29:21 crc kubenswrapper[4826]: I0131 08:29:21.794737 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-27nz8" event={"ID":"e082f5fb-302c-424f-ab3e-e53d2e524f65","Type":"ContainerDied","Data":"d5ea36f2c60cc4e8c04d47f8caac46183d38601b939e865bb6972d528aa05075"} Jan 31 08:29:21 crc kubenswrapper[4826]: I0131 08:29:21.794792 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-27nz8" event={"ID":"e082f5fb-302c-424f-ab3e-e53d2e524f65","Type":"ContainerDied","Data":"0126b571663cb2ab3db8c9fce200a4885b7607eafd2d2de1611cae3f0bd72ec7"} Jan 31 08:29:21 crc kubenswrapper[4826]: I0131 08:29:21.794819 4826 scope.go:117] "RemoveContainer" containerID="d5ea36f2c60cc4e8c04d47f8caac46183d38601b939e865bb6972d528aa05075" Jan 31 08:29:21 crc kubenswrapper[4826]: I0131 08:29:21.794818 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-27nz8" Jan 31 08:29:21 crc kubenswrapper[4826]: I0131 08:29:21.823223 4826 scope.go:117] "RemoveContainer" containerID="5461e3cb06a0f5778dd87d3d69051f3bc6c6a13256ca92b39ea72fe8fd5c9080" Jan 31 08:29:21 crc kubenswrapper[4826]: I0131 08:29:21.849827 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-27nz8"] Jan 31 08:29:21 crc kubenswrapper[4826]: I0131 08:29:21.859528 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-27nz8"] Jan 31 08:29:21 crc kubenswrapper[4826]: I0131 08:29:21.874249 4826 scope.go:117] "RemoveContainer" containerID="80e10b7bed40fd3162079d6bdd3707be661bed34354a992a93acd71ddff20944" Jan 31 08:29:21 crc kubenswrapper[4826]: I0131 08:29:21.898043 4826 scope.go:117] "RemoveContainer" containerID="d5ea36f2c60cc4e8c04d47f8caac46183d38601b939e865bb6972d528aa05075" Jan 31 08:29:21 crc kubenswrapper[4826]: E0131 08:29:21.898440 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5ea36f2c60cc4e8c04d47f8caac46183d38601b939e865bb6972d528aa05075\": container with ID starting with d5ea36f2c60cc4e8c04d47f8caac46183d38601b939e865bb6972d528aa05075 not found: ID does not exist" containerID="d5ea36f2c60cc4e8c04d47f8caac46183d38601b939e865bb6972d528aa05075" Jan 31 08:29:21 crc kubenswrapper[4826]: I0131 08:29:21.899133 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5ea36f2c60cc4e8c04d47f8caac46183d38601b939e865bb6972d528aa05075"} err="failed to get container status \"d5ea36f2c60cc4e8c04d47f8caac46183d38601b939e865bb6972d528aa05075\": rpc error: code = NotFound desc = could not find container \"d5ea36f2c60cc4e8c04d47f8caac46183d38601b939e865bb6972d528aa05075\": container with ID starting with d5ea36f2c60cc4e8c04d47f8caac46183d38601b939e865bb6972d528aa05075 not found: ID does not exist" Jan 31 08:29:21 crc kubenswrapper[4826]: I0131 08:29:21.899162 4826 scope.go:117] "RemoveContainer" containerID="5461e3cb06a0f5778dd87d3d69051f3bc6c6a13256ca92b39ea72fe8fd5c9080" Jan 31 08:29:21 crc kubenswrapper[4826]: E0131 08:29:21.901221 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5461e3cb06a0f5778dd87d3d69051f3bc6c6a13256ca92b39ea72fe8fd5c9080\": container with ID starting with 5461e3cb06a0f5778dd87d3d69051f3bc6c6a13256ca92b39ea72fe8fd5c9080 not found: ID does not exist" containerID="5461e3cb06a0f5778dd87d3d69051f3bc6c6a13256ca92b39ea72fe8fd5c9080" Jan 31 08:29:21 crc kubenswrapper[4826]: I0131 08:29:21.901252 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5461e3cb06a0f5778dd87d3d69051f3bc6c6a13256ca92b39ea72fe8fd5c9080"} err="failed to get container status \"5461e3cb06a0f5778dd87d3d69051f3bc6c6a13256ca92b39ea72fe8fd5c9080\": rpc error: code = NotFound desc = could not find container \"5461e3cb06a0f5778dd87d3d69051f3bc6c6a13256ca92b39ea72fe8fd5c9080\": container with ID starting with 5461e3cb06a0f5778dd87d3d69051f3bc6c6a13256ca92b39ea72fe8fd5c9080 not found: ID does not exist" Jan 31 08:29:21 crc kubenswrapper[4826]: I0131 08:29:21.901271 4826 scope.go:117] "RemoveContainer" containerID="80e10b7bed40fd3162079d6bdd3707be661bed34354a992a93acd71ddff20944" Jan 31 08:29:21 crc kubenswrapper[4826]: E0131 08:29:21.901519 4826 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"80e10b7bed40fd3162079d6bdd3707be661bed34354a992a93acd71ddff20944\": container with ID starting with 80e10b7bed40fd3162079d6bdd3707be661bed34354a992a93acd71ddff20944 not found: ID does not exist" containerID="80e10b7bed40fd3162079d6bdd3707be661bed34354a992a93acd71ddff20944" Jan 31 08:29:21 crc kubenswrapper[4826]: I0131 08:29:21.901554 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80e10b7bed40fd3162079d6bdd3707be661bed34354a992a93acd71ddff20944"} err="failed to get container status \"80e10b7bed40fd3162079d6bdd3707be661bed34354a992a93acd71ddff20944\": rpc error: code = NotFound desc = could not find container \"80e10b7bed40fd3162079d6bdd3707be661bed34354a992a93acd71ddff20944\": container with ID starting with 80e10b7bed40fd3162079d6bdd3707be661bed34354a992a93acd71ddff20944 not found: ID does not exist" Jan 31 08:29:22 crc kubenswrapper[4826]: I0131 08:29:22.809642 4826 scope.go:117] "RemoveContainer" containerID="97a91a043dac6fca9c2206d6ce242261d9f98e6fce522f81be863d2f1501deea" Jan 31 08:29:22 crc kubenswrapper[4826]: E0131 08:29:22.810428 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:29:22 crc kubenswrapper[4826]: I0131 08:29:22.819100 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e082f5fb-302c-424f-ab3e-e53d2e524f65" path="/var/lib/kubelet/pods/e082f5fb-302c-424f-ab3e-e53d2e524f65/volumes" Jan 31 08:29:33 crc kubenswrapper[4826]: I0131 08:29:33.809999 4826 scope.go:117] "RemoveContainer" containerID="97a91a043dac6fca9c2206d6ce242261d9f98e6fce522f81be863d2f1501deea" Jan 31 08:29:33 crc kubenswrapper[4826]: E0131 08:29:33.810906 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:29:47 crc kubenswrapper[4826]: I0131 08:29:47.808945 4826 scope.go:117] "RemoveContainer" containerID="97a91a043dac6fca9c2206d6ce242261d9f98e6fce522f81be863d2f1501deea" Jan 31 08:29:47 crc kubenswrapper[4826]: E0131 08:29:47.809832 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:29:58 crc kubenswrapper[4826]: I0131 08:29:58.816846 4826 scope.go:117] "RemoveContainer" containerID="97a91a043dac6fca9c2206d6ce242261d9f98e6fce522f81be863d2f1501deea" Jan 31 08:29:58 crc kubenswrapper[4826]: E0131 08:29:58.817578 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:30:00 crc kubenswrapper[4826]: I0131 08:30:00.152867 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497470-p5b4l"] Jan 31 08:30:00 crc kubenswrapper[4826]: E0131 08:30:00.153807 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e082f5fb-302c-424f-ab3e-e53d2e524f65" containerName="extract-content" Jan 31 08:30:00 crc kubenswrapper[4826]: I0131 08:30:00.153826 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="e082f5fb-302c-424f-ab3e-e53d2e524f65" containerName="extract-content" Jan 31 08:30:00 crc kubenswrapper[4826]: E0131 08:30:00.153841 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e082f5fb-302c-424f-ab3e-e53d2e524f65" containerName="registry-server" Jan 31 08:30:00 crc kubenswrapper[4826]: I0131 08:30:00.153849 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="e082f5fb-302c-424f-ab3e-e53d2e524f65" containerName="registry-server" Jan 31 08:30:00 crc kubenswrapper[4826]: E0131 08:30:00.153859 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e082f5fb-302c-424f-ab3e-e53d2e524f65" containerName="extract-utilities" Jan 31 08:30:00 crc kubenswrapper[4826]: I0131 08:30:00.153866 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="e082f5fb-302c-424f-ab3e-e53d2e524f65" containerName="extract-utilities" Jan 31 08:30:00 crc kubenswrapper[4826]: I0131 08:30:00.154113 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="e082f5fb-302c-424f-ab3e-e53d2e524f65" containerName="registry-server" Jan 31 08:30:00 crc kubenswrapper[4826]: I0131 08:30:00.154981 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497470-p5b4l" Jan 31 08:30:00 crc kubenswrapper[4826]: I0131 08:30:00.158782 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 08:30:00 crc kubenswrapper[4826]: I0131 08:30:00.161880 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 08:30:00 crc kubenswrapper[4826]: I0131 08:30:00.165329 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497470-p5b4l"] Jan 31 08:30:00 crc kubenswrapper[4826]: I0131 08:30:00.207207 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9lzb\" (UniqueName: \"kubernetes.io/projected/5b0f29ff-33e2-480b-a752-db776258687c-kube-api-access-w9lzb\") pod \"collect-profiles-29497470-p5b4l\" (UID: \"5b0f29ff-33e2-480b-a752-db776258687c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497470-p5b4l" Jan 31 08:30:00 crc kubenswrapper[4826]: I0131 08:30:00.207285 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5b0f29ff-33e2-480b-a752-db776258687c-secret-volume\") pod \"collect-profiles-29497470-p5b4l\" (UID: \"5b0f29ff-33e2-480b-a752-db776258687c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497470-p5b4l" Jan 31 08:30:00 crc kubenswrapper[4826]: I0131 08:30:00.207588 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b0f29ff-33e2-480b-a752-db776258687c-config-volume\") pod \"collect-profiles-29497470-p5b4l\" (UID: \"5b0f29ff-33e2-480b-a752-db776258687c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497470-p5b4l" Jan 31 08:30:00 crc kubenswrapper[4826]: I0131 08:30:00.309217 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b0f29ff-33e2-480b-a752-db776258687c-config-volume\") pod \"collect-profiles-29497470-p5b4l\" (UID: \"5b0f29ff-33e2-480b-a752-db776258687c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497470-p5b4l" Jan 31 08:30:00 crc kubenswrapper[4826]: I0131 08:30:00.309407 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9lzb\" (UniqueName: \"kubernetes.io/projected/5b0f29ff-33e2-480b-a752-db776258687c-kube-api-access-w9lzb\") pod \"collect-profiles-29497470-p5b4l\" (UID: \"5b0f29ff-33e2-480b-a752-db776258687c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497470-p5b4l" Jan 31 08:30:00 crc kubenswrapper[4826]: I0131 08:30:00.309447 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5b0f29ff-33e2-480b-a752-db776258687c-secret-volume\") pod \"collect-profiles-29497470-p5b4l\" (UID: \"5b0f29ff-33e2-480b-a752-db776258687c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497470-p5b4l" Jan 31 08:30:00 crc kubenswrapper[4826]: I0131 08:30:00.310476 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b0f29ff-33e2-480b-a752-db776258687c-config-volume\") pod 
\"collect-profiles-29497470-p5b4l\" (UID: \"5b0f29ff-33e2-480b-a752-db776258687c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497470-p5b4l" Jan 31 08:30:00 crc kubenswrapper[4826]: I0131 08:30:00.316828 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5b0f29ff-33e2-480b-a752-db776258687c-secret-volume\") pod \"collect-profiles-29497470-p5b4l\" (UID: \"5b0f29ff-33e2-480b-a752-db776258687c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497470-p5b4l" Jan 31 08:30:00 crc kubenswrapper[4826]: I0131 08:30:00.330839 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9lzb\" (UniqueName: \"kubernetes.io/projected/5b0f29ff-33e2-480b-a752-db776258687c-kube-api-access-w9lzb\") pod \"collect-profiles-29497470-p5b4l\" (UID: \"5b0f29ff-33e2-480b-a752-db776258687c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497470-p5b4l" Jan 31 08:30:00 crc kubenswrapper[4826]: I0131 08:30:00.477279 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497470-p5b4l" Jan 31 08:30:00 crc kubenswrapper[4826]: I0131 08:30:00.907110 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497470-p5b4l"] Jan 31 08:30:01 crc kubenswrapper[4826]: I0131 08:30:01.139082 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497470-p5b4l" event={"ID":"5b0f29ff-33e2-480b-a752-db776258687c","Type":"ContainerStarted","Data":"1347e683cadf626d52fd3d5d926dab8336808221c4908d46ff9d16a443df941a"} Jan 31 08:30:01 crc kubenswrapper[4826]: I0131 08:30:01.139458 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497470-p5b4l" event={"ID":"5b0f29ff-33e2-480b-a752-db776258687c","Type":"ContainerStarted","Data":"c2dbd0f55a8936bdb371c26529893ef5c82421a51655ff6b062ac9384a61be97"} Jan 31 08:30:01 crc kubenswrapper[4826]: I0131 08:30:01.154092 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29497470-p5b4l" podStartSLOduration=1.154071251 podStartE2EDuration="1.154071251s" podCreationTimestamp="2026-01-31 08:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 08:30:01.152407103 +0000 UTC m=+3233.006293472" watchObservedRunningTime="2026-01-31 08:30:01.154071251 +0000 UTC m=+3233.007957610" Jan 31 08:30:02 crc kubenswrapper[4826]: I0131 08:30:02.150422 4826 generic.go:334] "Generic (PLEG): container finished" podID="5b0f29ff-33e2-480b-a752-db776258687c" containerID="1347e683cadf626d52fd3d5d926dab8336808221c4908d46ff9d16a443df941a" exitCode=0 Jan 31 08:30:02 crc kubenswrapper[4826]: I0131 08:30:02.150501 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497470-p5b4l" event={"ID":"5b0f29ff-33e2-480b-a752-db776258687c","Type":"ContainerDied","Data":"1347e683cadf626d52fd3d5d926dab8336808221c4908d46ff9d16a443df941a"} Jan 31 08:30:03 crc kubenswrapper[4826]: I0131 08:30:03.530198 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497470-p5b4l" Jan 31 08:30:03 crc kubenswrapper[4826]: I0131 08:30:03.572273 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b0f29ff-33e2-480b-a752-db776258687c-config-volume\") pod \"5b0f29ff-33e2-480b-a752-db776258687c\" (UID: \"5b0f29ff-33e2-480b-a752-db776258687c\") " Jan 31 08:30:03 crc kubenswrapper[4826]: I0131 08:30:03.572871 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5b0f29ff-33e2-480b-a752-db776258687c-secret-volume\") pod \"5b0f29ff-33e2-480b-a752-db776258687c\" (UID: \"5b0f29ff-33e2-480b-a752-db776258687c\") " Jan 31 08:30:03 crc kubenswrapper[4826]: I0131 08:30:03.573032 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9lzb\" (UniqueName: \"kubernetes.io/projected/5b0f29ff-33e2-480b-a752-db776258687c-kube-api-access-w9lzb\") pod \"5b0f29ff-33e2-480b-a752-db776258687c\" (UID: \"5b0f29ff-33e2-480b-a752-db776258687c\") " Jan 31 08:30:03 crc kubenswrapper[4826]: I0131 08:30:03.573307 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b0f29ff-33e2-480b-a752-db776258687c-config-volume" (OuterVolumeSpecName: "config-volume") pod "5b0f29ff-33e2-480b-a752-db776258687c" (UID: "5b0f29ff-33e2-480b-a752-db776258687c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 08:30:03 crc kubenswrapper[4826]: I0131 08:30:03.574051 4826 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b0f29ff-33e2-480b-a752-db776258687c-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 08:30:03 crc kubenswrapper[4826]: I0131 08:30:03.588800 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b0f29ff-33e2-480b-a752-db776258687c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5b0f29ff-33e2-480b-a752-db776258687c" (UID: "5b0f29ff-33e2-480b-a752-db776258687c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:30:03 crc kubenswrapper[4826]: I0131 08:30:03.589549 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b0f29ff-33e2-480b-a752-db776258687c-kube-api-access-w9lzb" (OuterVolumeSpecName: "kube-api-access-w9lzb") pod "5b0f29ff-33e2-480b-a752-db776258687c" (UID: "5b0f29ff-33e2-480b-a752-db776258687c"). InnerVolumeSpecName "kube-api-access-w9lzb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:30:03 crc kubenswrapper[4826]: I0131 08:30:03.675466 4826 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5b0f29ff-33e2-480b-a752-db776258687c-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 08:30:03 crc kubenswrapper[4826]: I0131 08:30:03.675510 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9lzb\" (UniqueName: \"kubernetes.io/projected/5b0f29ff-33e2-480b-a752-db776258687c-kube-api-access-w9lzb\") on node \"crc\" DevicePath \"\"" Jan 31 08:30:04 crc kubenswrapper[4826]: I0131 08:30:04.169844 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497470-p5b4l" event={"ID":"5b0f29ff-33e2-480b-a752-db776258687c","Type":"ContainerDied","Data":"c2dbd0f55a8936bdb371c26529893ef5c82421a51655ff6b062ac9384a61be97"} Jan 31 08:30:04 crc kubenswrapper[4826]: I0131 08:30:04.169881 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2dbd0f55a8936bdb371c26529893ef5c82421a51655ff6b062ac9384a61be97" Jan 31 08:30:04 crc kubenswrapper[4826]: I0131 08:30:04.169922 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497470-p5b4l" Jan 31 08:30:04 crc kubenswrapper[4826]: I0131 08:30:04.241919 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497425-x9m8b"] Jan 31 08:30:04 crc kubenswrapper[4826]: I0131 08:30:04.249822 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497425-x9m8b"] Jan 31 08:30:04 crc kubenswrapper[4826]: I0131 08:30:04.825802 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="410c48f0-0679-4a1f-8fec-9afb56cf0d60" path="/var/lib/kubelet/pods/410c48f0-0679-4a1f-8fec-9afb56cf0d60/volumes" Jan 31 08:30:09 crc kubenswrapper[4826]: I0131 08:30:09.810185 4826 scope.go:117] "RemoveContainer" containerID="97a91a043dac6fca9c2206d6ce242261d9f98e6fce522f81be863d2f1501deea" Jan 31 08:30:09 crc kubenswrapper[4826]: E0131 08:30:09.810772 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:30:23 crc kubenswrapper[4826]: I0131 08:30:23.810039 4826 scope.go:117] "RemoveContainer" containerID="97a91a043dac6fca9c2206d6ce242261d9f98e6fce522f81be863d2f1501deea" Jan 31 08:30:23 crc kubenswrapper[4826]: E0131 08:30:23.811366 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:30:33 crc kubenswrapper[4826]: I0131 08:30:33.406883 4826 scope.go:117] "RemoveContainer" containerID="40f6c6f0bb44f4ae05946548679c4fc96f047e642652cc1ee0932efca3c56891" Jan 31 
08:30:37 crc kubenswrapper[4826]: I0131 08:30:37.808794 4826 scope.go:117] "RemoveContainer" containerID="97a91a043dac6fca9c2206d6ce242261d9f98e6fce522f81be863d2f1501deea" Jan 31 08:30:37 crc kubenswrapper[4826]: E0131 08:30:37.809614 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:30:40 crc kubenswrapper[4826]: I0131 08:30:40.687503 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dk7lf"] Jan 31 08:30:40 crc kubenswrapper[4826]: E0131 08:30:40.688601 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b0f29ff-33e2-480b-a752-db776258687c" containerName="collect-profiles" Jan 31 08:30:40 crc kubenswrapper[4826]: I0131 08:30:40.688640 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b0f29ff-33e2-480b-a752-db776258687c" containerName="collect-profiles" Jan 31 08:30:40 crc kubenswrapper[4826]: I0131 08:30:40.688904 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b0f29ff-33e2-480b-a752-db776258687c" containerName="collect-profiles" Jan 31 08:30:40 crc kubenswrapper[4826]: I0131 08:30:40.691049 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dk7lf" Jan 31 08:30:40 crc kubenswrapper[4826]: I0131 08:30:40.708723 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dk7lf"] Jan 31 08:30:40 crc kubenswrapper[4826]: I0131 08:30:40.859164 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x4h8\" (UniqueName: \"kubernetes.io/projected/f809d52c-de2c-4fcb-bbbc-77109f2ced06-kube-api-access-4x4h8\") pod \"redhat-marketplace-dk7lf\" (UID: \"f809d52c-de2c-4fcb-bbbc-77109f2ced06\") " pod="openshift-marketplace/redhat-marketplace-dk7lf" Jan 31 08:30:40 crc kubenswrapper[4826]: I0131 08:30:40.859654 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f809d52c-de2c-4fcb-bbbc-77109f2ced06-utilities\") pod \"redhat-marketplace-dk7lf\" (UID: \"f809d52c-de2c-4fcb-bbbc-77109f2ced06\") " pod="openshift-marketplace/redhat-marketplace-dk7lf" Jan 31 08:30:40 crc kubenswrapper[4826]: I0131 08:30:40.859807 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f809d52c-de2c-4fcb-bbbc-77109f2ced06-catalog-content\") pod \"redhat-marketplace-dk7lf\" (UID: \"f809d52c-de2c-4fcb-bbbc-77109f2ced06\") " pod="openshift-marketplace/redhat-marketplace-dk7lf" Jan 31 08:30:40 crc kubenswrapper[4826]: I0131 08:30:40.961493 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f809d52c-de2c-4fcb-bbbc-77109f2ced06-utilities\") pod \"redhat-marketplace-dk7lf\" (UID: \"f809d52c-de2c-4fcb-bbbc-77109f2ced06\") " pod="openshift-marketplace/redhat-marketplace-dk7lf" Jan 31 08:30:40 crc kubenswrapper[4826]: I0131 08:30:40.961617 4826 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f809d52c-de2c-4fcb-bbbc-77109f2ced06-catalog-content\") pod \"redhat-marketplace-dk7lf\" (UID: \"f809d52c-de2c-4fcb-bbbc-77109f2ced06\") " pod="openshift-marketplace/redhat-marketplace-dk7lf" Jan 31 08:30:40 crc kubenswrapper[4826]: I0131 08:30:40.961702 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x4h8\" (UniqueName: \"kubernetes.io/projected/f809d52c-de2c-4fcb-bbbc-77109f2ced06-kube-api-access-4x4h8\") pod \"redhat-marketplace-dk7lf\" (UID: \"f809d52c-de2c-4fcb-bbbc-77109f2ced06\") " pod="openshift-marketplace/redhat-marketplace-dk7lf" Jan 31 08:30:40 crc kubenswrapper[4826]: I0131 08:30:40.962163 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f809d52c-de2c-4fcb-bbbc-77109f2ced06-utilities\") pod \"redhat-marketplace-dk7lf\" (UID: \"f809d52c-de2c-4fcb-bbbc-77109f2ced06\") " pod="openshift-marketplace/redhat-marketplace-dk7lf" Jan 31 08:30:40 crc kubenswrapper[4826]: I0131 08:30:40.963300 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f809d52c-de2c-4fcb-bbbc-77109f2ced06-catalog-content\") pod \"redhat-marketplace-dk7lf\" (UID: \"f809d52c-de2c-4fcb-bbbc-77109f2ced06\") " pod="openshift-marketplace/redhat-marketplace-dk7lf" Jan 31 08:30:40 crc kubenswrapper[4826]: I0131 08:30:40.993828 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4x4h8\" (UniqueName: \"kubernetes.io/projected/f809d52c-de2c-4fcb-bbbc-77109f2ced06-kube-api-access-4x4h8\") pod \"redhat-marketplace-dk7lf\" (UID: \"f809d52c-de2c-4fcb-bbbc-77109f2ced06\") " pod="openshift-marketplace/redhat-marketplace-dk7lf" Jan 31 08:30:41 crc kubenswrapper[4826]: I0131 08:30:41.028695 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dk7lf" Jan 31 08:30:41 crc kubenswrapper[4826]: I0131 08:30:41.513395 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dk7lf"] Jan 31 08:30:42 crc kubenswrapper[4826]: I0131 08:30:42.491892 4826 generic.go:334] "Generic (PLEG): container finished" podID="f809d52c-de2c-4fcb-bbbc-77109f2ced06" containerID="a574730b2da87dedea80c0adcbcf1a9ffd6a2a1d8858d9385b3598c6e385f503" exitCode=0 Jan 31 08:30:42 crc kubenswrapper[4826]: I0131 08:30:42.492011 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dk7lf" event={"ID":"f809d52c-de2c-4fcb-bbbc-77109f2ced06","Type":"ContainerDied","Data":"a574730b2da87dedea80c0adcbcf1a9ffd6a2a1d8858d9385b3598c6e385f503"} Jan 31 08:30:42 crc kubenswrapper[4826]: I0131 08:30:42.492226 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dk7lf" event={"ID":"f809d52c-de2c-4fcb-bbbc-77109f2ced06","Type":"ContainerStarted","Data":"c2799ffd7e4326476851d1c697b4fbd47816e30faa35bc82fbeab83aa01868ea"} Jan 31 08:30:42 crc kubenswrapper[4826]: I0131 08:30:42.494795 4826 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 08:30:43 crc kubenswrapper[4826]: I0131 08:30:43.503810 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dk7lf" event={"ID":"f809d52c-de2c-4fcb-bbbc-77109f2ced06","Type":"ContainerStarted","Data":"8cfddd1559cd2031b6aee7e7adf29275044d8a520e54c59618adde765e34fed3"} Jan 31 08:30:44 crc kubenswrapper[4826]: I0131 08:30:44.513108 4826 generic.go:334] "Generic (PLEG): container finished" podID="f809d52c-de2c-4fcb-bbbc-77109f2ced06" containerID="8cfddd1559cd2031b6aee7e7adf29275044d8a520e54c59618adde765e34fed3" exitCode=0 Jan 31 08:30:44 crc kubenswrapper[4826]: I0131 08:30:44.513223 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dk7lf" event={"ID":"f809d52c-de2c-4fcb-bbbc-77109f2ced06","Type":"ContainerDied","Data":"8cfddd1559cd2031b6aee7e7adf29275044d8a520e54c59618adde765e34fed3"} Jan 31 08:30:45 crc kubenswrapper[4826]: I0131 08:30:45.530435 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dk7lf" event={"ID":"f809d52c-de2c-4fcb-bbbc-77109f2ced06","Type":"ContainerStarted","Data":"550c53423ab625d43447dc93e4f3d58ad12d908cdfc683a6e049513bf8103a30"} Jan 31 08:30:45 crc kubenswrapper[4826]: I0131 08:30:45.556219 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dk7lf" podStartSLOduration=2.903603031 podStartE2EDuration="5.555957481s" podCreationTimestamp="2026-01-31 08:30:40 +0000 UTC" firstStartedPulling="2026-01-31 08:30:42.494539807 +0000 UTC m=+3274.348426166" lastFinishedPulling="2026-01-31 08:30:45.146894257 +0000 UTC m=+3277.000780616" observedRunningTime="2026-01-31 08:30:45.550307919 +0000 UTC m=+3277.404194338" watchObservedRunningTime="2026-01-31 08:30:45.555957481 +0000 UTC m=+3277.409843920" Jan 31 08:30:49 crc kubenswrapper[4826]: I0131 08:30:49.811666 4826 scope.go:117] "RemoveContainer" containerID="97a91a043dac6fca9c2206d6ce242261d9f98e6fce522f81be863d2f1501deea" Jan 31 08:30:49 crc kubenswrapper[4826]: E0131 08:30:49.813139 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:30:51 crc kubenswrapper[4826]: I0131 08:30:51.028998 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dk7lf" Jan 31 08:30:51 crc kubenswrapper[4826]: I0131 08:30:51.029331 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dk7lf" Jan 31 08:30:51 crc kubenswrapper[4826]: I0131 08:30:51.085081 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dk7lf" Jan 31 08:30:51 crc kubenswrapper[4826]: I0131 08:30:51.638262 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dk7lf" Jan 31 08:30:51 crc kubenswrapper[4826]: I0131 08:30:51.715302 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dk7lf"] Jan 31 08:30:53 crc kubenswrapper[4826]: I0131 08:30:53.608735 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dk7lf" podUID="f809d52c-de2c-4fcb-bbbc-77109f2ced06" containerName="registry-server" containerID="cri-o://550c53423ab625d43447dc93e4f3d58ad12d908cdfc683a6e049513bf8103a30" gracePeriod=2 Jan 31 08:30:54 crc kubenswrapper[4826]: I0131 08:30:54.065280 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dk7lf" Jan 31 08:30:54 crc kubenswrapper[4826]: I0131 08:30:54.245418 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f809d52c-de2c-4fcb-bbbc-77109f2ced06-utilities\") pod \"f809d52c-de2c-4fcb-bbbc-77109f2ced06\" (UID: \"f809d52c-de2c-4fcb-bbbc-77109f2ced06\") " Jan 31 08:30:54 crc kubenswrapper[4826]: I0131 08:30:54.245660 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f809d52c-de2c-4fcb-bbbc-77109f2ced06-catalog-content\") pod \"f809d52c-de2c-4fcb-bbbc-77109f2ced06\" (UID: \"f809d52c-de2c-4fcb-bbbc-77109f2ced06\") " Jan 31 08:30:54 crc kubenswrapper[4826]: I0131 08:30:54.245724 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4x4h8\" (UniqueName: \"kubernetes.io/projected/f809d52c-de2c-4fcb-bbbc-77109f2ced06-kube-api-access-4x4h8\") pod \"f809d52c-de2c-4fcb-bbbc-77109f2ced06\" (UID: \"f809d52c-de2c-4fcb-bbbc-77109f2ced06\") " Jan 31 08:30:54 crc kubenswrapper[4826]: I0131 08:30:54.246643 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f809d52c-de2c-4fcb-bbbc-77109f2ced06-utilities" (OuterVolumeSpecName: "utilities") pod "f809d52c-de2c-4fcb-bbbc-77109f2ced06" (UID: "f809d52c-de2c-4fcb-bbbc-77109f2ced06"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:30:54 crc kubenswrapper[4826]: I0131 08:30:54.253567 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f809d52c-de2c-4fcb-bbbc-77109f2ced06-kube-api-access-4x4h8" (OuterVolumeSpecName: "kube-api-access-4x4h8") pod "f809d52c-de2c-4fcb-bbbc-77109f2ced06" (UID: "f809d52c-de2c-4fcb-bbbc-77109f2ced06"). InnerVolumeSpecName "kube-api-access-4x4h8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:30:54 crc kubenswrapper[4826]: I0131 08:30:54.269603 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f809d52c-de2c-4fcb-bbbc-77109f2ced06-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f809d52c-de2c-4fcb-bbbc-77109f2ced06" (UID: "f809d52c-de2c-4fcb-bbbc-77109f2ced06"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:30:54 crc kubenswrapper[4826]: I0131 08:30:54.348650 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f809d52c-de2c-4fcb-bbbc-77109f2ced06-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 08:30:54 crc kubenswrapper[4826]: I0131 08:30:54.348688 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4x4h8\" (UniqueName: \"kubernetes.io/projected/f809d52c-de2c-4fcb-bbbc-77109f2ced06-kube-api-access-4x4h8\") on node \"crc\" DevicePath \"\"" Jan 31 08:30:54 crc kubenswrapper[4826]: I0131 08:30:54.348702 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f809d52c-de2c-4fcb-bbbc-77109f2ced06-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 08:30:54 crc kubenswrapper[4826]: I0131 08:30:54.621231 4826 generic.go:334] "Generic (PLEG): container finished" podID="f809d52c-de2c-4fcb-bbbc-77109f2ced06" containerID="550c53423ab625d43447dc93e4f3d58ad12d908cdfc683a6e049513bf8103a30" exitCode=0 Jan 31 08:30:54 crc kubenswrapper[4826]: I0131 08:30:54.621282 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dk7lf" event={"ID":"f809d52c-de2c-4fcb-bbbc-77109f2ced06","Type":"ContainerDied","Data":"550c53423ab625d43447dc93e4f3d58ad12d908cdfc683a6e049513bf8103a30"} Jan 31 08:30:54 crc kubenswrapper[4826]: I0131 08:30:54.621343 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dk7lf" event={"ID":"f809d52c-de2c-4fcb-bbbc-77109f2ced06","Type":"ContainerDied","Data":"c2799ffd7e4326476851d1c697b4fbd47816e30faa35bc82fbeab83aa01868ea"} Jan 31 08:30:54 crc kubenswrapper[4826]: I0131 08:30:54.621376 4826 scope.go:117] "RemoveContainer" containerID="550c53423ab625d43447dc93e4f3d58ad12d908cdfc683a6e049513bf8103a30" Jan 31 08:30:54 crc kubenswrapper[4826]: I0131 08:30:54.621579 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dk7lf" Jan 31 08:30:54 crc kubenswrapper[4826]: I0131 08:30:54.649260 4826 scope.go:117] "RemoveContainer" containerID="8cfddd1559cd2031b6aee7e7adf29275044d8a520e54c59618adde765e34fed3" Jan 31 08:30:54 crc kubenswrapper[4826]: I0131 08:30:54.670459 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dk7lf"] Jan 31 08:30:54 crc kubenswrapper[4826]: I0131 08:30:54.681413 4826 scope.go:117] "RemoveContainer" containerID="a574730b2da87dedea80c0adcbcf1a9ffd6a2a1d8858d9385b3598c6e385f503" Jan 31 08:30:54 crc kubenswrapper[4826]: I0131 08:30:54.688026 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dk7lf"] Jan 31 08:30:54 crc kubenswrapper[4826]: I0131 08:30:54.739953 4826 scope.go:117] "RemoveContainer" containerID="550c53423ab625d43447dc93e4f3d58ad12d908cdfc683a6e049513bf8103a30" Jan 31 08:30:54 crc kubenswrapper[4826]: E0131 08:30:54.740451 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"550c53423ab625d43447dc93e4f3d58ad12d908cdfc683a6e049513bf8103a30\": container with ID starting with 550c53423ab625d43447dc93e4f3d58ad12d908cdfc683a6e049513bf8103a30 not found: ID does not exist" containerID="550c53423ab625d43447dc93e4f3d58ad12d908cdfc683a6e049513bf8103a30" Jan 31 08:30:54 crc kubenswrapper[4826]: I0131 08:30:54.740482 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"550c53423ab625d43447dc93e4f3d58ad12d908cdfc683a6e049513bf8103a30"} err="failed to get container status \"550c53423ab625d43447dc93e4f3d58ad12d908cdfc683a6e049513bf8103a30\": rpc error: code = NotFound desc = could not find container \"550c53423ab625d43447dc93e4f3d58ad12d908cdfc683a6e049513bf8103a30\": container with ID starting with 550c53423ab625d43447dc93e4f3d58ad12d908cdfc683a6e049513bf8103a30 not found: ID does not exist" Jan 31 08:30:54 crc kubenswrapper[4826]: I0131 08:30:54.740509 4826 scope.go:117] "RemoveContainer" containerID="8cfddd1559cd2031b6aee7e7adf29275044d8a520e54c59618adde765e34fed3" Jan 31 08:30:54 crc kubenswrapper[4826]: E0131 08:30:54.740864 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cfddd1559cd2031b6aee7e7adf29275044d8a520e54c59618adde765e34fed3\": container with ID starting with 8cfddd1559cd2031b6aee7e7adf29275044d8a520e54c59618adde765e34fed3 not found: ID does not exist" containerID="8cfddd1559cd2031b6aee7e7adf29275044d8a520e54c59618adde765e34fed3" Jan 31 08:30:54 crc kubenswrapper[4826]: I0131 08:30:54.740893 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cfddd1559cd2031b6aee7e7adf29275044d8a520e54c59618adde765e34fed3"} err="failed to get container status \"8cfddd1559cd2031b6aee7e7adf29275044d8a520e54c59618adde765e34fed3\": rpc error: code = NotFound desc = could not find container \"8cfddd1559cd2031b6aee7e7adf29275044d8a520e54c59618adde765e34fed3\": container with ID starting with 8cfddd1559cd2031b6aee7e7adf29275044d8a520e54c59618adde765e34fed3 not found: ID does not exist" Jan 31 08:30:54 crc kubenswrapper[4826]: I0131 08:30:54.740912 4826 scope.go:117] "RemoveContainer" containerID="a574730b2da87dedea80c0adcbcf1a9ffd6a2a1d8858d9385b3598c6e385f503" Jan 31 08:30:54 crc kubenswrapper[4826]: E0131 08:30:54.741223 4826 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"a574730b2da87dedea80c0adcbcf1a9ffd6a2a1d8858d9385b3598c6e385f503\": container with ID starting with a574730b2da87dedea80c0adcbcf1a9ffd6a2a1d8858d9385b3598c6e385f503 not found: ID does not exist" containerID="a574730b2da87dedea80c0adcbcf1a9ffd6a2a1d8858d9385b3598c6e385f503" Jan 31 08:30:54 crc kubenswrapper[4826]: I0131 08:30:54.741251 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a574730b2da87dedea80c0adcbcf1a9ffd6a2a1d8858d9385b3598c6e385f503"} err="failed to get container status \"a574730b2da87dedea80c0adcbcf1a9ffd6a2a1d8858d9385b3598c6e385f503\": rpc error: code = NotFound desc = could not find container \"a574730b2da87dedea80c0adcbcf1a9ffd6a2a1d8858d9385b3598c6e385f503\": container with ID starting with a574730b2da87dedea80c0adcbcf1a9ffd6a2a1d8858d9385b3598c6e385f503 not found: ID does not exist" Jan 31 08:30:54 crc kubenswrapper[4826]: I0131 08:30:54.822483 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f809d52c-de2c-4fcb-bbbc-77109f2ced06" path="/var/lib/kubelet/pods/f809d52c-de2c-4fcb-bbbc-77109f2ced06/volumes" Jan 31 08:31:01 crc kubenswrapper[4826]: I0131 08:31:01.810181 4826 scope.go:117] "RemoveContainer" containerID="97a91a043dac6fca9c2206d6ce242261d9f98e6fce522f81be863d2f1501deea" Jan 31 08:31:01 crc kubenswrapper[4826]: E0131 08:31:01.811355 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:31:01 crc kubenswrapper[4826]: I0131 08:31:01.891318 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gltwm"] Jan 31 08:31:01 crc kubenswrapper[4826]: E0131 08:31:01.891722 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f809d52c-de2c-4fcb-bbbc-77109f2ced06" containerName="registry-server" Jan 31 08:31:01 crc kubenswrapper[4826]: I0131 08:31:01.891735 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f809d52c-de2c-4fcb-bbbc-77109f2ced06" containerName="registry-server" Jan 31 08:31:01 crc kubenswrapper[4826]: E0131 08:31:01.891746 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f809d52c-de2c-4fcb-bbbc-77109f2ced06" containerName="extract-utilities" Jan 31 08:31:01 crc kubenswrapper[4826]: I0131 08:31:01.891753 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f809d52c-de2c-4fcb-bbbc-77109f2ced06" containerName="extract-utilities" Jan 31 08:31:01 crc kubenswrapper[4826]: E0131 08:31:01.891769 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f809d52c-de2c-4fcb-bbbc-77109f2ced06" containerName="extract-content" Jan 31 08:31:01 crc kubenswrapper[4826]: I0131 08:31:01.891776 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f809d52c-de2c-4fcb-bbbc-77109f2ced06" containerName="extract-content" Jan 31 08:31:01 crc kubenswrapper[4826]: I0131 08:31:01.891979 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="f809d52c-de2c-4fcb-bbbc-77109f2ced06" containerName="registry-server" Jan 31 08:31:01 crc kubenswrapper[4826]: I0131 08:31:01.894389 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gltwm" Jan 31 08:31:01 crc kubenswrapper[4826]: I0131 08:31:01.908780 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg22w\" (UniqueName: \"kubernetes.io/projected/49e97951-9430-48c5-afbd-6a9d32b4d53f-kube-api-access-sg22w\") pod \"redhat-operators-gltwm\" (UID: \"49e97951-9430-48c5-afbd-6a9d32b4d53f\") " pod="openshift-marketplace/redhat-operators-gltwm" Jan 31 08:31:01 crc kubenswrapper[4826]: I0131 08:31:01.908893 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49e97951-9430-48c5-afbd-6a9d32b4d53f-utilities\") pod \"redhat-operators-gltwm\" (UID: \"49e97951-9430-48c5-afbd-6a9d32b4d53f\") " pod="openshift-marketplace/redhat-operators-gltwm" Jan 31 08:31:01 crc kubenswrapper[4826]: I0131 08:31:01.908917 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49e97951-9430-48c5-afbd-6a9d32b4d53f-catalog-content\") pod \"redhat-operators-gltwm\" (UID: \"49e97951-9430-48c5-afbd-6a9d32b4d53f\") " pod="openshift-marketplace/redhat-operators-gltwm" Jan 31 08:31:01 crc kubenswrapper[4826]: I0131 08:31:01.912552 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gltwm"] Jan 31 08:31:02 crc kubenswrapper[4826]: I0131 08:31:02.011863 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg22w\" (UniqueName: \"kubernetes.io/projected/49e97951-9430-48c5-afbd-6a9d32b4d53f-kube-api-access-sg22w\") pod \"redhat-operators-gltwm\" (UID: \"49e97951-9430-48c5-afbd-6a9d32b4d53f\") " pod="openshift-marketplace/redhat-operators-gltwm" Jan 31 08:31:02 crc kubenswrapper[4826]: I0131 08:31:02.011978 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49e97951-9430-48c5-afbd-6a9d32b4d53f-utilities\") pod \"redhat-operators-gltwm\" (UID: \"49e97951-9430-48c5-afbd-6a9d32b4d53f\") " pod="openshift-marketplace/redhat-operators-gltwm" Jan 31 08:31:02 crc kubenswrapper[4826]: I0131 08:31:02.012009 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49e97951-9430-48c5-afbd-6a9d32b4d53f-catalog-content\") pod \"redhat-operators-gltwm\" (UID: \"49e97951-9430-48c5-afbd-6a9d32b4d53f\") " pod="openshift-marketplace/redhat-operators-gltwm" Jan 31 08:31:02 crc kubenswrapper[4826]: I0131 08:31:02.012686 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49e97951-9430-48c5-afbd-6a9d32b4d53f-utilities\") pod \"redhat-operators-gltwm\" (UID: \"49e97951-9430-48c5-afbd-6a9d32b4d53f\") " pod="openshift-marketplace/redhat-operators-gltwm" Jan 31 08:31:02 crc kubenswrapper[4826]: I0131 08:31:02.012758 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49e97951-9430-48c5-afbd-6a9d32b4d53f-catalog-content\") pod \"redhat-operators-gltwm\" (UID: \"49e97951-9430-48c5-afbd-6a9d32b4d53f\") " pod="openshift-marketplace/redhat-operators-gltwm" Jan 31 08:31:02 crc kubenswrapper[4826]: I0131 08:31:02.041485 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-sg22w\" (UniqueName: \"kubernetes.io/projected/49e97951-9430-48c5-afbd-6a9d32b4d53f-kube-api-access-sg22w\") pod \"redhat-operators-gltwm\" (UID: \"49e97951-9430-48c5-afbd-6a9d32b4d53f\") " pod="openshift-marketplace/redhat-operators-gltwm" Jan 31 08:31:02 crc kubenswrapper[4826]: I0131 08:31:02.225060 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gltwm" Jan 31 08:31:02 crc kubenswrapper[4826]: I0131 08:31:02.731018 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gltwm"] Jan 31 08:31:02 crc kubenswrapper[4826]: W0131 08:31:02.743720 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49e97951_9430_48c5_afbd_6a9d32b4d53f.slice/crio-c9e50ab7f2a4471b934a260ac8bc882d1f64995f6f88ecf58e60b6abf9f23ac7 WatchSource:0}: Error finding container c9e50ab7f2a4471b934a260ac8bc882d1f64995f6f88ecf58e60b6abf9f23ac7: Status 404 returned error can't find the container with id c9e50ab7f2a4471b934a260ac8bc882d1f64995f6f88ecf58e60b6abf9f23ac7 Jan 31 08:31:03 crc kubenswrapper[4826]: I0131 08:31:03.741473 4826 generic.go:334] "Generic (PLEG): container finished" podID="49e97951-9430-48c5-afbd-6a9d32b4d53f" containerID="de554531d14c6c392dea3b8147b57cfb23fdf7aa99f522cea881e3fc00597307" exitCode=0 Jan 31 08:31:03 crc kubenswrapper[4826]: I0131 08:31:03.741574 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gltwm" event={"ID":"49e97951-9430-48c5-afbd-6a9d32b4d53f","Type":"ContainerDied","Data":"de554531d14c6c392dea3b8147b57cfb23fdf7aa99f522cea881e3fc00597307"} Jan 31 08:31:03 crc kubenswrapper[4826]: I0131 08:31:03.741731 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gltwm" event={"ID":"49e97951-9430-48c5-afbd-6a9d32b4d53f","Type":"ContainerStarted","Data":"c9e50ab7f2a4471b934a260ac8bc882d1f64995f6f88ecf58e60b6abf9f23ac7"} Jan 31 08:31:04 crc kubenswrapper[4826]: I0131 08:31:04.752398 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gltwm" event={"ID":"49e97951-9430-48c5-afbd-6a9d32b4d53f","Type":"ContainerStarted","Data":"b8874107be330762d7920fa462f8cf83c620f77ded04a477251daeb5ca19c259"} Jan 31 08:31:05 crc kubenswrapper[4826]: I0131 08:31:05.761367 4826 generic.go:334] "Generic (PLEG): container finished" podID="49e97951-9430-48c5-afbd-6a9d32b4d53f" containerID="b8874107be330762d7920fa462f8cf83c620f77ded04a477251daeb5ca19c259" exitCode=0 Jan 31 08:31:05 crc kubenswrapper[4826]: I0131 08:31:05.761738 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gltwm" event={"ID":"49e97951-9430-48c5-afbd-6a9d32b4d53f","Type":"ContainerDied","Data":"b8874107be330762d7920fa462f8cf83c620f77ded04a477251daeb5ca19c259"} Jan 31 08:31:06 crc kubenswrapper[4826]: I0131 08:31:06.774523 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gltwm" event={"ID":"49e97951-9430-48c5-afbd-6a9d32b4d53f","Type":"ContainerStarted","Data":"11f7cd5d6cfcf8c8685b0da79f669862557ab96650115f3ecf48d29a19fe1826"} Jan 31 08:31:06 crc kubenswrapper[4826]: I0131 08:31:06.805043 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gltwm" podStartSLOduration=3.348126241 podStartE2EDuration="5.805013794s" 
podCreationTimestamp="2026-01-31 08:31:01 +0000 UTC" firstStartedPulling="2026-01-31 08:31:03.743482377 +0000 UTC m=+3295.597368736" lastFinishedPulling="2026-01-31 08:31:06.20036991 +0000 UTC m=+3298.054256289" observedRunningTime="2026-01-31 08:31:06.798186348 +0000 UTC m=+3298.652072707" watchObservedRunningTime="2026-01-31 08:31:06.805013794 +0000 UTC m=+3298.658900163" Jan 31 08:31:12 crc kubenswrapper[4826]: I0131 08:31:12.226032 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gltwm" Jan 31 08:31:12 crc kubenswrapper[4826]: I0131 08:31:12.226602 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gltwm" Jan 31 08:31:13 crc kubenswrapper[4826]: I0131 08:31:13.271204 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gltwm" podUID="49e97951-9430-48c5-afbd-6a9d32b4d53f" containerName="registry-server" probeResult="failure" output=< Jan 31 08:31:13 crc kubenswrapper[4826]: timeout: failed to connect service ":50051" within 1s Jan 31 08:31:13 crc kubenswrapper[4826]: > Jan 31 08:31:15 crc kubenswrapper[4826]: I0131 08:31:15.809428 4826 scope.go:117] "RemoveContainer" containerID="97a91a043dac6fca9c2206d6ce242261d9f98e6fce522f81be863d2f1501deea" Jan 31 08:31:15 crc kubenswrapper[4826]: E0131 08:31:15.810083 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:31:22 crc kubenswrapper[4826]: I0131 08:31:22.281016 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gltwm" Jan 31 08:31:22 crc kubenswrapper[4826]: I0131 08:31:22.330779 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gltwm" Jan 31 08:31:22 crc kubenswrapper[4826]: I0131 08:31:22.523940 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gltwm"] Jan 31 08:31:23 crc kubenswrapper[4826]: I0131 08:31:23.931099 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gltwm" podUID="49e97951-9430-48c5-afbd-6a9d32b4d53f" containerName="registry-server" containerID="cri-o://11f7cd5d6cfcf8c8685b0da79f669862557ab96650115f3ecf48d29a19fe1826" gracePeriod=2 Jan 31 08:31:24 crc kubenswrapper[4826]: I0131 08:31:24.587880 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gltwm" Jan 31 08:31:24 crc kubenswrapper[4826]: I0131 08:31:24.718963 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49e97951-9430-48c5-afbd-6a9d32b4d53f-utilities\") pod \"49e97951-9430-48c5-afbd-6a9d32b4d53f\" (UID: \"49e97951-9430-48c5-afbd-6a9d32b4d53f\") " Jan 31 08:31:24 crc kubenswrapper[4826]: I0131 08:31:24.719330 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49e97951-9430-48c5-afbd-6a9d32b4d53f-catalog-content\") pod \"49e97951-9430-48c5-afbd-6a9d32b4d53f\" (UID: \"49e97951-9430-48c5-afbd-6a9d32b4d53f\") " Jan 31 08:31:24 crc kubenswrapper[4826]: I0131 08:31:24.719616 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sg22w\" (UniqueName: \"kubernetes.io/projected/49e97951-9430-48c5-afbd-6a9d32b4d53f-kube-api-access-sg22w\") pod \"49e97951-9430-48c5-afbd-6a9d32b4d53f\" (UID: \"49e97951-9430-48c5-afbd-6a9d32b4d53f\") " Jan 31 08:31:24 crc kubenswrapper[4826]: I0131 08:31:24.720756 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49e97951-9430-48c5-afbd-6a9d32b4d53f-utilities" (OuterVolumeSpecName: "utilities") pod "49e97951-9430-48c5-afbd-6a9d32b4d53f" (UID: "49e97951-9430-48c5-afbd-6a9d32b4d53f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:31:24 crc kubenswrapper[4826]: I0131 08:31:24.725773 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49e97951-9430-48c5-afbd-6a9d32b4d53f-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 08:31:24 crc kubenswrapper[4826]: I0131 08:31:24.727712 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49e97951-9430-48c5-afbd-6a9d32b4d53f-kube-api-access-sg22w" (OuterVolumeSpecName: "kube-api-access-sg22w") pod "49e97951-9430-48c5-afbd-6a9d32b4d53f" (UID: "49e97951-9430-48c5-afbd-6a9d32b4d53f"). InnerVolumeSpecName "kube-api-access-sg22w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:31:24 crc kubenswrapper[4826]: I0131 08:31:24.830109 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sg22w\" (UniqueName: \"kubernetes.io/projected/49e97951-9430-48c5-afbd-6a9d32b4d53f-kube-api-access-sg22w\") on node \"crc\" DevicePath \"\"" Jan 31 08:31:24 crc kubenswrapper[4826]: I0131 08:31:24.843363 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49e97951-9430-48c5-afbd-6a9d32b4d53f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "49e97951-9430-48c5-afbd-6a9d32b4d53f" (UID: "49e97951-9430-48c5-afbd-6a9d32b4d53f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:31:24 crc kubenswrapper[4826]: I0131 08:31:24.933826 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49e97951-9430-48c5-afbd-6a9d32b4d53f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 08:31:24 crc kubenswrapper[4826]: I0131 08:31:24.948798 4826 generic.go:334] "Generic (PLEG): container finished" podID="49e97951-9430-48c5-afbd-6a9d32b4d53f" containerID="11f7cd5d6cfcf8c8685b0da79f669862557ab96650115f3ecf48d29a19fe1826" exitCode=0 Jan 31 08:31:24 crc kubenswrapper[4826]: I0131 08:31:24.948879 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gltwm" event={"ID":"49e97951-9430-48c5-afbd-6a9d32b4d53f","Type":"ContainerDied","Data":"11f7cd5d6cfcf8c8685b0da79f669862557ab96650115f3ecf48d29a19fe1826"} Jan 31 08:31:24 crc kubenswrapper[4826]: I0131 08:31:24.948930 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gltwm" event={"ID":"49e97951-9430-48c5-afbd-6a9d32b4d53f","Type":"ContainerDied","Data":"c9e50ab7f2a4471b934a260ac8bc882d1f64995f6f88ecf58e60b6abf9f23ac7"} Jan 31 08:31:24 crc kubenswrapper[4826]: I0131 08:31:24.948955 4826 scope.go:117] "RemoveContainer" containerID="11f7cd5d6cfcf8c8685b0da79f669862557ab96650115f3ecf48d29a19fe1826" Jan 31 08:31:24 crc kubenswrapper[4826]: I0131 08:31:24.949253 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gltwm" Jan 31 08:31:24 crc kubenswrapper[4826]: I0131 08:31:24.991063 4826 scope.go:117] "RemoveContainer" containerID="b8874107be330762d7920fa462f8cf83c620f77ded04a477251daeb5ca19c259" Jan 31 08:31:25 crc kubenswrapper[4826]: I0131 08:31:25.010986 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gltwm"] Jan 31 08:31:25 crc kubenswrapper[4826]: I0131 08:31:25.024279 4826 scope.go:117] "RemoveContainer" containerID="de554531d14c6c392dea3b8147b57cfb23fdf7aa99f522cea881e3fc00597307" Jan 31 08:31:25 crc kubenswrapper[4826]: I0131 08:31:25.030453 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gltwm"] Jan 31 08:31:25 crc kubenswrapper[4826]: I0131 08:31:25.077626 4826 scope.go:117] "RemoveContainer" containerID="11f7cd5d6cfcf8c8685b0da79f669862557ab96650115f3ecf48d29a19fe1826" Jan 31 08:31:25 crc kubenswrapper[4826]: E0131 08:31:25.078084 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11f7cd5d6cfcf8c8685b0da79f669862557ab96650115f3ecf48d29a19fe1826\": container with ID starting with 11f7cd5d6cfcf8c8685b0da79f669862557ab96650115f3ecf48d29a19fe1826 not found: ID does not exist" containerID="11f7cd5d6cfcf8c8685b0da79f669862557ab96650115f3ecf48d29a19fe1826" Jan 31 08:31:25 crc kubenswrapper[4826]: I0131 08:31:25.078128 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11f7cd5d6cfcf8c8685b0da79f669862557ab96650115f3ecf48d29a19fe1826"} err="failed to get container status \"11f7cd5d6cfcf8c8685b0da79f669862557ab96650115f3ecf48d29a19fe1826\": rpc error: code = NotFound desc = could not find container \"11f7cd5d6cfcf8c8685b0da79f669862557ab96650115f3ecf48d29a19fe1826\": container with ID starting with 11f7cd5d6cfcf8c8685b0da79f669862557ab96650115f3ecf48d29a19fe1826 not found: ID does not exist" Jan 31 08:31:25 crc 
kubenswrapper[4826]: I0131 08:31:25.078162 4826 scope.go:117] "RemoveContainer" containerID="b8874107be330762d7920fa462f8cf83c620f77ded04a477251daeb5ca19c259" Jan 31 08:31:25 crc kubenswrapper[4826]: E0131 08:31:25.078714 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8874107be330762d7920fa462f8cf83c620f77ded04a477251daeb5ca19c259\": container with ID starting with b8874107be330762d7920fa462f8cf83c620f77ded04a477251daeb5ca19c259 not found: ID does not exist" containerID="b8874107be330762d7920fa462f8cf83c620f77ded04a477251daeb5ca19c259" Jan 31 08:31:25 crc kubenswrapper[4826]: I0131 08:31:25.078817 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8874107be330762d7920fa462f8cf83c620f77ded04a477251daeb5ca19c259"} err="failed to get container status \"b8874107be330762d7920fa462f8cf83c620f77ded04a477251daeb5ca19c259\": rpc error: code = NotFound desc = could not find container \"b8874107be330762d7920fa462f8cf83c620f77ded04a477251daeb5ca19c259\": container with ID starting with b8874107be330762d7920fa462f8cf83c620f77ded04a477251daeb5ca19c259 not found: ID does not exist" Jan 31 08:31:25 crc kubenswrapper[4826]: I0131 08:31:25.078901 4826 scope.go:117] "RemoveContainer" containerID="de554531d14c6c392dea3b8147b57cfb23fdf7aa99f522cea881e3fc00597307" Jan 31 08:31:25 crc kubenswrapper[4826]: E0131 08:31:25.079558 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de554531d14c6c392dea3b8147b57cfb23fdf7aa99f522cea881e3fc00597307\": container with ID starting with de554531d14c6c392dea3b8147b57cfb23fdf7aa99f522cea881e3fc00597307 not found: ID does not exist" containerID="de554531d14c6c392dea3b8147b57cfb23fdf7aa99f522cea881e3fc00597307" Jan 31 08:31:25 crc kubenswrapper[4826]: I0131 08:31:25.079598 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de554531d14c6c392dea3b8147b57cfb23fdf7aa99f522cea881e3fc00597307"} err="failed to get container status \"de554531d14c6c392dea3b8147b57cfb23fdf7aa99f522cea881e3fc00597307\": rpc error: code = NotFound desc = could not find container \"de554531d14c6c392dea3b8147b57cfb23fdf7aa99f522cea881e3fc00597307\": container with ID starting with de554531d14c6c392dea3b8147b57cfb23fdf7aa99f522cea881e3fc00597307 not found: ID does not exist" Jan 31 08:31:26 crc kubenswrapper[4826]: I0131 08:31:26.821880 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49e97951-9430-48c5-afbd-6a9d32b4d53f" path="/var/lib/kubelet/pods/49e97951-9430-48c5-afbd-6a9d32b4d53f/volumes" Jan 31 08:31:30 crc kubenswrapper[4826]: I0131 08:31:30.808767 4826 scope.go:117] "RemoveContainer" containerID="97a91a043dac6fca9c2206d6ce242261d9f98e6fce522f81be863d2f1501deea" Jan 31 08:31:31 crc kubenswrapper[4826]: I0131 08:31:31.008603 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" event={"ID":"ed10f53b-565a-4d14-a1d8-feabc15f08ea","Type":"ContainerStarted","Data":"d66f77141338dbf9f9a65ad204d3ad137a598bc0e00afc472fde079d4d886a05"} Jan 31 08:32:21 crc kubenswrapper[4826]: I0131 08:32:21.109887 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-czdb5"] Jan 31 08:32:21 crc kubenswrapper[4826]: E0131 08:32:21.112478 4826 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="49e97951-9430-48c5-afbd-6a9d32b4d53f" containerName="extract-utilities" Jan 31 08:32:21 crc kubenswrapper[4826]: I0131 08:32:21.112582 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="49e97951-9430-48c5-afbd-6a9d32b4d53f" containerName="extract-utilities" Jan 31 08:32:21 crc kubenswrapper[4826]: E0131 08:32:21.112678 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49e97951-9430-48c5-afbd-6a9d32b4d53f" containerName="registry-server" Jan 31 08:32:21 crc kubenswrapper[4826]: I0131 08:32:21.112792 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="49e97951-9430-48c5-afbd-6a9d32b4d53f" containerName="registry-server" Jan 31 08:32:21 crc kubenswrapper[4826]: E0131 08:32:21.112891 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49e97951-9430-48c5-afbd-6a9d32b4d53f" containerName="extract-content" Jan 31 08:32:21 crc kubenswrapper[4826]: I0131 08:32:21.112959 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="49e97951-9430-48c5-afbd-6a9d32b4d53f" containerName="extract-content" Jan 31 08:32:21 crc kubenswrapper[4826]: I0131 08:32:21.113308 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="49e97951-9430-48c5-afbd-6a9d32b4d53f" containerName="registry-server" Jan 31 08:32:21 crc kubenswrapper[4826]: I0131 08:32:21.115074 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-czdb5" Jan 31 08:32:21 crc kubenswrapper[4826]: I0131 08:32:21.124574 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-czdb5"] Jan 31 08:32:21 crc kubenswrapper[4826]: I0131 08:32:21.155239 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72590847-8cd1-4372-b90e-0234a265be31-utilities\") pod \"community-operators-czdb5\" (UID: \"72590847-8cd1-4372-b90e-0234a265be31\") " pod="openshift-marketplace/community-operators-czdb5" Jan 31 08:32:21 crc kubenswrapper[4826]: I0131 08:32:21.155307 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72590847-8cd1-4372-b90e-0234a265be31-catalog-content\") pod \"community-operators-czdb5\" (UID: \"72590847-8cd1-4372-b90e-0234a265be31\") " pod="openshift-marketplace/community-operators-czdb5" Jan 31 08:32:21 crc kubenswrapper[4826]: I0131 08:32:21.155345 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs4sg\" (UniqueName: \"kubernetes.io/projected/72590847-8cd1-4372-b90e-0234a265be31-kube-api-access-cs4sg\") pod \"community-operators-czdb5\" (UID: \"72590847-8cd1-4372-b90e-0234a265be31\") " pod="openshift-marketplace/community-operators-czdb5" Jan 31 08:32:21 crc kubenswrapper[4826]: I0131 08:32:21.258097 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72590847-8cd1-4372-b90e-0234a265be31-utilities\") pod \"community-operators-czdb5\" (UID: \"72590847-8cd1-4372-b90e-0234a265be31\") " pod="openshift-marketplace/community-operators-czdb5" Jan 31 08:32:21 crc kubenswrapper[4826]: I0131 08:32:21.258177 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72590847-8cd1-4372-b90e-0234a265be31-catalog-content\") pod 
\"community-operators-czdb5\" (UID: \"72590847-8cd1-4372-b90e-0234a265be31\") " pod="openshift-marketplace/community-operators-czdb5" Jan 31 08:32:21 crc kubenswrapper[4826]: I0131 08:32:21.258216 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs4sg\" (UniqueName: \"kubernetes.io/projected/72590847-8cd1-4372-b90e-0234a265be31-kube-api-access-cs4sg\") pod \"community-operators-czdb5\" (UID: \"72590847-8cd1-4372-b90e-0234a265be31\") " pod="openshift-marketplace/community-operators-czdb5" Jan 31 08:32:21 crc kubenswrapper[4826]: I0131 08:32:21.258648 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72590847-8cd1-4372-b90e-0234a265be31-utilities\") pod \"community-operators-czdb5\" (UID: \"72590847-8cd1-4372-b90e-0234a265be31\") " pod="openshift-marketplace/community-operators-czdb5" Jan 31 08:32:21 crc kubenswrapper[4826]: I0131 08:32:21.258772 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72590847-8cd1-4372-b90e-0234a265be31-catalog-content\") pod \"community-operators-czdb5\" (UID: \"72590847-8cd1-4372-b90e-0234a265be31\") " pod="openshift-marketplace/community-operators-czdb5" Jan 31 08:32:21 crc kubenswrapper[4826]: I0131 08:32:21.281847 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs4sg\" (UniqueName: \"kubernetes.io/projected/72590847-8cd1-4372-b90e-0234a265be31-kube-api-access-cs4sg\") pod \"community-operators-czdb5\" (UID: \"72590847-8cd1-4372-b90e-0234a265be31\") " pod="openshift-marketplace/community-operators-czdb5" Jan 31 08:32:21 crc kubenswrapper[4826]: I0131 08:32:21.440064 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-czdb5" Jan 31 08:32:22 crc kubenswrapper[4826]: I0131 08:32:22.029223 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-czdb5"] Jan 31 08:32:22 crc kubenswrapper[4826]: I0131 08:32:22.484149 4826 generic.go:334] "Generic (PLEG): container finished" podID="72590847-8cd1-4372-b90e-0234a265be31" containerID="85e14ffea06c30aed6776c0106009d9f41635be84bb8bff3e3ab9bbb3c2f55e7" exitCode=0 Jan 31 08:32:22 crc kubenswrapper[4826]: I0131 08:32:22.484219 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-czdb5" event={"ID":"72590847-8cd1-4372-b90e-0234a265be31","Type":"ContainerDied","Data":"85e14ffea06c30aed6776c0106009d9f41635be84bb8bff3e3ab9bbb3c2f55e7"} Jan 31 08:32:22 crc kubenswrapper[4826]: I0131 08:32:22.484500 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-czdb5" event={"ID":"72590847-8cd1-4372-b90e-0234a265be31","Type":"ContainerStarted","Data":"dfc19506a2093f8e87486461b7e43fda14a36df211df4f28d69289332ddb1b36"} Jan 31 08:32:25 crc kubenswrapper[4826]: I0131 08:32:25.510399 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-czdb5" event={"ID":"72590847-8cd1-4372-b90e-0234a265be31","Type":"ContainerStarted","Data":"bd6af3ac4c7be971b8a1f92042a96730ce8371e2b889a09a7ed28e65d1d4c744"} Jan 31 08:32:29 crc kubenswrapper[4826]: I0131 08:32:29.556085 4826 generic.go:334] "Generic (PLEG): container finished" podID="72590847-8cd1-4372-b90e-0234a265be31" containerID="bd6af3ac4c7be971b8a1f92042a96730ce8371e2b889a09a7ed28e65d1d4c744" exitCode=0 Jan 31 08:32:29 crc kubenswrapper[4826]: I0131 08:32:29.556178 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-czdb5" event={"ID":"72590847-8cd1-4372-b90e-0234a265be31","Type":"ContainerDied","Data":"bd6af3ac4c7be971b8a1f92042a96730ce8371e2b889a09a7ed28e65d1d4c744"} Jan 31 08:32:30 crc kubenswrapper[4826]: I0131 08:32:30.567588 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-czdb5" event={"ID":"72590847-8cd1-4372-b90e-0234a265be31","Type":"ContainerStarted","Data":"ac6fc06d5aeb0a9804e55a4f78d8b962c7bf5dba7789175d51859425dbe4d186"} Jan 31 08:32:30 crc kubenswrapper[4826]: I0131 08:32:30.587117 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-czdb5" podStartSLOduration=2.106521053 podStartE2EDuration="9.587095008s" podCreationTimestamp="2026-01-31 08:32:21 +0000 UTC" firstStartedPulling="2026-01-31 08:32:22.486644562 +0000 UTC m=+3374.340530921" lastFinishedPulling="2026-01-31 08:32:29.967218517 +0000 UTC m=+3381.821104876" observedRunningTime="2026-01-31 08:32:30.581537558 +0000 UTC m=+3382.435423937" watchObservedRunningTime="2026-01-31 08:32:30.587095008 +0000 UTC m=+3382.440981367" Jan 31 08:32:31 crc kubenswrapper[4826]: I0131 08:32:31.440603 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-czdb5" Jan 31 08:32:31 crc kubenswrapper[4826]: I0131 08:32:31.440697 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-czdb5" Jan 31 08:32:32 crc kubenswrapper[4826]: I0131 08:32:32.484119 4826 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/community-operators-czdb5" podUID="72590847-8cd1-4372-b90e-0234a265be31" containerName="registry-server" probeResult="failure" output=< Jan 31 08:32:32 crc kubenswrapper[4826]: timeout: failed to connect service ":50051" within 1s Jan 31 08:32:32 crc kubenswrapper[4826]: > Jan 31 08:32:41 crc kubenswrapper[4826]: I0131 08:32:41.495692 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-czdb5" Jan 31 08:32:41 crc kubenswrapper[4826]: I0131 08:32:41.556933 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-czdb5" Jan 31 08:32:42 crc kubenswrapper[4826]: I0131 08:32:42.077864 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-czdb5"] Jan 31 08:32:42 crc kubenswrapper[4826]: I0131 08:32:42.690299 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-czdb5" podUID="72590847-8cd1-4372-b90e-0234a265be31" containerName="registry-server" containerID="cri-o://ac6fc06d5aeb0a9804e55a4f78d8b962c7bf5dba7789175d51859425dbe4d186" gracePeriod=2 Jan 31 08:32:43 crc kubenswrapper[4826]: I0131 08:32:43.188983 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-czdb5" Jan 31 08:32:43 crc kubenswrapper[4826]: I0131 08:32:43.220177 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cs4sg\" (UniqueName: \"kubernetes.io/projected/72590847-8cd1-4372-b90e-0234a265be31-kube-api-access-cs4sg\") pod \"72590847-8cd1-4372-b90e-0234a265be31\" (UID: \"72590847-8cd1-4372-b90e-0234a265be31\") " Jan 31 08:32:43 crc kubenswrapper[4826]: I0131 08:32:43.220659 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72590847-8cd1-4372-b90e-0234a265be31-utilities\") pod \"72590847-8cd1-4372-b90e-0234a265be31\" (UID: \"72590847-8cd1-4372-b90e-0234a265be31\") " Jan 31 08:32:43 crc kubenswrapper[4826]: I0131 08:32:43.220701 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72590847-8cd1-4372-b90e-0234a265be31-catalog-content\") pod \"72590847-8cd1-4372-b90e-0234a265be31\" (UID: \"72590847-8cd1-4372-b90e-0234a265be31\") " Jan 31 08:32:43 crc kubenswrapper[4826]: I0131 08:32:43.222437 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72590847-8cd1-4372-b90e-0234a265be31-utilities" (OuterVolumeSpecName: "utilities") pod "72590847-8cd1-4372-b90e-0234a265be31" (UID: "72590847-8cd1-4372-b90e-0234a265be31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:32:43 crc kubenswrapper[4826]: I0131 08:32:43.234341 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72590847-8cd1-4372-b90e-0234a265be31-kube-api-access-cs4sg" (OuterVolumeSpecName: "kube-api-access-cs4sg") pod "72590847-8cd1-4372-b90e-0234a265be31" (UID: "72590847-8cd1-4372-b90e-0234a265be31"). InnerVolumeSpecName "kube-api-access-cs4sg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:32:43 crc kubenswrapper[4826]: I0131 08:32:43.297018 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72590847-8cd1-4372-b90e-0234a265be31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "72590847-8cd1-4372-b90e-0234a265be31" (UID: "72590847-8cd1-4372-b90e-0234a265be31"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:32:43 crc kubenswrapper[4826]: I0131 08:32:43.324452 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cs4sg\" (UniqueName: \"kubernetes.io/projected/72590847-8cd1-4372-b90e-0234a265be31-kube-api-access-cs4sg\") on node \"crc\" DevicePath \"\"" Jan 31 08:32:43 crc kubenswrapper[4826]: I0131 08:32:43.324516 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72590847-8cd1-4372-b90e-0234a265be31-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 08:32:43 crc kubenswrapper[4826]: I0131 08:32:43.324528 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72590847-8cd1-4372-b90e-0234a265be31-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 08:32:43 crc kubenswrapper[4826]: I0131 08:32:43.705005 4826 generic.go:334] "Generic (PLEG): container finished" podID="72590847-8cd1-4372-b90e-0234a265be31" containerID="ac6fc06d5aeb0a9804e55a4f78d8b962c7bf5dba7789175d51859425dbe4d186" exitCode=0 Jan 31 08:32:43 crc kubenswrapper[4826]: I0131 08:32:43.705078 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-czdb5" event={"ID":"72590847-8cd1-4372-b90e-0234a265be31","Type":"ContainerDied","Data":"ac6fc06d5aeb0a9804e55a4f78d8b962c7bf5dba7789175d51859425dbe4d186"} Jan 31 08:32:43 crc kubenswrapper[4826]: I0131 08:32:43.705107 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-czdb5" event={"ID":"72590847-8cd1-4372-b90e-0234a265be31","Type":"ContainerDied","Data":"dfc19506a2093f8e87486461b7e43fda14a36df211df4f28d69289332ddb1b36"} Jan 31 08:32:43 crc kubenswrapper[4826]: I0131 08:32:43.705140 4826 scope.go:117] "RemoveContainer" containerID="ac6fc06d5aeb0a9804e55a4f78d8b962c7bf5dba7789175d51859425dbe4d186" Jan 31 08:32:43 crc kubenswrapper[4826]: I0131 08:32:43.705133 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-czdb5" Jan 31 08:32:43 crc kubenswrapper[4826]: I0131 08:32:43.727187 4826 scope.go:117] "RemoveContainer" containerID="bd6af3ac4c7be971b8a1f92042a96730ce8371e2b889a09a7ed28e65d1d4c744" Jan 31 08:32:43 crc kubenswrapper[4826]: I0131 08:32:43.751912 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-czdb5"] Jan 31 08:32:43 crc kubenswrapper[4826]: I0131 08:32:43.763533 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-czdb5"] Jan 31 08:32:43 crc kubenswrapper[4826]: I0131 08:32:43.772634 4826 scope.go:117] "RemoveContainer" containerID="85e14ffea06c30aed6776c0106009d9f41635be84bb8bff3e3ab9bbb3c2f55e7" Jan 31 08:32:43 crc kubenswrapper[4826]: I0131 08:32:43.806317 4826 scope.go:117] "RemoveContainer" containerID="ac6fc06d5aeb0a9804e55a4f78d8b962c7bf5dba7789175d51859425dbe4d186" Jan 31 08:32:43 crc kubenswrapper[4826]: E0131 08:32:43.806892 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac6fc06d5aeb0a9804e55a4f78d8b962c7bf5dba7789175d51859425dbe4d186\": container with ID starting with ac6fc06d5aeb0a9804e55a4f78d8b962c7bf5dba7789175d51859425dbe4d186 not found: ID does not exist" containerID="ac6fc06d5aeb0a9804e55a4f78d8b962c7bf5dba7789175d51859425dbe4d186" Jan 31 08:32:43 crc kubenswrapper[4826]: I0131 08:32:43.806992 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac6fc06d5aeb0a9804e55a4f78d8b962c7bf5dba7789175d51859425dbe4d186"} err="failed to get container status \"ac6fc06d5aeb0a9804e55a4f78d8b962c7bf5dba7789175d51859425dbe4d186\": rpc error: code = NotFound desc = could not find container \"ac6fc06d5aeb0a9804e55a4f78d8b962c7bf5dba7789175d51859425dbe4d186\": container with ID starting with ac6fc06d5aeb0a9804e55a4f78d8b962c7bf5dba7789175d51859425dbe4d186 not found: ID does not exist" Jan 31 08:32:43 crc kubenswrapper[4826]: I0131 08:32:43.807039 4826 scope.go:117] "RemoveContainer" containerID="bd6af3ac4c7be971b8a1f92042a96730ce8371e2b889a09a7ed28e65d1d4c744" Jan 31 08:32:43 crc kubenswrapper[4826]: E0131 08:32:43.807439 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd6af3ac4c7be971b8a1f92042a96730ce8371e2b889a09a7ed28e65d1d4c744\": container with ID starting with bd6af3ac4c7be971b8a1f92042a96730ce8371e2b889a09a7ed28e65d1d4c744 not found: ID does not exist" containerID="bd6af3ac4c7be971b8a1f92042a96730ce8371e2b889a09a7ed28e65d1d4c744" Jan 31 08:32:43 crc kubenswrapper[4826]: I0131 08:32:43.807507 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd6af3ac4c7be971b8a1f92042a96730ce8371e2b889a09a7ed28e65d1d4c744"} err="failed to get container status \"bd6af3ac4c7be971b8a1f92042a96730ce8371e2b889a09a7ed28e65d1d4c744\": rpc error: code = NotFound desc = could not find container \"bd6af3ac4c7be971b8a1f92042a96730ce8371e2b889a09a7ed28e65d1d4c744\": container with ID starting with bd6af3ac4c7be971b8a1f92042a96730ce8371e2b889a09a7ed28e65d1d4c744 not found: ID does not exist" Jan 31 08:32:43 crc kubenswrapper[4826]: I0131 08:32:43.807539 4826 scope.go:117] "RemoveContainer" containerID="85e14ffea06c30aed6776c0106009d9f41635be84bb8bff3e3ab9bbb3c2f55e7" Jan 31 08:32:43 crc kubenswrapper[4826]: E0131 08:32:43.807786 4826 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"85e14ffea06c30aed6776c0106009d9f41635be84bb8bff3e3ab9bbb3c2f55e7\": container with ID starting with 85e14ffea06c30aed6776c0106009d9f41635be84bb8bff3e3ab9bbb3c2f55e7 not found: ID does not exist" containerID="85e14ffea06c30aed6776c0106009d9f41635be84bb8bff3e3ab9bbb3c2f55e7" Jan 31 08:32:43 crc kubenswrapper[4826]: I0131 08:32:43.807820 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85e14ffea06c30aed6776c0106009d9f41635be84bb8bff3e3ab9bbb3c2f55e7"} err="failed to get container status \"85e14ffea06c30aed6776c0106009d9f41635be84bb8bff3e3ab9bbb3c2f55e7\": rpc error: code = NotFound desc = could not find container \"85e14ffea06c30aed6776c0106009d9f41635be84bb8bff3e3ab9bbb3c2f55e7\": container with ID starting with 85e14ffea06c30aed6776c0106009d9f41635be84bb8bff3e3ab9bbb3c2f55e7 not found: ID does not exist" Jan 31 08:32:44 crc kubenswrapper[4826]: I0131 08:32:44.819033 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72590847-8cd1-4372-b90e-0234a265be31" path="/var/lib/kubelet/pods/72590847-8cd1-4372-b90e-0234a265be31/volumes" Jan 31 08:33:57 crc kubenswrapper[4826]: I0131 08:33:57.377726 4826 patch_prober.go:28] interesting pod/machine-config-daemon-8v6ng container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 08:33:57 crc kubenswrapper[4826]: I0131 08:33:57.378985 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 08:34:27 crc kubenswrapper[4826]: I0131 08:34:27.376670 4826 patch_prober.go:28] interesting pod/machine-config-daemon-8v6ng container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 08:34:27 crc kubenswrapper[4826]: I0131 08:34:27.377177 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 08:34:28 crc kubenswrapper[4826]: I0131 08:34:28.054501 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-1c25-account-create-update-xsw6h"] Jan 31 08:34:28 crc kubenswrapper[4826]: I0131 08:34:28.070950 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-6kb8c"] Jan 31 08:34:28 crc kubenswrapper[4826]: I0131 08:34:28.082089 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-1c25-account-create-update-xsw6h"] Jan 31 08:34:28 crc kubenswrapper[4826]: I0131 08:34:28.091281 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-6kb8c"] Jan 31 08:34:28 crc kubenswrapper[4826]: I0131 08:34:28.821115 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c6e6e62-07ea-463d-9b6e-b7980b0c51b1" 
path="/var/lib/kubelet/pods/4c6e6e62-07ea-463d-9b6e-b7980b0c51b1/volumes" Jan 31 08:34:28 crc kubenswrapper[4826]: I0131 08:34:28.822472 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d22c6c41-3d9d-4e39-b13c-c95542716ed2" path="/var/lib/kubelet/pods/d22c6c41-3d9d-4e39-b13c-c95542716ed2/volumes" Jan 31 08:34:33 crc kubenswrapper[4826]: I0131 08:34:33.667843 4826 scope.go:117] "RemoveContainer" containerID="20356d0e4ad5c77a600807d8cec552f292b2910997ef84f55c0742936f8d241c" Jan 31 08:34:33 crc kubenswrapper[4826]: I0131 08:34:33.702266 4826 scope.go:117] "RemoveContainer" containerID="a95f95a05bb4a61e2d1b613e0f74dd18a238357187d4185928a9f8bd1c78e574" Jan 31 08:34:57 crc kubenswrapper[4826]: I0131 08:34:57.377317 4826 patch_prober.go:28] interesting pod/machine-config-daemon-8v6ng container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 08:34:57 crc kubenswrapper[4826]: I0131 08:34:57.378093 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 08:34:57 crc kubenswrapper[4826]: I0131 08:34:57.378177 4826 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" Jan 31 08:34:57 crc kubenswrapper[4826]: I0131 08:34:57.379281 4826 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d66f77141338dbf9f9a65ad204d3ad137a598bc0e00afc472fde079d4d886a05"} pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 08:34:57 crc kubenswrapper[4826]: I0131 08:34:57.379379 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" containerID="cri-o://d66f77141338dbf9f9a65ad204d3ad137a598bc0e00afc472fde079d4d886a05" gracePeriod=600 Jan 31 08:34:57 crc kubenswrapper[4826]: I0131 08:34:57.972223 4826 generic.go:334] "Generic (PLEG): container finished" podID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerID="d66f77141338dbf9f9a65ad204d3ad137a598bc0e00afc472fde079d4d886a05" exitCode=0 Jan 31 08:34:57 crc kubenswrapper[4826]: I0131 08:34:57.972545 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" event={"ID":"ed10f53b-565a-4d14-a1d8-feabc15f08ea","Type":"ContainerDied","Data":"d66f77141338dbf9f9a65ad204d3ad137a598bc0e00afc472fde079d4d886a05"} Jan 31 08:34:57 crc kubenswrapper[4826]: I0131 08:34:57.972930 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" event={"ID":"ed10f53b-565a-4d14-a1d8-feabc15f08ea","Type":"ContainerStarted","Data":"e50c5a21de69947e5e4158ecd9eb1c18ce3f33a5d5d37aba1dbb26c3efbfdb2e"} Jan 31 08:34:57 crc kubenswrapper[4826]: I0131 08:34:57.972981 4826 scope.go:117] "RemoveContainer" 
containerID="97a91a043dac6fca9c2206d6ce242261d9f98e6fce522f81be863d2f1501deea" Jan 31 08:35:03 crc kubenswrapper[4826]: I0131 08:35:03.054210 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-8br82"] Jan 31 08:35:03 crc kubenswrapper[4826]: I0131 08:35:03.070166 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-8br82"] Jan 31 08:35:04 crc kubenswrapper[4826]: I0131 08:35:04.820792 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28dc0f2a-20c1-4913-880c-dd9c2046e096" path="/var/lib/kubelet/pods/28dc0f2a-20c1-4913-880c-dd9c2046e096/volumes" Jan 31 08:35:17 crc kubenswrapper[4826]: I0131 08:35:17.191734 4826 generic.go:334] "Generic (PLEG): container finished" podID="e068e101-7fa0-42fa-b34b-fb9ba93466aa" containerID="45145f14ded6d40a4b73423be9c97d59626c50b7fe2b67a67b1ab090ce17e78e" exitCode=1 Jan 31 08:35:17 crc kubenswrapper[4826]: I0131 08:35:17.191830 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-full" event={"ID":"e068e101-7fa0-42fa-b34b-fb9ba93466aa","Type":"ContainerDied","Data":"45145f14ded6d40a4b73423be9c97d59626c50b7fe2b67a67b1ab090ce17e78e"} Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.704745 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.780933 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest-s01-single-test"] Jan 31 08:35:18 crc kubenswrapper[4826]: E0131 08:35:18.781442 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e068e101-7fa0-42fa-b34b-fb9ba93466aa" containerName="tempest-tests-tempest-tests-runner" Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.781465 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="e068e101-7fa0-42fa-b34b-fb9ba93466aa" containerName="tempest-tests-tempest-tests-runner" Jan 31 08:35:18 crc kubenswrapper[4826]: E0131 08:35:18.781475 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72590847-8cd1-4372-b90e-0234a265be31" containerName="extract-utilities" Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.781483 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="72590847-8cd1-4372-b90e-0234a265be31" containerName="extract-utilities" Jan 31 08:35:18 crc kubenswrapper[4826]: E0131 08:35:18.781514 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72590847-8cd1-4372-b90e-0234a265be31" containerName="registry-server" Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.781523 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="72590847-8cd1-4372-b90e-0234a265be31" containerName="registry-server" Jan 31 08:35:18 crc kubenswrapper[4826]: E0131 08:35:18.781550 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72590847-8cd1-4372-b90e-0234a265be31" containerName="extract-content" Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.781558 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="72590847-8cd1-4372-b90e-0234a265be31" containerName="extract-content" Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.781947 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="e068e101-7fa0-42fa-b34b-fb9ba93466aa" containerName="tempest-tests-tempest-tests-runner" Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.781994 4826 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="72590847-8cd1-4372-b90e-0234a265be31" containerName="registry-server" Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.782820 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.786185 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s1" Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.788816 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s1" Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.796779 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s01-single-test"] Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.888672 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e068e101-7fa0-42fa-b34b-fb9ba93466aa-ceph\") pod \"e068e101-7fa0-42fa-b34b-fb9ba93466aa\" (UID: \"e068e101-7fa0-42fa-b34b-fb9ba93466aa\") " Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.888736 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"e068e101-7fa0-42fa-b34b-fb9ba93466aa\" (UID: \"e068e101-7fa0-42fa-b34b-fb9ba93466aa\") " Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.888769 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e068e101-7fa0-42fa-b34b-fb9ba93466aa-openstack-config-secret\") pod \"e068e101-7fa0-42fa-b34b-fb9ba93466aa\" (UID: \"e068e101-7fa0-42fa-b34b-fb9ba93466aa\") " Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.888877 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e068e101-7fa0-42fa-b34b-fb9ba93466aa-test-operator-ephemeral-workdir\") pod \"e068e101-7fa0-42fa-b34b-fb9ba93466aa\" (UID: \"e068e101-7fa0-42fa-b34b-fb9ba93466aa\") " Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.888911 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e068e101-7fa0-42fa-b34b-fb9ba93466aa-ca-certs\") pod \"e068e101-7fa0-42fa-b34b-fb9ba93466aa\" (UID: \"e068e101-7fa0-42fa-b34b-fb9ba93466aa\") " Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.888960 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgnlw\" (UniqueName: \"kubernetes.io/projected/e068e101-7fa0-42fa-b34b-fb9ba93466aa-kube-api-access-pgnlw\") pod \"e068e101-7fa0-42fa-b34b-fb9ba93466aa\" (UID: \"e068e101-7fa0-42fa-b34b-fb9ba93466aa\") " Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.889006 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e068e101-7fa0-42fa-b34b-fb9ba93466aa-test-operator-ephemeral-temporary\") pod \"e068e101-7fa0-42fa-b34b-fb9ba93466aa\" (UID: \"e068e101-7fa0-42fa-b34b-fb9ba93466aa\") " Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.889072 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/e068e101-7fa0-42fa-b34b-fb9ba93466aa-openstack-config\") pod \"e068e101-7fa0-42fa-b34b-fb9ba93466aa\" (UID: \"e068e101-7fa0-42fa-b34b-fb9ba93466aa\") " Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.889110 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e068e101-7fa0-42fa-b34b-fb9ba93466aa-ssh-key\") pod \"e068e101-7fa0-42fa-b34b-fb9ba93466aa\" (UID: \"e068e101-7fa0-42fa-b34b-fb9ba93466aa\") " Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.889161 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e068e101-7fa0-42fa-b34b-fb9ba93466aa-config-data\") pod \"e068e101-7fa0-42fa-b34b-fb9ba93466aa\" (UID: \"e068e101-7fa0-42fa-b34b-fb9ba93466aa\") " Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.889479 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvtxf\" (UniqueName: \"kubernetes.io/projected/c92b3fc7-df90-4a08-bb1b-aebb65e316b8-kube-api-access-pvtxf\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"c92b3fc7-df90-4a08-bb1b-aebb65e316b8\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.889557 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c92b3fc7-df90-4a08-bb1b-aebb65e316b8-openstack-config\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"c92b3fc7-df90-4a08-bb1b-aebb65e316b8\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.889603 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c92b3fc7-df90-4a08-bb1b-aebb65e316b8-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"c92b3fc7-df90-4a08-bb1b-aebb65e316b8\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.889648 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c92b3fc7-df90-4a08-bb1b-aebb65e316b8-config-data\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"c92b3fc7-df90-4a08-bb1b-aebb65e316b8\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.889683 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/c92b3fc7-df90-4a08-bb1b-aebb65e316b8-ca-certs\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"c92b3fc7-df90-4a08-bb1b-aebb65e316b8\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.889746 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/c92b3fc7-df90-4a08-bb1b-aebb65e316b8-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"c92b3fc7-df90-4a08-bb1b-aebb65e316b8\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.889815 4826 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/c92b3fc7-df90-4a08-bb1b-aebb65e316b8-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"c92b3fc7-df90-4a08-bb1b-aebb65e316b8\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.889838 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c92b3fc7-df90-4a08-bb1b-aebb65e316b8-ssh-key\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"c92b3fc7-df90-4a08-bb1b-aebb65e316b8\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.889859 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c92b3fc7-df90-4a08-bb1b-aebb65e316b8-ceph\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"c92b3fc7-df90-4a08-bb1b-aebb65e316b8\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.890712 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e068e101-7fa0-42fa-b34b-fb9ba93466aa-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "e068e101-7fa0-42fa-b34b-fb9ba93466aa" (UID: "e068e101-7fa0-42fa-b34b-fb9ba93466aa"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.891165 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e068e101-7fa0-42fa-b34b-fb9ba93466aa-config-data" (OuterVolumeSpecName: "config-data") pod "e068e101-7fa0-42fa-b34b-fb9ba93466aa" (UID: "e068e101-7fa0-42fa-b34b-fb9ba93466aa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.895452 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "test-operator-logs") pod "e068e101-7fa0-42fa-b34b-fb9ba93466aa" (UID: "e068e101-7fa0-42fa-b34b-fb9ba93466aa"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.895569 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e068e101-7fa0-42fa-b34b-fb9ba93466aa-kube-api-access-pgnlw" (OuterVolumeSpecName: "kube-api-access-pgnlw") pod "e068e101-7fa0-42fa-b34b-fb9ba93466aa" (UID: "e068e101-7fa0-42fa-b34b-fb9ba93466aa"). InnerVolumeSpecName "kube-api-access-pgnlw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.903832 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e068e101-7fa0-42fa-b34b-fb9ba93466aa-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "e068e101-7fa0-42fa-b34b-fb9ba93466aa" (UID: "e068e101-7fa0-42fa-b34b-fb9ba93466aa"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.908226 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e068e101-7fa0-42fa-b34b-fb9ba93466aa-ceph" (OuterVolumeSpecName: "ceph") pod "e068e101-7fa0-42fa-b34b-fb9ba93466aa" (UID: "e068e101-7fa0-42fa-b34b-fb9ba93466aa"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.924866 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e068e101-7fa0-42fa-b34b-fb9ba93466aa-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "e068e101-7fa0-42fa-b34b-fb9ba93466aa" (UID: "e068e101-7fa0-42fa-b34b-fb9ba93466aa"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.928039 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e068e101-7fa0-42fa-b34b-fb9ba93466aa-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e068e101-7fa0-42fa-b34b-fb9ba93466aa" (UID: "e068e101-7fa0-42fa-b34b-fb9ba93466aa"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.939582 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e068e101-7fa0-42fa-b34b-fb9ba93466aa-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "e068e101-7fa0-42fa-b34b-fb9ba93466aa" (UID: "e068e101-7fa0-42fa-b34b-fb9ba93466aa"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.943786 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e068e101-7fa0-42fa-b34b-fb9ba93466aa-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "e068e101-7fa0-42fa-b34b-fb9ba93466aa" (UID: "e068e101-7fa0-42fa-b34b-fb9ba93466aa"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.992257 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/c92b3fc7-df90-4a08-bb1b-aebb65e316b8-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"c92b3fc7-df90-4a08-bb1b-aebb65e316b8\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.992303 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c92b3fc7-df90-4a08-bb1b-aebb65e316b8-ssh-key\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"c92b3fc7-df90-4a08-bb1b-aebb65e316b8\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.992320 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c92b3fc7-df90-4a08-bb1b-aebb65e316b8-ceph\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"c92b3fc7-df90-4a08-bb1b-aebb65e316b8\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.992410 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvtxf\" (UniqueName: \"kubernetes.io/projected/c92b3fc7-df90-4a08-bb1b-aebb65e316b8-kube-api-access-pvtxf\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"c92b3fc7-df90-4a08-bb1b-aebb65e316b8\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.992458 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"c92b3fc7-df90-4a08-bb1b-aebb65e316b8\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.992478 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c92b3fc7-df90-4a08-bb1b-aebb65e316b8-openstack-config\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"c92b3fc7-df90-4a08-bb1b-aebb65e316b8\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.992514 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c92b3fc7-df90-4a08-bb1b-aebb65e316b8-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"c92b3fc7-df90-4a08-bb1b-aebb65e316b8\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.992566 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c92b3fc7-df90-4a08-bb1b-aebb65e316b8-config-data\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"c92b3fc7-df90-4a08-bb1b-aebb65e316b8\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.992600 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/c92b3fc7-df90-4a08-bb1b-aebb65e316b8-ca-certs\") 
pod \"tempest-tests-tempest-s01-single-test\" (UID: \"c92b3fc7-df90-4a08-bb1b-aebb65e316b8\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.992628 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/c92b3fc7-df90-4a08-bb1b-aebb65e316b8-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"c92b3fc7-df90-4a08-bb1b-aebb65e316b8\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.992691 4826 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e068e101-7fa0-42fa-b34b-fb9ba93466aa-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.992702 4826 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e068e101-7fa0-42fa-b34b-fb9ba93466aa-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.992712 4826 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e068e101-7fa0-42fa-b34b-fb9ba93466aa-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.992722 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e068e101-7fa0-42fa-b34b-fb9ba93466aa-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.992732 4826 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e068e101-7fa0-42fa-b34b-fb9ba93466aa-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.992744 4826 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e068e101-7fa0-42fa-b34b-fb9ba93466aa-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.992757 4826 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e068e101-7fa0-42fa-b34b-fb9ba93466aa-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.992765 4826 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e068e101-7fa0-42fa-b34b-fb9ba93466aa-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.992775 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgnlw\" (UniqueName: \"kubernetes.io/projected/e068e101-7fa0-42fa-b34b-fb9ba93466aa-kube-api-access-pgnlw\") on node \"crc\" DevicePath \"\"" Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.993913 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/c92b3fc7-df90-4a08-bb1b-aebb65e316b8-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"c92b3fc7-df90-4a08-bb1b-aebb65e316b8\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.995164 4826 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/c92b3fc7-df90-4a08-bb1b-aebb65e316b8-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"c92b3fc7-df90-4a08-bb1b-aebb65e316b8\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.996769 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c92b3fc7-df90-4a08-bb1b-aebb65e316b8-openstack-config\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"c92b3fc7-df90-4a08-bb1b-aebb65e316b8\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.996815 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c92b3fc7-df90-4a08-bb1b-aebb65e316b8-config-data\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"c92b3fc7-df90-4a08-bb1b-aebb65e316b8\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.999145 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c92b3fc7-df90-4a08-bb1b-aebb65e316b8-ceph\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"c92b3fc7-df90-4a08-bb1b-aebb65e316b8\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.999436 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/c92b3fc7-df90-4a08-bb1b-aebb65e316b8-ca-certs\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"c92b3fc7-df90-4a08-bb1b-aebb65e316b8\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:35:18 crc kubenswrapper[4826]: I0131 08:35:18.999795 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c92b3fc7-df90-4a08-bb1b-aebb65e316b8-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"c92b3fc7-df90-4a08-bb1b-aebb65e316b8\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:35:19 crc kubenswrapper[4826]: I0131 08:35:19.000477 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c92b3fc7-df90-4a08-bb1b-aebb65e316b8-ssh-key\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"c92b3fc7-df90-4a08-bb1b-aebb65e316b8\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:35:19 crc kubenswrapper[4826]: I0131 08:35:19.012284 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvtxf\" (UniqueName: \"kubernetes.io/projected/c92b3fc7-df90-4a08-bb1b-aebb65e316b8-kube-api-access-pvtxf\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"c92b3fc7-df90-4a08-bb1b-aebb65e316b8\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:35:19 crc kubenswrapper[4826]: I0131 08:35:19.028053 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"c92b3fc7-df90-4a08-bb1b-aebb65e316b8\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:35:19 crc kubenswrapper[4826]: 
I0131 08:35:19.110660 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:35:19 crc kubenswrapper[4826]: I0131 08:35:19.217219 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-full" event={"ID":"e068e101-7fa0-42fa-b34b-fb9ba93466aa","Type":"ContainerDied","Data":"f0499a2af22dcccd7d569529082e693ecd452c59a85bc6712b9cadd1e93dd382"} Jan 31 08:35:19 crc kubenswrapper[4826]: I0131 08:35:19.217518 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0499a2af22dcccd7d569529082e693ecd452c59a85bc6712b9cadd1e93dd382" Jan 31 08:35:19 crc kubenswrapper[4826]: I0131 08:35:19.217327 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s00-full" Jan 31 08:35:19 crc kubenswrapper[4826]: I0131 08:35:19.676733 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s01-single-test"] Jan 31 08:35:20 crc kubenswrapper[4826]: I0131 08:35:20.242272 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-test" event={"ID":"c92b3fc7-df90-4a08-bb1b-aebb65e316b8","Type":"ContainerStarted","Data":"3cc48f734cdf1177a9f249ac43ca4fa5977d9597d5a105f1ee2b412158e2c7a7"} Jan 31 08:35:21 crc kubenswrapper[4826]: I0131 08:35:21.256237 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-test" event={"ID":"c92b3fc7-df90-4a08-bb1b-aebb65e316b8","Type":"ContainerStarted","Data":"32527760756f623f7ef36cfb7cc2f3ce36205ca7732b15409555f9c8ecdb8afc"} Jan 31 08:35:21 crc kubenswrapper[4826]: I0131 08:35:21.287434 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest-s01-single-test" podStartSLOduration=3.287410398 podStartE2EDuration="3.287410398s" podCreationTimestamp="2026-01-31 08:35:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 08:35:21.276117411 +0000 UTC m=+3553.130003770" watchObservedRunningTime="2026-01-31 08:35:21.287410398 +0000 UTC m=+3553.141296767" Jan 31 08:35:33 crc kubenswrapper[4826]: I0131 08:35:33.820305 4826 scope.go:117] "RemoveContainer" containerID="e9d5a2e49b1e4665b8404c72033ff7dbdd31bad6e9ec446a677837505ae10d9a" Jan 31 08:36:57 crc kubenswrapper[4826]: I0131 08:36:57.377612 4826 patch_prober.go:28] interesting pod/machine-config-daemon-8v6ng container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 08:36:57 crc kubenswrapper[4826]: I0131 08:36:57.378351 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 08:37:27 crc kubenswrapper[4826]: I0131 08:37:27.377294 4826 patch_prober.go:28] interesting pod/machine-config-daemon-8v6ng container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Jan 31 08:37:27 crc kubenswrapper[4826]: I0131 08:37:27.378098 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 08:37:57 crc kubenswrapper[4826]: I0131 08:37:57.383094 4826 patch_prober.go:28] interesting pod/machine-config-daemon-8v6ng container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 08:37:57 crc kubenswrapper[4826]: I0131 08:37:57.383806 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 08:37:57 crc kubenswrapper[4826]: I0131 08:37:57.383878 4826 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" Jan 31 08:37:57 crc kubenswrapper[4826]: I0131 08:37:57.385615 4826 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e50c5a21de69947e5e4158ecd9eb1c18ce3f33a5d5d37aba1dbb26c3efbfdb2e"} pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 08:37:57 crc kubenswrapper[4826]: I0131 08:37:57.385698 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" containerID="cri-o://e50c5a21de69947e5e4158ecd9eb1c18ce3f33a5d5d37aba1dbb26c3efbfdb2e" gracePeriod=600 Jan 31 08:37:57 crc kubenswrapper[4826]: E0131 08:37:57.512952 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:37:57 crc kubenswrapper[4826]: I0131 08:37:57.824638 4826 generic.go:334] "Generic (PLEG): container finished" podID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerID="e50c5a21de69947e5e4158ecd9eb1c18ce3f33a5d5d37aba1dbb26c3efbfdb2e" exitCode=0 Jan 31 08:37:57 crc kubenswrapper[4826]: I0131 08:37:57.824691 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" event={"ID":"ed10f53b-565a-4d14-a1d8-feabc15f08ea","Type":"ContainerDied","Data":"e50c5a21de69947e5e4158ecd9eb1c18ce3f33a5d5d37aba1dbb26c3efbfdb2e"} Jan 31 08:37:57 crc kubenswrapper[4826]: I0131 08:37:57.824732 4826 scope.go:117] "RemoveContainer" containerID="d66f77141338dbf9f9a65ad204d3ad137a598bc0e00afc472fde079d4d886a05" Jan 31 08:37:57 crc kubenswrapper[4826]: I0131 08:37:57.825527 4826 scope.go:117] "RemoveContainer" 
containerID="e50c5a21de69947e5e4158ecd9eb1c18ce3f33a5d5d37aba1dbb26c3efbfdb2e" Jan 31 08:37:57 crc kubenswrapper[4826]: E0131 08:37:57.825848 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:38:10 crc kubenswrapper[4826]: I0131 08:38:10.808934 4826 scope.go:117] "RemoveContainer" containerID="e50c5a21de69947e5e4158ecd9eb1c18ce3f33a5d5d37aba1dbb26c3efbfdb2e" Jan 31 08:38:10 crc kubenswrapper[4826]: E0131 08:38:10.809775 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:38:23 crc kubenswrapper[4826]: I0131 08:38:23.809158 4826 scope.go:117] "RemoveContainer" containerID="e50c5a21de69947e5e4158ecd9eb1c18ce3f33a5d5d37aba1dbb26c3efbfdb2e" Jan 31 08:38:23 crc kubenswrapper[4826]: E0131 08:38:23.809809 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:38:35 crc kubenswrapper[4826]: I0131 08:38:35.808789 4826 scope.go:117] "RemoveContainer" containerID="e50c5a21de69947e5e4158ecd9eb1c18ce3f33a5d5d37aba1dbb26c3efbfdb2e" Jan 31 08:38:35 crc kubenswrapper[4826]: E0131 08:38:35.809489 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:38:48 crc kubenswrapper[4826]: I0131 08:38:48.827473 4826 scope.go:117] "RemoveContainer" containerID="e50c5a21de69947e5e4158ecd9eb1c18ce3f33a5d5d37aba1dbb26c3efbfdb2e" Jan 31 08:38:48 crc kubenswrapper[4826]: E0131 08:38:48.831495 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:38:59 crc kubenswrapper[4826]: I0131 08:38:59.808542 4826 scope.go:117] "RemoveContainer" containerID="e50c5a21de69947e5e4158ecd9eb1c18ce3f33a5d5d37aba1dbb26c3efbfdb2e" Jan 31 08:38:59 crc kubenswrapper[4826]: E0131 08:38:59.809370 4826 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:39:10 crc kubenswrapper[4826]: I0131 08:39:10.813160 4826 scope.go:117] "RemoveContainer" containerID="e50c5a21de69947e5e4158ecd9eb1c18ce3f33a5d5d37aba1dbb26c3efbfdb2e" Jan 31 08:39:10 crc kubenswrapper[4826]: E0131 08:39:10.814874 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:39:21 crc kubenswrapper[4826]: I0131 08:39:21.564835 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-w68mn"] Jan 31 08:39:21 crc kubenswrapper[4826]: I0131 08:39:21.572384 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w68mn" Jan 31 08:39:21 crc kubenswrapper[4826]: I0131 08:39:21.578676 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w68mn"] Jan 31 08:39:21 crc kubenswrapper[4826]: I0131 08:39:21.671681 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7d0b813-f13f-4ce9-aeea-9c7b4f15691e-utilities\") pod \"certified-operators-w68mn\" (UID: \"e7d0b813-f13f-4ce9-aeea-9c7b4f15691e\") " pod="openshift-marketplace/certified-operators-w68mn" Jan 31 08:39:21 crc kubenswrapper[4826]: I0131 08:39:21.671755 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7d0b813-f13f-4ce9-aeea-9c7b4f15691e-catalog-content\") pod \"certified-operators-w68mn\" (UID: \"e7d0b813-f13f-4ce9-aeea-9c7b4f15691e\") " pod="openshift-marketplace/certified-operators-w68mn" Jan 31 08:39:21 crc kubenswrapper[4826]: I0131 08:39:21.671911 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spchd\" (UniqueName: \"kubernetes.io/projected/e7d0b813-f13f-4ce9-aeea-9c7b4f15691e-kube-api-access-spchd\") pod \"certified-operators-w68mn\" (UID: \"e7d0b813-f13f-4ce9-aeea-9c7b4f15691e\") " pod="openshift-marketplace/certified-operators-w68mn" Jan 31 08:39:21 crc kubenswrapper[4826]: I0131 08:39:21.773444 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7d0b813-f13f-4ce9-aeea-9c7b4f15691e-utilities\") pod \"certified-operators-w68mn\" (UID: \"e7d0b813-f13f-4ce9-aeea-9c7b4f15691e\") " pod="openshift-marketplace/certified-operators-w68mn" Jan 31 08:39:21 crc kubenswrapper[4826]: I0131 08:39:21.773508 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7d0b813-f13f-4ce9-aeea-9c7b4f15691e-catalog-content\") pod \"certified-operators-w68mn\" (UID: 
\"e7d0b813-f13f-4ce9-aeea-9c7b4f15691e\") " pod="openshift-marketplace/certified-operators-w68mn" Jan 31 08:39:21 crc kubenswrapper[4826]: I0131 08:39:21.773594 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spchd\" (UniqueName: \"kubernetes.io/projected/e7d0b813-f13f-4ce9-aeea-9c7b4f15691e-kube-api-access-spchd\") pod \"certified-operators-w68mn\" (UID: \"e7d0b813-f13f-4ce9-aeea-9c7b4f15691e\") " pod="openshift-marketplace/certified-operators-w68mn" Jan 31 08:39:21 crc kubenswrapper[4826]: I0131 08:39:21.774151 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7d0b813-f13f-4ce9-aeea-9c7b4f15691e-utilities\") pod \"certified-operators-w68mn\" (UID: \"e7d0b813-f13f-4ce9-aeea-9c7b4f15691e\") " pod="openshift-marketplace/certified-operators-w68mn" Jan 31 08:39:21 crc kubenswrapper[4826]: I0131 08:39:21.774215 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7d0b813-f13f-4ce9-aeea-9c7b4f15691e-catalog-content\") pod \"certified-operators-w68mn\" (UID: \"e7d0b813-f13f-4ce9-aeea-9c7b4f15691e\") " pod="openshift-marketplace/certified-operators-w68mn" Jan 31 08:39:21 crc kubenswrapper[4826]: I0131 08:39:21.799042 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spchd\" (UniqueName: \"kubernetes.io/projected/e7d0b813-f13f-4ce9-aeea-9c7b4f15691e-kube-api-access-spchd\") pod \"certified-operators-w68mn\" (UID: \"e7d0b813-f13f-4ce9-aeea-9c7b4f15691e\") " pod="openshift-marketplace/certified-operators-w68mn" Jan 31 08:39:21 crc kubenswrapper[4826]: I0131 08:39:21.909953 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w68mn" Jan 31 08:39:22 crc kubenswrapper[4826]: I0131 08:39:22.462878 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w68mn"] Jan 31 08:39:22 crc kubenswrapper[4826]: I0131 08:39:22.814685 4826 generic.go:334] "Generic (PLEG): container finished" podID="e7d0b813-f13f-4ce9-aeea-9c7b4f15691e" containerID="7dff9a24086f5d3d08df9caed1bb78d826ac87db791e5653841c8745c0a6fd12" exitCode=0 Jan 31 08:39:22 crc kubenswrapper[4826]: I0131 08:39:22.818380 4826 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 08:39:22 crc kubenswrapper[4826]: I0131 08:39:22.823203 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w68mn" event={"ID":"e7d0b813-f13f-4ce9-aeea-9c7b4f15691e","Type":"ContainerDied","Data":"7dff9a24086f5d3d08df9caed1bb78d826ac87db791e5653841c8745c0a6fd12"} Jan 31 08:39:22 crc kubenswrapper[4826]: I0131 08:39:22.823252 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w68mn" event={"ID":"e7d0b813-f13f-4ce9-aeea-9c7b4f15691e","Type":"ContainerStarted","Data":"6da5affe0e2f397f60c7ca658f8c88c173a87ed2db0ee05583725c351981acce"} Jan 31 08:39:23 crc kubenswrapper[4826]: I0131 08:39:23.824239 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w68mn" event={"ID":"e7d0b813-f13f-4ce9-aeea-9c7b4f15691e","Type":"ContainerStarted","Data":"cbad1a80706ab9143eb1797eaeeb4f2e22f4890443369a78d48bb75e4158b500"} Jan 31 08:39:24 crc kubenswrapper[4826]: I0131 08:39:24.808911 4826 scope.go:117] "RemoveContainer" containerID="e50c5a21de69947e5e4158ecd9eb1c18ce3f33a5d5d37aba1dbb26c3efbfdb2e" Jan 31 08:39:24 crc kubenswrapper[4826]: E0131 08:39:24.809509 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:39:24 crc kubenswrapper[4826]: I0131 08:39:24.840026 4826 generic.go:334] "Generic (PLEG): container finished" podID="e7d0b813-f13f-4ce9-aeea-9c7b4f15691e" containerID="cbad1a80706ab9143eb1797eaeeb4f2e22f4890443369a78d48bb75e4158b500" exitCode=0 Jan 31 08:39:24 crc kubenswrapper[4826]: I0131 08:39:24.840092 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w68mn" event={"ID":"e7d0b813-f13f-4ce9-aeea-9c7b4f15691e","Type":"ContainerDied","Data":"cbad1a80706ab9143eb1797eaeeb4f2e22f4890443369a78d48bb75e4158b500"} Jan 31 08:39:26 crc kubenswrapper[4826]: I0131 08:39:26.860324 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w68mn" event={"ID":"e7d0b813-f13f-4ce9-aeea-9c7b4f15691e","Type":"ContainerStarted","Data":"ceb4d27f4dc38100480086c2f83ee98ceb7e7432cd3748c55a45d9abfce5d52c"} Jan 31 08:39:26 crc kubenswrapper[4826]: I0131 08:39:26.889648 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-w68mn" podStartSLOduration=3.238328703 podStartE2EDuration="5.889622403s" podCreationTimestamp="2026-01-31 08:39:21 +0000 UTC" 
firstStartedPulling="2026-01-31 08:39:22.818192979 +0000 UTC m=+3794.672079338" lastFinishedPulling="2026-01-31 08:39:25.469486689 +0000 UTC m=+3797.323373038" observedRunningTime="2026-01-31 08:39:26.882192668 +0000 UTC m=+3798.736079027" watchObservedRunningTime="2026-01-31 08:39:26.889622403 +0000 UTC m=+3798.743508762" Jan 31 08:39:31 crc kubenswrapper[4826]: I0131 08:39:31.910504 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-w68mn" Jan 31 08:39:31 crc kubenswrapper[4826]: I0131 08:39:31.911166 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-w68mn" Jan 31 08:39:31 crc kubenswrapper[4826]: I0131 08:39:31.965167 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-w68mn" Jan 31 08:39:32 crc kubenswrapper[4826]: I0131 08:39:32.020579 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-w68mn" Jan 31 08:39:32 crc kubenswrapper[4826]: I0131 08:39:32.206752 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w68mn"] Jan 31 08:39:33 crc kubenswrapper[4826]: I0131 08:39:33.951660 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-w68mn" podUID="e7d0b813-f13f-4ce9-aeea-9c7b4f15691e" containerName="registry-server" containerID="cri-o://ceb4d27f4dc38100480086c2f83ee98ceb7e7432cd3748c55a45d9abfce5d52c" gracePeriod=2 Jan 31 08:39:34 crc kubenswrapper[4826]: I0131 08:39:34.429593 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w68mn" Jan 31 08:39:34 crc kubenswrapper[4826]: I0131 08:39:34.535389 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spchd\" (UniqueName: \"kubernetes.io/projected/e7d0b813-f13f-4ce9-aeea-9c7b4f15691e-kube-api-access-spchd\") pod \"e7d0b813-f13f-4ce9-aeea-9c7b4f15691e\" (UID: \"e7d0b813-f13f-4ce9-aeea-9c7b4f15691e\") " Jan 31 08:39:34 crc kubenswrapper[4826]: I0131 08:39:34.535524 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7d0b813-f13f-4ce9-aeea-9c7b4f15691e-catalog-content\") pod \"e7d0b813-f13f-4ce9-aeea-9c7b4f15691e\" (UID: \"e7d0b813-f13f-4ce9-aeea-9c7b4f15691e\") " Jan 31 08:39:34 crc kubenswrapper[4826]: I0131 08:39:34.537198 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7d0b813-f13f-4ce9-aeea-9c7b4f15691e-utilities\") pod \"e7d0b813-f13f-4ce9-aeea-9c7b4f15691e\" (UID: \"e7d0b813-f13f-4ce9-aeea-9c7b4f15691e\") " Jan 31 08:39:34 crc kubenswrapper[4826]: I0131 08:39:34.537926 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7d0b813-f13f-4ce9-aeea-9c7b4f15691e-utilities" (OuterVolumeSpecName: "utilities") pod "e7d0b813-f13f-4ce9-aeea-9c7b4f15691e" (UID: "e7d0b813-f13f-4ce9-aeea-9c7b4f15691e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:39:34 crc kubenswrapper[4826]: I0131 08:39:34.544047 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7d0b813-f13f-4ce9-aeea-9c7b4f15691e-kube-api-access-spchd" (OuterVolumeSpecName: "kube-api-access-spchd") pod "e7d0b813-f13f-4ce9-aeea-9c7b4f15691e" (UID: "e7d0b813-f13f-4ce9-aeea-9c7b4f15691e"). InnerVolumeSpecName "kube-api-access-spchd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:39:34 crc kubenswrapper[4826]: I0131 08:39:34.592309 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7d0b813-f13f-4ce9-aeea-9c7b4f15691e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e7d0b813-f13f-4ce9-aeea-9c7b4f15691e" (UID: "e7d0b813-f13f-4ce9-aeea-9c7b4f15691e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:39:34 crc kubenswrapper[4826]: I0131 08:39:34.639573 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spchd\" (UniqueName: \"kubernetes.io/projected/e7d0b813-f13f-4ce9-aeea-9c7b4f15691e-kube-api-access-spchd\") on node \"crc\" DevicePath \"\"" Jan 31 08:39:34 crc kubenswrapper[4826]: I0131 08:39:34.639619 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7d0b813-f13f-4ce9-aeea-9c7b4f15691e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 08:39:34 crc kubenswrapper[4826]: I0131 08:39:34.639630 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7d0b813-f13f-4ce9-aeea-9c7b4f15691e-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 08:39:34 crc kubenswrapper[4826]: I0131 08:39:34.963489 4826 generic.go:334] "Generic (PLEG): container finished" podID="e7d0b813-f13f-4ce9-aeea-9c7b4f15691e" containerID="ceb4d27f4dc38100480086c2f83ee98ceb7e7432cd3748c55a45d9abfce5d52c" exitCode=0 Jan 31 08:39:34 crc kubenswrapper[4826]: I0131 08:39:34.963818 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w68mn" event={"ID":"e7d0b813-f13f-4ce9-aeea-9c7b4f15691e","Type":"ContainerDied","Data":"ceb4d27f4dc38100480086c2f83ee98ceb7e7432cd3748c55a45d9abfce5d52c"} Jan 31 08:39:34 crc kubenswrapper[4826]: I0131 08:39:34.963953 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w68mn" Jan 31 08:39:34 crc kubenswrapper[4826]: I0131 08:39:34.963944 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w68mn" event={"ID":"e7d0b813-f13f-4ce9-aeea-9c7b4f15691e","Type":"ContainerDied","Data":"6da5affe0e2f397f60c7ca658f8c88c173a87ed2db0ee05583725c351981acce"} Jan 31 08:39:34 crc kubenswrapper[4826]: I0131 08:39:34.963969 4826 scope.go:117] "RemoveContainer" containerID="ceb4d27f4dc38100480086c2f83ee98ceb7e7432cd3748c55a45d9abfce5d52c" Jan 31 08:39:34 crc kubenswrapper[4826]: I0131 08:39:34.993270 4826 scope.go:117] "RemoveContainer" containerID="cbad1a80706ab9143eb1797eaeeb4f2e22f4890443369a78d48bb75e4158b500" Jan 31 08:39:34 crc kubenswrapper[4826]: I0131 08:39:34.999481 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w68mn"] Jan 31 08:39:35 crc kubenswrapper[4826]: I0131 08:39:35.010400 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-w68mn"] Jan 31 08:39:35 crc kubenswrapper[4826]: I0131 08:39:35.012920 4826 scope.go:117] "RemoveContainer" containerID="7dff9a24086f5d3d08df9caed1bb78d826ac87db791e5653841c8745c0a6fd12" Jan 31 08:39:35 crc kubenswrapper[4826]: I0131 08:39:35.063564 4826 scope.go:117] "RemoveContainer" containerID="ceb4d27f4dc38100480086c2f83ee98ceb7e7432cd3748c55a45d9abfce5d52c" Jan 31 08:39:35 crc kubenswrapper[4826]: E0131 08:39:35.064318 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ceb4d27f4dc38100480086c2f83ee98ceb7e7432cd3748c55a45d9abfce5d52c\": container with ID starting with ceb4d27f4dc38100480086c2f83ee98ceb7e7432cd3748c55a45d9abfce5d52c not found: ID does not exist" containerID="ceb4d27f4dc38100480086c2f83ee98ceb7e7432cd3748c55a45d9abfce5d52c" Jan 31 08:39:35 crc kubenswrapper[4826]: I0131 08:39:35.064358 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ceb4d27f4dc38100480086c2f83ee98ceb7e7432cd3748c55a45d9abfce5d52c"} err="failed to get container status \"ceb4d27f4dc38100480086c2f83ee98ceb7e7432cd3748c55a45d9abfce5d52c\": rpc error: code = NotFound desc = could not find container \"ceb4d27f4dc38100480086c2f83ee98ceb7e7432cd3748c55a45d9abfce5d52c\": container with ID starting with ceb4d27f4dc38100480086c2f83ee98ceb7e7432cd3748c55a45d9abfce5d52c not found: ID does not exist" Jan 31 08:39:35 crc kubenswrapper[4826]: I0131 08:39:35.064393 4826 scope.go:117] "RemoveContainer" containerID="cbad1a80706ab9143eb1797eaeeb4f2e22f4890443369a78d48bb75e4158b500" Jan 31 08:39:35 crc kubenswrapper[4826]: E0131 08:39:35.066584 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbad1a80706ab9143eb1797eaeeb4f2e22f4890443369a78d48bb75e4158b500\": container with ID starting with cbad1a80706ab9143eb1797eaeeb4f2e22f4890443369a78d48bb75e4158b500 not found: ID does not exist" containerID="cbad1a80706ab9143eb1797eaeeb4f2e22f4890443369a78d48bb75e4158b500" Jan 31 08:39:35 crc kubenswrapper[4826]: I0131 08:39:35.066609 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbad1a80706ab9143eb1797eaeeb4f2e22f4890443369a78d48bb75e4158b500"} err="failed to get container status \"cbad1a80706ab9143eb1797eaeeb4f2e22f4890443369a78d48bb75e4158b500\": rpc error: code = NotFound desc = could not find 
container \"cbad1a80706ab9143eb1797eaeeb4f2e22f4890443369a78d48bb75e4158b500\": container with ID starting with cbad1a80706ab9143eb1797eaeeb4f2e22f4890443369a78d48bb75e4158b500 not found: ID does not exist" Jan 31 08:39:35 crc kubenswrapper[4826]: I0131 08:39:35.066627 4826 scope.go:117] "RemoveContainer" containerID="7dff9a24086f5d3d08df9caed1bb78d826ac87db791e5653841c8745c0a6fd12" Jan 31 08:39:35 crc kubenswrapper[4826]: E0131 08:39:35.066879 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7dff9a24086f5d3d08df9caed1bb78d826ac87db791e5653841c8745c0a6fd12\": container with ID starting with 7dff9a24086f5d3d08df9caed1bb78d826ac87db791e5653841c8745c0a6fd12 not found: ID does not exist" containerID="7dff9a24086f5d3d08df9caed1bb78d826ac87db791e5653841c8745c0a6fd12" Jan 31 08:39:35 crc kubenswrapper[4826]: I0131 08:39:35.066913 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7dff9a24086f5d3d08df9caed1bb78d826ac87db791e5653841c8745c0a6fd12"} err="failed to get container status \"7dff9a24086f5d3d08df9caed1bb78d826ac87db791e5653841c8745c0a6fd12\": rpc error: code = NotFound desc = could not find container \"7dff9a24086f5d3d08df9caed1bb78d826ac87db791e5653841c8745c0a6fd12\": container with ID starting with 7dff9a24086f5d3d08df9caed1bb78d826ac87db791e5653841c8745c0a6fd12 not found: ID does not exist" Jan 31 08:39:36 crc kubenswrapper[4826]: I0131 08:39:36.810223 4826 scope.go:117] "RemoveContainer" containerID="e50c5a21de69947e5e4158ecd9eb1c18ce3f33a5d5d37aba1dbb26c3efbfdb2e" Jan 31 08:39:36 crc kubenswrapper[4826]: E0131 08:39:36.810786 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:39:36 crc kubenswrapper[4826]: I0131 08:39:36.824629 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7d0b813-f13f-4ce9-aeea-9c7b4f15691e" path="/var/lib/kubelet/pods/e7d0b813-f13f-4ce9-aeea-9c7b4f15691e/volumes" Jan 31 08:39:47 crc kubenswrapper[4826]: I0131 08:39:47.809033 4826 scope.go:117] "RemoveContainer" containerID="e50c5a21de69947e5e4158ecd9eb1c18ce3f33a5d5d37aba1dbb26c3efbfdb2e" Jan 31 08:39:47 crc kubenswrapper[4826]: E0131 08:39:47.809939 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:39:59 crc kubenswrapper[4826]: I0131 08:39:59.809928 4826 scope.go:117] "RemoveContainer" containerID="e50c5a21de69947e5e4158ecd9eb1c18ce3f33a5d5d37aba1dbb26c3efbfdb2e" Jan 31 08:39:59 crc kubenswrapper[4826]: E0131 08:39:59.811192 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:40:13 crc kubenswrapper[4826]: I0131 08:40:13.809626 4826 scope.go:117] "RemoveContainer" containerID="e50c5a21de69947e5e4158ecd9eb1c18ce3f33a5d5d37aba1dbb26c3efbfdb2e" Jan 31 08:40:13 crc kubenswrapper[4826]: E0131 08:40:13.810701 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:40:24 crc kubenswrapper[4826]: I0131 08:40:24.810108 4826 scope.go:117] "RemoveContainer" containerID="e50c5a21de69947e5e4158ecd9eb1c18ce3f33a5d5d37aba1dbb26c3efbfdb2e" Jan 31 08:40:24 crc kubenswrapper[4826]: E0131 08:40:24.811906 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:40:36 crc kubenswrapper[4826]: I0131 08:40:36.810149 4826 scope.go:117] "RemoveContainer" containerID="e50c5a21de69947e5e4158ecd9eb1c18ce3f33a5d5d37aba1dbb26c3efbfdb2e" Jan 31 08:40:36 crc kubenswrapper[4826]: E0131 08:40:36.812024 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:40:51 crc kubenswrapper[4826]: I0131 08:40:51.809179 4826 scope.go:117] "RemoveContainer" containerID="e50c5a21de69947e5e4158ecd9eb1c18ce3f33a5d5d37aba1dbb26c3efbfdb2e" Jan 31 08:40:51 crc kubenswrapper[4826]: E0131 08:40:51.810118 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:41:03 crc kubenswrapper[4826]: I0131 08:41:03.809576 4826 scope.go:117] "RemoveContainer" containerID="e50c5a21de69947e5e4158ecd9eb1c18ce3f33a5d5d37aba1dbb26c3efbfdb2e" Jan 31 08:41:03 crc kubenswrapper[4826]: E0131 08:41:03.810446 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" 
podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:41:14 crc kubenswrapper[4826]: I0131 08:41:14.808721 4826 scope.go:117] "RemoveContainer" containerID="e50c5a21de69947e5e4158ecd9eb1c18ce3f33a5d5d37aba1dbb26c3efbfdb2e" Jan 31 08:41:14 crc kubenswrapper[4826]: E0131 08:41:14.809515 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:41:15 crc kubenswrapper[4826]: I0131 08:41:15.675218 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-p6dfn"] Jan 31 08:41:15 crc kubenswrapper[4826]: E0131 08:41:15.676160 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7d0b813-f13f-4ce9-aeea-9c7b4f15691e" containerName="extract-utilities" Jan 31 08:41:15 crc kubenswrapper[4826]: I0131 08:41:15.676273 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7d0b813-f13f-4ce9-aeea-9c7b4f15691e" containerName="extract-utilities" Jan 31 08:41:15 crc kubenswrapper[4826]: E0131 08:41:15.676360 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7d0b813-f13f-4ce9-aeea-9c7b4f15691e" containerName="extract-content" Jan 31 08:41:15 crc kubenswrapper[4826]: I0131 08:41:15.676438 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7d0b813-f13f-4ce9-aeea-9c7b4f15691e" containerName="extract-content" Jan 31 08:41:15 crc kubenswrapper[4826]: E0131 08:41:15.676530 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7d0b813-f13f-4ce9-aeea-9c7b4f15691e" containerName="registry-server" Jan 31 08:41:15 crc kubenswrapper[4826]: I0131 08:41:15.676610 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7d0b813-f13f-4ce9-aeea-9c7b4f15691e" containerName="registry-server" Jan 31 08:41:15 crc kubenswrapper[4826]: I0131 08:41:15.676947 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7d0b813-f13f-4ce9-aeea-9c7b4f15691e" containerName="registry-server" Jan 31 08:41:15 crc kubenswrapper[4826]: I0131 08:41:15.678661 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-p6dfn" Jan 31 08:41:15 crc kubenswrapper[4826]: I0131 08:41:15.700938 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p6dfn"] Jan 31 08:41:15 crc kubenswrapper[4826]: I0131 08:41:15.875754 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84tkz\" (UniqueName: \"kubernetes.io/projected/a350d236-c623-4670-b7ef-149f94aa0655-kube-api-access-84tkz\") pod \"redhat-operators-p6dfn\" (UID: \"a350d236-c623-4670-b7ef-149f94aa0655\") " pod="openshift-marketplace/redhat-operators-p6dfn" Jan 31 08:41:15 crc kubenswrapper[4826]: I0131 08:41:15.875861 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a350d236-c623-4670-b7ef-149f94aa0655-utilities\") pod \"redhat-operators-p6dfn\" (UID: \"a350d236-c623-4670-b7ef-149f94aa0655\") " pod="openshift-marketplace/redhat-operators-p6dfn" Jan 31 08:41:15 crc kubenswrapper[4826]: I0131 08:41:15.876003 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a350d236-c623-4670-b7ef-149f94aa0655-catalog-content\") pod \"redhat-operators-p6dfn\" (UID: \"a350d236-c623-4670-b7ef-149f94aa0655\") " pod="openshift-marketplace/redhat-operators-p6dfn" Jan 31 08:41:15 crc kubenswrapper[4826]: I0131 08:41:15.978176 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a350d236-c623-4670-b7ef-149f94aa0655-catalog-content\") pod \"redhat-operators-p6dfn\" (UID: \"a350d236-c623-4670-b7ef-149f94aa0655\") " pod="openshift-marketplace/redhat-operators-p6dfn" Jan 31 08:41:15 crc kubenswrapper[4826]: I0131 08:41:15.978309 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84tkz\" (UniqueName: \"kubernetes.io/projected/a350d236-c623-4670-b7ef-149f94aa0655-kube-api-access-84tkz\") pod \"redhat-operators-p6dfn\" (UID: \"a350d236-c623-4670-b7ef-149f94aa0655\") " pod="openshift-marketplace/redhat-operators-p6dfn" Jan 31 08:41:15 crc kubenswrapper[4826]: I0131 08:41:15.978405 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a350d236-c623-4670-b7ef-149f94aa0655-utilities\") pod \"redhat-operators-p6dfn\" (UID: \"a350d236-c623-4670-b7ef-149f94aa0655\") " pod="openshift-marketplace/redhat-operators-p6dfn" Jan 31 08:41:15 crc kubenswrapper[4826]: I0131 08:41:15.978895 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a350d236-c623-4670-b7ef-149f94aa0655-catalog-content\") pod \"redhat-operators-p6dfn\" (UID: \"a350d236-c623-4670-b7ef-149f94aa0655\") " pod="openshift-marketplace/redhat-operators-p6dfn" Jan 31 08:41:15 crc kubenswrapper[4826]: I0131 08:41:15.978950 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a350d236-c623-4670-b7ef-149f94aa0655-utilities\") pod \"redhat-operators-p6dfn\" (UID: \"a350d236-c623-4670-b7ef-149f94aa0655\") " pod="openshift-marketplace/redhat-operators-p6dfn" Jan 31 08:41:15 crc kubenswrapper[4826]: I0131 08:41:15.998179 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-84tkz\" (UniqueName: \"kubernetes.io/projected/a350d236-c623-4670-b7ef-149f94aa0655-kube-api-access-84tkz\") pod \"redhat-operators-p6dfn\" (UID: \"a350d236-c623-4670-b7ef-149f94aa0655\") " pod="openshift-marketplace/redhat-operators-p6dfn" Jan 31 08:41:16 crc kubenswrapper[4826]: I0131 08:41:16.297415 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p6dfn" Jan 31 08:41:16 crc kubenswrapper[4826]: I0131 08:41:16.787353 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p6dfn"] Jan 31 08:41:16 crc kubenswrapper[4826]: I0131 08:41:16.961232 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p6dfn" event={"ID":"a350d236-c623-4670-b7ef-149f94aa0655","Type":"ContainerStarted","Data":"28457eb0c5bececca7ad865b99e223f361d094ccaeb80c7625f47ed90c23b065"} Jan 31 08:41:17 crc kubenswrapper[4826]: I0131 08:41:17.978946 4826 generic.go:334] "Generic (PLEG): container finished" podID="a350d236-c623-4670-b7ef-149f94aa0655" containerID="af9a56278f2438c4a229400e41e341c2b493f4c7907e07a4d7fd329a0d30babf" exitCode=0 Jan 31 08:41:17 crc kubenswrapper[4826]: I0131 08:41:17.979260 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p6dfn" event={"ID":"a350d236-c623-4670-b7ef-149f94aa0655","Type":"ContainerDied","Data":"af9a56278f2438c4a229400e41e341c2b493f4c7907e07a4d7fd329a0d30babf"} Jan 31 08:41:20 crc kubenswrapper[4826]: I0131 08:41:20.026627 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p6dfn" event={"ID":"a350d236-c623-4670-b7ef-149f94aa0655","Type":"ContainerStarted","Data":"61eba656fff936bb69e6a7b524fbd0321a3f7724743c038f34d2f4be3b950726"} Jan 31 08:41:20 crc kubenswrapper[4826]: E0131 08:41:20.475520 4826 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda350d236_c623_4670_b7ef_149f94aa0655.slice/crio-61eba656fff936bb69e6a7b524fbd0321a3f7724743c038f34d2f4be3b950726.scope\": RecentStats: unable to find data in memory cache]" Jan 31 08:41:21 crc kubenswrapper[4826]: I0131 08:41:21.040318 4826 generic.go:334] "Generic (PLEG): container finished" podID="a350d236-c623-4670-b7ef-149f94aa0655" containerID="61eba656fff936bb69e6a7b524fbd0321a3f7724743c038f34d2f4be3b950726" exitCode=0 Jan 31 08:41:21 crc kubenswrapper[4826]: I0131 08:41:21.040390 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p6dfn" event={"ID":"a350d236-c623-4670-b7ef-149f94aa0655","Type":"ContainerDied","Data":"61eba656fff936bb69e6a7b524fbd0321a3f7724743c038f34d2f4be3b950726"} Jan 31 08:41:22 crc kubenswrapper[4826]: I0131 08:41:22.057662 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p6dfn" event={"ID":"a350d236-c623-4670-b7ef-149f94aa0655","Type":"ContainerStarted","Data":"8a71ea4567f878da43bef7c2407425103ca09d2c23d1b59be8fcf228e3f0481d"} Jan 31 08:41:23 crc kubenswrapper[4826]: I0131 08:41:23.082468 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-p6dfn" podStartSLOduration=4.432393262 podStartE2EDuration="8.082444112s" podCreationTimestamp="2026-01-31 08:41:15 +0000 UTC" firstStartedPulling="2026-01-31 08:41:17.981523127 +0000 UTC m=+3909.835409486" 
lastFinishedPulling="2026-01-31 08:41:21.631573977 +0000 UTC m=+3913.485460336" observedRunningTime="2026-01-31 08:41:23.080453085 +0000 UTC m=+3914.934339454" watchObservedRunningTime="2026-01-31 08:41:23.082444112 +0000 UTC m=+3914.936330471" Jan 31 08:41:25 crc kubenswrapper[4826]: I0131 08:41:25.810477 4826 scope.go:117] "RemoveContainer" containerID="e50c5a21de69947e5e4158ecd9eb1c18ce3f33a5d5d37aba1dbb26c3efbfdb2e" Jan 31 08:41:25 crc kubenswrapper[4826]: E0131 08:41:25.811799 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:41:26 crc kubenswrapper[4826]: I0131 08:41:26.298148 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-p6dfn" Jan 31 08:41:26 crc kubenswrapper[4826]: I0131 08:41:26.298222 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-p6dfn" Jan 31 08:41:27 crc kubenswrapper[4826]: I0131 08:41:27.350413 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p6dfn" podUID="a350d236-c623-4670-b7ef-149f94aa0655" containerName="registry-server" probeResult="failure" output=< Jan 31 08:41:27 crc kubenswrapper[4826]: timeout: failed to connect service ":50051" within 1s Jan 31 08:41:27 crc kubenswrapper[4826]: > Jan 31 08:41:36 crc kubenswrapper[4826]: I0131 08:41:36.344888 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-p6dfn" Jan 31 08:41:36 crc kubenswrapper[4826]: I0131 08:41:36.405208 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-p6dfn" Jan 31 08:41:36 crc kubenswrapper[4826]: I0131 08:41:36.580471 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p6dfn"] Jan 31 08:41:38 crc kubenswrapper[4826]: I0131 08:41:38.200244 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-p6dfn" podUID="a350d236-c623-4670-b7ef-149f94aa0655" containerName="registry-server" containerID="cri-o://8a71ea4567f878da43bef7c2407425103ca09d2c23d1b59be8fcf228e3f0481d" gracePeriod=2 Jan 31 08:41:39 crc kubenswrapper[4826]: I0131 08:41:39.211880 4826 generic.go:334] "Generic (PLEG): container finished" podID="a350d236-c623-4670-b7ef-149f94aa0655" containerID="8a71ea4567f878da43bef7c2407425103ca09d2c23d1b59be8fcf228e3f0481d" exitCode=0 Jan 31 08:41:39 crc kubenswrapper[4826]: I0131 08:41:39.211911 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p6dfn" event={"ID":"a350d236-c623-4670-b7ef-149f94aa0655","Type":"ContainerDied","Data":"8a71ea4567f878da43bef7c2407425103ca09d2c23d1b59be8fcf228e3f0481d"} Jan 31 08:41:39 crc kubenswrapper[4826]: I0131 08:41:39.212577 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p6dfn" event={"ID":"a350d236-c623-4670-b7ef-149f94aa0655","Type":"ContainerDied","Data":"28457eb0c5bececca7ad865b99e223f361d094ccaeb80c7625f47ed90c23b065"} Jan 31 08:41:39 crc kubenswrapper[4826]: 
I0131 08:41:39.212598 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28457eb0c5bececca7ad865b99e223f361d094ccaeb80c7625f47ed90c23b065" Jan 31 08:41:39 crc kubenswrapper[4826]: I0131 08:41:39.271395 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p6dfn" Jan 31 08:41:39 crc kubenswrapper[4826]: I0131 08:41:39.392295 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a350d236-c623-4670-b7ef-149f94aa0655-catalog-content\") pod \"a350d236-c623-4670-b7ef-149f94aa0655\" (UID: \"a350d236-c623-4670-b7ef-149f94aa0655\") " Jan 31 08:41:39 crc kubenswrapper[4826]: I0131 08:41:39.392460 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a350d236-c623-4670-b7ef-149f94aa0655-utilities\") pod \"a350d236-c623-4670-b7ef-149f94aa0655\" (UID: \"a350d236-c623-4670-b7ef-149f94aa0655\") " Jan 31 08:41:39 crc kubenswrapper[4826]: I0131 08:41:39.392707 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84tkz\" (UniqueName: \"kubernetes.io/projected/a350d236-c623-4670-b7ef-149f94aa0655-kube-api-access-84tkz\") pod \"a350d236-c623-4670-b7ef-149f94aa0655\" (UID: \"a350d236-c623-4670-b7ef-149f94aa0655\") " Jan 31 08:41:39 crc kubenswrapper[4826]: I0131 08:41:39.396235 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a350d236-c623-4670-b7ef-149f94aa0655-utilities" (OuterVolumeSpecName: "utilities") pod "a350d236-c623-4670-b7ef-149f94aa0655" (UID: "a350d236-c623-4670-b7ef-149f94aa0655"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:41:39 crc kubenswrapper[4826]: I0131 08:41:39.401379 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a350d236-c623-4670-b7ef-149f94aa0655-kube-api-access-84tkz" (OuterVolumeSpecName: "kube-api-access-84tkz") pod "a350d236-c623-4670-b7ef-149f94aa0655" (UID: "a350d236-c623-4670-b7ef-149f94aa0655"). InnerVolumeSpecName "kube-api-access-84tkz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:41:39 crc kubenswrapper[4826]: I0131 08:41:39.494864 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84tkz\" (UniqueName: \"kubernetes.io/projected/a350d236-c623-4670-b7ef-149f94aa0655-kube-api-access-84tkz\") on node \"crc\" DevicePath \"\"" Jan 31 08:41:39 crc kubenswrapper[4826]: I0131 08:41:39.495227 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a350d236-c623-4670-b7ef-149f94aa0655-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 08:41:39 crc kubenswrapper[4826]: I0131 08:41:39.544416 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a350d236-c623-4670-b7ef-149f94aa0655-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a350d236-c623-4670-b7ef-149f94aa0655" (UID: "a350d236-c623-4670-b7ef-149f94aa0655"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:41:39 crc kubenswrapper[4826]: I0131 08:41:39.597621 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a350d236-c623-4670-b7ef-149f94aa0655-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 08:41:39 crc kubenswrapper[4826]: I0131 08:41:39.809226 4826 scope.go:117] "RemoveContainer" containerID="e50c5a21de69947e5e4158ecd9eb1c18ce3f33a5d5d37aba1dbb26c3efbfdb2e" Jan 31 08:41:39 crc kubenswrapper[4826]: E0131 08:41:39.809691 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:41:40 crc kubenswrapper[4826]: I0131 08:41:40.221577 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p6dfn" Jan 31 08:41:40 crc kubenswrapper[4826]: I0131 08:41:40.265463 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p6dfn"] Jan 31 08:41:40 crc kubenswrapper[4826]: I0131 08:41:40.275060 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-p6dfn"] Jan 31 08:41:40 crc kubenswrapper[4826]: I0131 08:41:40.822857 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a350d236-c623-4670-b7ef-149f94aa0655" path="/var/lib/kubelet/pods/a350d236-c623-4670-b7ef-149f94aa0655/volumes" Jan 31 08:41:47 crc kubenswrapper[4826]: I0131 08:41:47.847474 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w7xkz"] Jan 31 08:41:47 crc kubenswrapper[4826]: E0131 08:41:47.850671 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a350d236-c623-4670-b7ef-149f94aa0655" containerName="extract-utilities" Jan 31 08:41:47 crc kubenswrapper[4826]: I0131 08:41:47.850794 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="a350d236-c623-4670-b7ef-149f94aa0655" containerName="extract-utilities" Jan 31 08:41:47 crc kubenswrapper[4826]: E0131 08:41:47.850876 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a350d236-c623-4670-b7ef-149f94aa0655" containerName="extract-content" Jan 31 08:41:47 crc kubenswrapper[4826]: I0131 08:41:47.850943 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="a350d236-c623-4670-b7ef-149f94aa0655" containerName="extract-content" Jan 31 08:41:47 crc kubenswrapper[4826]: E0131 08:41:47.851038 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a350d236-c623-4670-b7ef-149f94aa0655" containerName="registry-server" Jan 31 08:41:47 crc kubenswrapper[4826]: I0131 08:41:47.851106 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="a350d236-c623-4670-b7ef-149f94aa0655" containerName="registry-server" Jan 31 08:41:47 crc kubenswrapper[4826]: I0131 08:41:47.851421 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="a350d236-c623-4670-b7ef-149f94aa0655" containerName="registry-server" Jan 31 08:41:47 crc kubenswrapper[4826]: I0131 08:41:47.853178 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w7xkz" Jan 31 08:41:47 crc kubenswrapper[4826]: I0131 08:41:47.864867 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w7xkz"] Jan 31 08:41:47 crc kubenswrapper[4826]: I0131 08:41:47.867107 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7247db65-8cc1-4f51-b9b6-543a177b945b-utilities\") pod \"redhat-marketplace-w7xkz\" (UID: \"7247db65-8cc1-4f51-b9b6-543a177b945b\") " pod="openshift-marketplace/redhat-marketplace-w7xkz" Jan 31 08:41:47 crc kubenswrapper[4826]: I0131 08:41:47.867363 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgljf\" (UniqueName: \"kubernetes.io/projected/7247db65-8cc1-4f51-b9b6-543a177b945b-kube-api-access-xgljf\") pod \"redhat-marketplace-w7xkz\" (UID: \"7247db65-8cc1-4f51-b9b6-543a177b945b\") " pod="openshift-marketplace/redhat-marketplace-w7xkz" Jan 31 08:41:47 crc kubenswrapper[4826]: I0131 08:41:47.867393 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7247db65-8cc1-4f51-b9b6-543a177b945b-catalog-content\") pod \"redhat-marketplace-w7xkz\" (UID: \"7247db65-8cc1-4f51-b9b6-543a177b945b\") " pod="openshift-marketplace/redhat-marketplace-w7xkz" Jan 31 08:41:47 crc kubenswrapper[4826]: I0131 08:41:47.969087 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7247db65-8cc1-4f51-b9b6-543a177b945b-utilities\") pod \"redhat-marketplace-w7xkz\" (UID: \"7247db65-8cc1-4f51-b9b6-543a177b945b\") " pod="openshift-marketplace/redhat-marketplace-w7xkz" Jan 31 08:41:47 crc kubenswrapper[4826]: I0131 08:41:47.969184 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgljf\" (UniqueName: \"kubernetes.io/projected/7247db65-8cc1-4f51-b9b6-543a177b945b-kube-api-access-xgljf\") pod \"redhat-marketplace-w7xkz\" (UID: \"7247db65-8cc1-4f51-b9b6-543a177b945b\") " pod="openshift-marketplace/redhat-marketplace-w7xkz" Jan 31 08:41:47 crc kubenswrapper[4826]: I0131 08:41:47.969206 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7247db65-8cc1-4f51-b9b6-543a177b945b-catalog-content\") pod \"redhat-marketplace-w7xkz\" (UID: \"7247db65-8cc1-4f51-b9b6-543a177b945b\") " pod="openshift-marketplace/redhat-marketplace-w7xkz" Jan 31 08:41:47 crc kubenswrapper[4826]: I0131 08:41:47.969641 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7247db65-8cc1-4f51-b9b6-543a177b945b-catalog-content\") pod \"redhat-marketplace-w7xkz\" (UID: \"7247db65-8cc1-4f51-b9b6-543a177b945b\") " pod="openshift-marketplace/redhat-marketplace-w7xkz" Jan 31 08:41:47 crc kubenswrapper[4826]: I0131 08:41:47.970043 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7247db65-8cc1-4f51-b9b6-543a177b945b-utilities\") pod \"redhat-marketplace-w7xkz\" (UID: \"7247db65-8cc1-4f51-b9b6-543a177b945b\") " pod="openshift-marketplace/redhat-marketplace-w7xkz" Jan 31 08:41:47 crc kubenswrapper[4826]: I0131 08:41:47.990949 4826 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-xgljf\" (UniqueName: \"kubernetes.io/projected/7247db65-8cc1-4f51-b9b6-543a177b945b-kube-api-access-xgljf\") pod \"redhat-marketplace-w7xkz\" (UID: \"7247db65-8cc1-4f51-b9b6-543a177b945b\") " pod="openshift-marketplace/redhat-marketplace-w7xkz" Jan 31 08:41:48 crc kubenswrapper[4826]: I0131 08:41:48.199678 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w7xkz" Jan 31 08:41:48 crc kubenswrapper[4826]: I0131 08:41:48.899297 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w7xkz"] Jan 31 08:41:49 crc kubenswrapper[4826]: I0131 08:41:49.316341 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7xkz" event={"ID":"7247db65-8cc1-4f51-b9b6-543a177b945b","Type":"ContainerStarted","Data":"2eb6a03625fa9fd63fa022cdf5e29595b6fc14a79086a4cd74de9dfcebe00e15"} Jan 31 08:41:50 crc kubenswrapper[4826]: I0131 08:41:50.329111 4826 generic.go:334] "Generic (PLEG): container finished" podID="7247db65-8cc1-4f51-b9b6-543a177b945b" containerID="3a1a39a807e1d0fbafb824958b51bb62e14f01e139304650ad5d6e43c22adac0" exitCode=0 Jan 31 08:41:50 crc kubenswrapper[4826]: I0131 08:41:50.329224 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7xkz" event={"ID":"7247db65-8cc1-4f51-b9b6-543a177b945b","Type":"ContainerDied","Data":"3a1a39a807e1d0fbafb824958b51bb62e14f01e139304650ad5d6e43c22adac0"} Jan 31 08:41:51 crc kubenswrapper[4826]: I0131 08:41:51.341034 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7xkz" event={"ID":"7247db65-8cc1-4f51-b9b6-543a177b945b","Type":"ContainerStarted","Data":"bbd6f759d14612b7a90fef99ace75f36237cf452f497f01be6f79960033fa4c0"} Jan 31 08:41:52 crc kubenswrapper[4826]: I0131 08:41:52.350331 4826 generic.go:334] "Generic (PLEG): container finished" podID="7247db65-8cc1-4f51-b9b6-543a177b945b" containerID="bbd6f759d14612b7a90fef99ace75f36237cf452f497f01be6f79960033fa4c0" exitCode=0 Jan 31 08:41:52 crc kubenswrapper[4826]: I0131 08:41:52.350389 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7xkz" event={"ID":"7247db65-8cc1-4f51-b9b6-543a177b945b","Type":"ContainerDied","Data":"bbd6f759d14612b7a90fef99ace75f36237cf452f497f01be6f79960033fa4c0"} Jan 31 08:41:52 crc kubenswrapper[4826]: I0131 08:41:52.810185 4826 scope.go:117] "RemoveContainer" containerID="e50c5a21de69947e5e4158ecd9eb1c18ce3f33a5d5d37aba1dbb26c3efbfdb2e" Jan 31 08:41:52 crc kubenswrapper[4826]: E0131 08:41:52.810665 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:41:53 crc kubenswrapper[4826]: I0131 08:41:53.361652 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7xkz" event={"ID":"7247db65-8cc1-4f51-b9b6-543a177b945b","Type":"ContainerStarted","Data":"8ce05efd18b67c98ee61d9a894ec5b462dd3b1ad7978e6b90fb4a7ca37923878"} Jan 31 08:41:53 crc kubenswrapper[4826]: I0131 08:41:53.396073 4826 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-marketplace/redhat-marketplace-w7xkz" podStartSLOduration=3.95441283 podStartE2EDuration="6.396047714s" podCreationTimestamp="2026-01-31 08:41:47 +0000 UTC" firstStartedPulling="2026-01-31 08:41:50.332641137 +0000 UTC m=+3942.186527496" lastFinishedPulling="2026-01-31 08:41:52.774276021 +0000 UTC m=+3944.628162380" observedRunningTime="2026-01-31 08:41:53.381453601 +0000 UTC m=+3945.235339980" watchObservedRunningTime="2026-01-31 08:41:53.396047714 +0000 UTC m=+3945.249934093" Jan 31 08:41:58 crc kubenswrapper[4826]: I0131 08:41:58.199932 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-w7xkz" Jan 31 08:41:58 crc kubenswrapper[4826]: I0131 08:41:58.200809 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-w7xkz" Jan 31 08:41:58 crc kubenswrapper[4826]: I0131 08:41:58.262903 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-w7xkz" Jan 31 08:41:58 crc kubenswrapper[4826]: I0131 08:41:58.475626 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-w7xkz" Jan 31 08:41:58 crc kubenswrapper[4826]: I0131 08:41:58.567736 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w7xkz"] Jan 31 08:42:00 crc kubenswrapper[4826]: I0131 08:42:00.427924 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-w7xkz" podUID="7247db65-8cc1-4f51-b9b6-543a177b945b" containerName="registry-server" containerID="cri-o://8ce05efd18b67c98ee61d9a894ec5b462dd3b1ad7978e6b90fb4a7ca37923878" gracePeriod=2 Jan 31 08:42:00 crc kubenswrapper[4826]: I0131 08:42:00.946421 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w7xkz" Jan 31 08:42:01 crc kubenswrapper[4826]: I0131 08:42:01.063675 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgljf\" (UniqueName: \"kubernetes.io/projected/7247db65-8cc1-4f51-b9b6-543a177b945b-kube-api-access-xgljf\") pod \"7247db65-8cc1-4f51-b9b6-543a177b945b\" (UID: \"7247db65-8cc1-4f51-b9b6-543a177b945b\") " Jan 31 08:42:01 crc kubenswrapper[4826]: I0131 08:42:01.063796 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7247db65-8cc1-4f51-b9b6-543a177b945b-catalog-content\") pod \"7247db65-8cc1-4f51-b9b6-543a177b945b\" (UID: \"7247db65-8cc1-4f51-b9b6-543a177b945b\") " Jan 31 08:42:01 crc kubenswrapper[4826]: I0131 08:42:01.063860 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7247db65-8cc1-4f51-b9b6-543a177b945b-utilities\") pod \"7247db65-8cc1-4f51-b9b6-543a177b945b\" (UID: \"7247db65-8cc1-4f51-b9b6-543a177b945b\") " Jan 31 08:42:01 crc kubenswrapper[4826]: I0131 08:42:01.065324 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7247db65-8cc1-4f51-b9b6-543a177b945b-utilities" (OuterVolumeSpecName: "utilities") pod "7247db65-8cc1-4f51-b9b6-543a177b945b" (UID: "7247db65-8cc1-4f51-b9b6-543a177b945b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:42:01 crc kubenswrapper[4826]: I0131 08:42:01.071134 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7247db65-8cc1-4f51-b9b6-543a177b945b-kube-api-access-xgljf" (OuterVolumeSpecName: "kube-api-access-xgljf") pod "7247db65-8cc1-4f51-b9b6-543a177b945b" (UID: "7247db65-8cc1-4f51-b9b6-543a177b945b"). InnerVolumeSpecName "kube-api-access-xgljf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:42:01 crc kubenswrapper[4826]: I0131 08:42:01.098613 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7247db65-8cc1-4f51-b9b6-543a177b945b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7247db65-8cc1-4f51-b9b6-543a177b945b" (UID: "7247db65-8cc1-4f51-b9b6-543a177b945b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:42:01 crc kubenswrapper[4826]: I0131 08:42:01.166899 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7247db65-8cc1-4f51-b9b6-543a177b945b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 08:42:01 crc kubenswrapper[4826]: I0131 08:42:01.166929 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7247db65-8cc1-4f51-b9b6-543a177b945b-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 08:42:01 crc kubenswrapper[4826]: I0131 08:42:01.166939 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgljf\" (UniqueName: \"kubernetes.io/projected/7247db65-8cc1-4f51-b9b6-543a177b945b-kube-api-access-xgljf\") on node \"crc\" DevicePath \"\"" Jan 31 08:42:01 crc kubenswrapper[4826]: I0131 08:42:01.438737 4826 generic.go:334] "Generic (PLEG): container finished" podID="7247db65-8cc1-4f51-b9b6-543a177b945b" containerID="8ce05efd18b67c98ee61d9a894ec5b462dd3b1ad7978e6b90fb4a7ca37923878" exitCode=0 Jan 31 08:42:01 crc kubenswrapper[4826]: I0131 08:42:01.438786 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7xkz" event={"ID":"7247db65-8cc1-4f51-b9b6-543a177b945b","Type":"ContainerDied","Data":"8ce05efd18b67c98ee61d9a894ec5b462dd3b1ad7978e6b90fb4a7ca37923878"} Jan 31 08:42:01 crc kubenswrapper[4826]: I0131 08:42:01.438815 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7xkz" event={"ID":"7247db65-8cc1-4f51-b9b6-543a177b945b","Type":"ContainerDied","Data":"2eb6a03625fa9fd63fa022cdf5e29595b6fc14a79086a4cd74de9dfcebe00e15"} Jan 31 08:42:01 crc kubenswrapper[4826]: I0131 08:42:01.438810 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w7xkz" Jan 31 08:42:01 crc kubenswrapper[4826]: I0131 08:42:01.438832 4826 scope.go:117] "RemoveContainer" containerID="8ce05efd18b67c98ee61d9a894ec5b462dd3b1ad7978e6b90fb4a7ca37923878" Jan 31 08:42:01 crc kubenswrapper[4826]: I0131 08:42:01.468290 4826 scope.go:117] "RemoveContainer" containerID="bbd6f759d14612b7a90fef99ace75f36237cf452f497f01be6f79960033fa4c0" Jan 31 08:42:01 crc kubenswrapper[4826]: I0131 08:42:01.482862 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w7xkz"] Jan 31 08:42:01 crc kubenswrapper[4826]: I0131 08:42:01.496121 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-w7xkz"] Jan 31 08:42:01 crc kubenswrapper[4826]: I0131 08:42:01.506027 4826 scope.go:117] "RemoveContainer" containerID="3a1a39a807e1d0fbafb824958b51bb62e14f01e139304650ad5d6e43c22adac0" Jan 31 08:42:01 crc kubenswrapper[4826]: I0131 08:42:01.534037 4826 scope.go:117] "RemoveContainer" containerID="8ce05efd18b67c98ee61d9a894ec5b462dd3b1ad7978e6b90fb4a7ca37923878" Jan 31 08:42:01 crc kubenswrapper[4826]: E0131 08:42:01.535360 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ce05efd18b67c98ee61d9a894ec5b462dd3b1ad7978e6b90fb4a7ca37923878\": container with ID starting with 8ce05efd18b67c98ee61d9a894ec5b462dd3b1ad7978e6b90fb4a7ca37923878 not found: ID does not exist" containerID="8ce05efd18b67c98ee61d9a894ec5b462dd3b1ad7978e6b90fb4a7ca37923878" Jan 31 08:42:01 crc kubenswrapper[4826]: I0131 08:42:01.535410 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ce05efd18b67c98ee61d9a894ec5b462dd3b1ad7978e6b90fb4a7ca37923878"} err="failed to get container status \"8ce05efd18b67c98ee61d9a894ec5b462dd3b1ad7978e6b90fb4a7ca37923878\": rpc error: code = NotFound desc = could not find container \"8ce05efd18b67c98ee61d9a894ec5b462dd3b1ad7978e6b90fb4a7ca37923878\": container with ID starting with 8ce05efd18b67c98ee61d9a894ec5b462dd3b1ad7978e6b90fb4a7ca37923878 not found: ID does not exist" Jan 31 08:42:01 crc kubenswrapper[4826]: I0131 08:42:01.535438 4826 scope.go:117] "RemoveContainer" containerID="bbd6f759d14612b7a90fef99ace75f36237cf452f497f01be6f79960033fa4c0" Jan 31 08:42:01 crc kubenswrapper[4826]: E0131 08:42:01.536332 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbd6f759d14612b7a90fef99ace75f36237cf452f497f01be6f79960033fa4c0\": container with ID starting with bbd6f759d14612b7a90fef99ace75f36237cf452f497f01be6f79960033fa4c0 not found: ID does not exist" containerID="bbd6f759d14612b7a90fef99ace75f36237cf452f497f01be6f79960033fa4c0" Jan 31 08:42:01 crc kubenswrapper[4826]: I0131 08:42:01.536399 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbd6f759d14612b7a90fef99ace75f36237cf452f497f01be6f79960033fa4c0"} err="failed to get container status \"bbd6f759d14612b7a90fef99ace75f36237cf452f497f01be6f79960033fa4c0\": rpc error: code = NotFound desc = could not find container \"bbd6f759d14612b7a90fef99ace75f36237cf452f497f01be6f79960033fa4c0\": container with ID starting with bbd6f759d14612b7a90fef99ace75f36237cf452f497f01be6f79960033fa4c0 not found: ID does not exist" Jan 31 08:42:01 crc kubenswrapper[4826]: I0131 08:42:01.536444 4826 scope.go:117] "RemoveContainer" 
containerID="3a1a39a807e1d0fbafb824958b51bb62e14f01e139304650ad5d6e43c22adac0" Jan 31 08:42:01 crc kubenswrapper[4826]: E0131 08:42:01.536799 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a1a39a807e1d0fbafb824958b51bb62e14f01e139304650ad5d6e43c22adac0\": container with ID starting with 3a1a39a807e1d0fbafb824958b51bb62e14f01e139304650ad5d6e43c22adac0 not found: ID does not exist" containerID="3a1a39a807e1d0fbafb824958b51bb62e14f01e139304650ad5d6e43c22adac0" Jan 31 08:42:01 crc kubenswrapper[4826]: I0131 08:42:01.536834 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a1a39a807e1d0fbafb824958b51bb62e14f01e139304650ad5d6e43c22adac0"} err="failed to get container status \"3a1a39a807e1d0fbafb824958b51bb62e14f01e139304650ad5d6e43c22adac0\": rpc error: code = NotFound desc = could not find container \"3a1a39a807e1d0fbafb824958b51bb62e14f01e139304650ad5d6e43c22adac0\": container with ID starting with 3a1a39a807e1d0fbafb824958b51bb62e14f01e139304650ad5d6e43c22adac0 not found: ID does not exist" Jan 31 08:42:02 crc kubenswrapper[4826]: I0131 08:42:02.822322 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7247db65-8cc1-4f51-b9b6-543a177b945b" path="/var/lib/kubelet/pods/7247db65-8cc1-4f51-b9b6-543a177b945b/volumes" Jan 31 08:42:05 crc kubenswrapper[4826]: I0131 08:42:05.809296 4826 scope.go:117] "RemoveContainer" containerID="e50c5a21de69947e5e4158ecd9eb1c18ce3f33a5d5d37aba1dbb26c3efbfdb2e" Jan 31 08:42:05 crc kubenswrapper[4826]: E0131 08:42:05.810147 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:42:19 crc kubenswrapper[4826]: I0131 08:42:19.808632 4826 scope.go:117] "RemoveContainer" containerID="e50c5a21de69947e5e4158ecd9eb1c18ce3f33a5d5d37aba1dbb26c3efbfdb2e" Jan 31 08:42:19 crc kubenswrapper[4826]: E0131 08:42:19.809383 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:42:30 crc kubenswrapper[4826]: I0131 08:42:30.810213 4826 scope.go:117] "RemoveContainer" containerID="e50c5a21de69947e5e4158ecd9eb1c18ce3f33a5d5d37aba1dbb26c3efbfdb2e" Jan 31 08:42:30 crc kubenswrapper[4826]: E0131 08:42:30.811181 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:42:32 crc kubenswrapper[4826]: I0131 08:42:32.828151 4826 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-fmstg"] Jan 31 08:42:32 crc kubenswrapper[4826]: E0131 08:42:32.829384 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7247db65-8cc1-4f51-b9b6-543a177b945b" containerName="extract-utilities" Jan 31 08:42:32 crc kubenswrapper[4826]: I0131 08:42:32.829403 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="7247db65-8cc1-4f51-b9b6-543a177b945b" containerName="extract-utilities" Jan 31 08:42:32 crc kubenswrapper[4826]: E0131 08:42:32.829420 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7247db65-8cc1-4f51-b9b6-543a177b945b" containerName="extract-content" Jan 31 08:42:32 crc kubenswrapper[4826]: I0131 08:42:32.829428 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="7247db65-8cc1-4f51-b9b6-543a177b945b" containerName="extract-content" Jan 31 08:42:32 crc kubenswrapper[4826]: E0131 08:42:32.829445 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7247db65-8cc1-4f51-b9b6-543a177b945b" containerName="registry-server" Jan 31 08:42:32 crc kubenswrapper[4826]: I0131 08:42:32.829453 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="7247db65-8cc1-4f51-b9b6-543a177b945b" containerName="registry-server" Jan 31 08:42:32 crc kubenswrapper[4826]: I0131 08:42:32.829698 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="7247db65-8cc1-4f51-b9b6-543a177b945b" containerName="registry-server" Jan 31 08:42:32 crc kubenswrapper[4826]: I0131 08:42:32.831590 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fmstg" Jan 31 08:42:32 crc kubenswrapper[4826]: I0131 08:42:32.833716 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fmstg"] Jan 31 08:42:32 crc kubenswrapper[4826]: I0131 08:42:32.879232 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlz96\" (UniqueName: \"kubernetes.io/projected/60c9a0cf-16a9-4bb0-8ceb-84e54687c17d-kube-api-access-vlz96\") pod \"community-operators-fmstg\" (UID: \"60c9a0cf-16a9-4bb0-8ceb-84e54687c17d\") " pod="openshift-marketplace/community-operators-fmstg" Jan 31 08:42:32 crc kubenswrapper[4826]: I0131 08:42:32.879294 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60c9a0cf-16a9-4bb0-8ceb-84e54687c17d-catalog-content\") pod \"community-operators-fmstg\" (UID: \"60c9a0cf-16a9-4bb0-8ceb-84e54687c17d\") " pod="openshift-marketplace/community-operators-fmstg" Jan 31 08:42:32 crc kubenswrapper[4826]: I0131 08:42:32.879495 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60c9a0cf-16a9-4bb0-8ceb-84e54687c17d-utilities\") pod \"community-operators-fmstg\" (UID: \"60c9a0cf-16a9-4bb0-8ceb-84e54687c17d\") " pod="openshift-marketplace/community-operators-fmstg" Jan 31 08:42:32 crc kubenswrapper[4826]: I0131 08:42:32.981736 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlz96\" (UniqueName: \"kubernetes.io/projected/60c9a0cf-16a9-4bb0-8ceb-84e54687c17d-kube-api-access-vlz96\") pod \"community-operators-fmstg\" (UID: \"60c9a0cf-16a9-4bb0-8ceb-84e54687c17d\") " pod="openshift-marketplace/community-operators-fmstg" Jan 31 08:42:32 crc kubenswrapper[4826]: I0131 08:42:32.981837 4826 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60c9a0cf-16a9-4bb0-8ceb-84e54687c17d-catalog-content\") pod \"community-operators-fmstg\" (UID: \"60c9a0cf-16a9-4bb0-8ceb-84e54687c17d\") " pod="openshift-marketplace/community-operators-fmstg" Jan 31 08:42:32 crc kubenswrapper[4826]: I0131 08:42:32.982386 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60c9a0cf-16a9-4bb0-8ceb-84e54687c17d-catalog-content\") pod \"community-operators-fmstg\" (UID: \"60c9a0cf-16a9-4bb0-8ceb-84e54687c17d\") " pod="openshift-marketplace/community-operators-fmstg" Jan 31 08:42:32 crc kubenswrapper[4826]: I0131 08:42:32.982643 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60c9a0cf-16a9-4bb0-8ceb-84e54687c17d-utilities\") pod \"community-operators-fmstg\" (UID: \"60c9a0cf-16a9-4bb0-8ceb-84e54687c17d\") " pod="openshift-marketplace/community-operators-fmstg" Jan 31 08:42:32 crc kubenswrapper[4826]: I0131 08:42:32.983082 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60c9a0cf-16a9-4bb0-8ceb-84e54687c17d-utilities\") pod \"community-operators-fmstg\" (UID: \"60c9a0cf-16a9-4bb0-8ceb-84e54687c17d\") " pod="openshift-marketplace/community-operators-fmstg" Jan 31 08:42:33 crc kubenswrapper[4826]: I0131 08:42:33.016163 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlz96\" (UniqueName: \"kubernetes.io/projected/60c9a0cf-16a9-4bb0-8ceb-84e54687c17d-kube-api-access-vlz96\") pod \"community-operators-fmstg\" (UID: \"60c9a0cf-16a9-4bb0-8ceb-84e54687c17d\") " pod="openshift-marketplace/community-operators-fmstg" Jan 31 08:42:33 crc kubenswrapper[4826]: I0131 08:42:33.161880 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fmstg" Jan 31 08:42:33 crc kubenswrapper[4826]: I0131 08:42:33.763613 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fmstg"] Jan 31 08:42:34 crc kubenswrapper[4826]: I0131 08:42:34.729481 4826 generic.go:334] "Generic (PLEG): container finished" podID="60c9a0cf-16a9-4bb0-8ceb-84e54687c17d" containerID="e067dedbb8dde2653be20b6692eff2a6d30a53e51d477df288593b754d72ab68" exitCode=0 Jan 31 08:42:34 crc kubenswrapper[4826]: I0131 08:42:34.729807 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fmstg" event={"ID":"60c9a0cf-16a9-4bb0-8ceb-84e54687c17d","Type":"ContainerDied","Data":"e067dedbb8dde2653be20b6692eff2a6d30a53e51d477df288593b754d72ab68"} Jan 31 08:42:34 crc kubenswrapper[4826]: I0131 08:42:34.729842 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fmstg" event={"ID":"60c9a0cf-16a9-4bb0-8ceb-84e54687c17d","Type":"ContainerStarted","Data":"e5b8586ad0aff35933c68a2000bb61c8c75f475d917b000d7d196dcaa4a3c65a"} Jan 31 08:42:35 crc kubenswrapper[4826]: I0131 08:42:35.742910 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fmstg" event={"ID":"60c9a0cf-16a9-4bb0-8ceb-84e54687c17d","Type":"ContainerStarted","Data":"ca9be2a9afa22c51891b8f432a261bc88fcd358a6182ea18fbdc49de69999718"} Jan 31 08:42:36 crc kubenswrapper[4826]: I0131 08:42:36.754203 4826 generic.go:334] "Generic (PLEG): container finished" podID="60c9a0cf-16a9-4bb0-8ceb-84e54687c17d" containerID="ca9be2a9afa22c51891b8f432a261bc88fcd358a6182ea18fbdc49de69999718" exitCode=0 Jan 31 08:42:36 crc kubenswrapper[4826]: I0131 08:42:36.754297 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fmstg" event={"ID":"60c9a0cf-16a9-4bb0-8ceb-84e54687c17d","Type":"ContainerDied","Data":"ca9be2a9afa22c51891b8f432a261bc88fcd358a6182ea18fbdc49de69999718"} Jan 31 08:42:37 crc kubenswrapper[4826]: I0131 08:42:37.765857 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fmstg" event={"ID":"60c9a0cf-16a9-4bb0-8ceb-84e54687c17d","Type":"ContainerStarted","Data":"a4e6ad09cc79cbd8a2f6464dcbef65c2c33b70eb018cb02aa6167c5a0ea82867"} Jan 31 08:42:37 crc kubenswrapper[4826]: I0131 08:42:37.790163 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fmstg" podStartSLOduration=3.203416663 podStartE2EDuration="5.790144827s" podCreationTimestamp="2026-01-31 08:42:32 +0000 UTC" firstStartedPulling="2026-01-31 08:42:34.731650443 +0000 UTC m=+3986.585536802" lastFinishedPulling="2026-01-31 08:42:37.318378607 +0000 UTC m=+3989.172264966" observedRunningTime="2026-01-31 08:42:37.783456076 +0000 UTC m=+3989.637342445" watchObservedRunningTime="2026-01-31 08:42:37.790144827 +0000 UTC m=+3989.644031186" Jan 31 08:42:40 crc kubenswrapper[4826]: I0131 08:42:40.791275 4826 generic.go:334] "Generic (PLEG): container finished" podID="c92b3fc7-df90-4a08-bb1b-aebb65e316b8" containerID="32527760756f623f7ef36cfb7cc2f3ce36205ca7732b15409555f9c8ecdb8afc" exitCode=1 Jan 31 08:42:40 crc kubenswrapper[4826]: I0131 08:42:40.791397 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-test" 
event={"ID":"c92b3fc7-df90-4a08-bb1b-aebb65e316b8","Type":"ContainerDied","Data":"32527760756f623f7ef36cfb7cc2f3ce36205ca7732b15409555f9c8ecdb8afc"} Jan 31 08:42:42 crc kubenswrapper[4826]: I0131 08:42:42.202166 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:42:42 crc kubenswrapper[4826]: I0131 08:42:42.380723 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c92b3fc7-df90-4a08-bb1b-aebb65e316b8-ssh-key\") pod \"c92b3fc7-df90-4a08-bb1b-aebb65e316b8\" (UID: \"c92b3fc7-df90-4a08-bb1b-aebb65e316b8\") " Jan 31 08:42:42 crc kubenswrapper[4826]: I0131 08:42:42.380804 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/c92b3fc7-df90-4a08-bb1b-aebb65e316b8-test-operator-ephemeral-workdir\") pod \"c92b3fc7-df90-4a08-bb1b-aebb65e316b8\" (UID: \"c92b3fc7-df90-4a08-bb1b-aebb65e316b8\") " Jan 31 08:42:42 crc kubenswrapper[4826]: I0131 08:42:42.380847 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvtxf\" (UniqueName: \"kubernetes.io/projected/c92b3fc7-df90-4a08-bb1b-aebb65e316b8-kube-api-access-pvtxf\") pod \"c92b3fc7-df90-4a08-bb1b-aebb65e316b8\" (UID: \"c92b3fc7-df90-4a08-bb1b-aebb65e316b8\") " Jan 31 08:42:42 crc kubenswrapper[4826]: I0131 08:42:42.380895 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/c92b3fc7-df90-4a08-bb1b-aebb65e316b8-ca-certs\") pod \"c92b3fc7-df90-4a08-bb1b-aebb65e316b8\" (UID: \"c92b3fc7-df90-4a08-bb1b-aebb65e316b8\") " Jan 31 08:42:42 crc kubenswrapper[4826]: I0131 08:42:42.380943 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c92b3fc7-df90-4a08-bb1b-aebb65e316b8-openstack-config-secret\") pod \"c92b3fc7-df90-4a08-bb1b-aebb65e316b8\" (UID: \"c92b3fc7-df90-4a08-bb1b-aebb65e316b8\") " Jan 31 08:42:42 crc kubenswrapper[4826]: I0131 08:42:42.380988 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/c92b3fc7-df90-4a08-bb1b-aebb65e316b8-test-operator-ephemeral-temporary\") pod \"c92b3fc7-df90-4a08-bb1b-aebb65e316b8\" (UID: \"c92b3fc7-df90-4a08-bb1b-aebb65e316b8\") " Jan 31 08:42:42 crc kubenswrapper[4826]: I0131 08:42:42.381138 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c92b3fc7-df90-4a08-bb1b-aebb65e316b8-config-data\") pod \"c92b3fc7-df90-4a08-bb1b-aebb65e316b8\" (UID: \"c92b3fc7-df90-4a08-bb1b-aebb65e316b8\") " Jan 31 08:42:42 crc kubenswrapper[4826]: I0131 08:42:42.381170 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"c92b3fc7-df90-4a08-bb1b-aebb65e316b8\" (UID: \"c92b3fc7-df90-4a08-bb1b-aebb65e316b8\") " Jan 31 08:42:42 crc kubenswrapper[4826]: I0131 08:42:42.381232 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c92b3fc7-df90-4a08-bb1b-aebb65e316b8-openstack-config\") pod \"c92b3fc7-df90-4a08-bb1b-aebb65e316b8\" (UID: 
\"c92b3fc7-df90-4a08-bb1b-aebb65e316b8\") " Jan 31 08:42:42 crc kubenswrapper[4826]: I0131 08:42:42.381774 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c92b3fc7-df90-4a08-bb1b-aebb65e316b8-ceph\") pod \"c92b3fc7-df90-4a08-bb1b-aebb65e316b8\" (UID: \"c92b3fc7-df90-4a08-bb1b-aebb65e316b8\") " Jan 31 08:42:42 crc kubenswrapper[4826]: I0131 08:42:42.383185 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c92b3fc7-df90-4a08-bb1b-aebb65e316b8-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "c92b3fc7-df90-4a08-bb1b-aebb65e316b8" (UID: "c92b3fc7-df90-4a08-bb1b-aebb65e316b8"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:42:42 crc kubenswrapper[4826]: I0131 08:42:42.383589 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c92b3fc7-df90-4a08-bb1b-aebb65e316b8-config-data" (OuterVolumeSpecName: "config-data") pod "c92b3fc7-df90-4a08-bb1b-aebb65e316b8" (UID: "c92b3fc7-df90-4a08-bb1b-aebb65e316b8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 08:42:42 crc kubenswrapper[4826]: I0131 08:42:42.392879 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c92b3fc7-df90-4a08-bb1b-aebb65e316b8-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "c92b3fc7-df90-4a08-bb1b-aebb65e316b8" (UID: "c92b3fc7-df90-4a08-bb1b-aebb65e316b8"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:42:42 crc kubenswrapper[4826]: I0131 08:42:42.396155 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c92b3fc7-df90-4a08-bb1b-aebb65e316b8-ceph" (OuterVolumeSpecName: "ceph") pod "c92b3fc7-df90-4a08-bb1b-aebb65e316b8" (UID: "c92b3fc7-df90-4a08-bb1b-aebb65e316b8"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:42:42 crc kubenswrapper[4826]: I0131 08:42:42.397898 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c92b3fc7-df90-4a08-bb1b-aebb65e316b8-kube-api-access-pvtxf" (OuterVolumeSpecName: "kube-api-access-pvtxf") pod "c92b3fc7-df90-4a08-bb1b-aebb65e316b8" (UID: "c92b3fc7-df90-4a08-bb1b-aebb65e316b8"). InnerVolumeSpecName "kube-api-access-pvtxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:42:42 crc kubenswrapper[4826]: I0131 08:42:42.404997 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "test-operator-logs") pod "c92b3fc7-df90-4a08-bb1b-aebb65e316b8" (UID: "c92b3fc7-df90-4a08-bb1b-aebb65e316b8"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 08:42:42 crc kubenswrapper[4826]: I0131 08:42:42.413060 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c92b3fc7-df90-4a08-bb1b-aebb65e316b8-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c92b3fc7-df90-4a08-bb1b-aebb65e316b8" (UID: "c92b3fc7-df90-4a08-bb1b-aebb65e316b8"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:42:42 crc kubenswrapper[4826]: I0131 08:42:42.429323 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c92b3fc7-df90-4a08-bb1b-aebb65e316b8-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "c92b3fc7-df90-4a08-bb1b-aebb65e316b8" (UID: "c92b3fc7-df90-4a08-bb1b-aebb65e316b8"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:42:42 crc kubenswrapper[4826]: I0131 08:42:42.439158 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c92b3fc7-df90-4a08-bb1b-aebb65e316b8-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "c92b3fc7-df90-4a08-bb1b-aebb65e316b8" (UID: "c92b3fc7-df90-4a08-bb1b-aebb65e316b8"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:42:42 crc kubenswrapper[4826]: I0131 08:42:42.447665 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c92b3fc7-df90-4a08-bb1b-aebb65e316b8-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "c92b3fc7-df90-4a08-bb1b-aebb65e316b8" (UID: "c92b3fc7-df90-4a08-bb1b-aebb65e316b8"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 08:42:42 crc kubenswrapper[4826]: I0131 08:42:42.485601 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c92b3fc7-df90-4a08-bb1b-aebb65e316b8-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 08:42:42 crc kubenswrapper[4826]: I0131 08:42:42.485708 4826 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 31 08:42:42 crc kubenswrapper[4826]: I0131 08:42:42.485721 4826 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c92b3fc7-df90-4a08-bb1b-aebb65e316b8-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 31 08:42:42 crc kubenswrapper[4826]: I0131 08:42:42.485736 4826 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c92b3fc7-df90-4a08-bb1b-aebb65e316b8-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 08:42:42 crc kubenswrapper[4826]: I0131 08:42:42.485746 4826 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c92b3fc7-df90-4a08-bb1b-aebb65e316b8-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 31 08:42:42 crc kubenswrapper[4826]: I0131 08:42:42.485757 4826 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/c92b3fc7-df90-4a08-bb1b-aebb65e316b8-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 31 08:42:42 crc kubenswrapper[4826]: I0131 08:42:42.485769 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvtxf\" (UniqueName: \"kubernetes.io/projected/c92b3fc7-df90-4a08-bb1b-aebb65e316b8-kube-api-access-pvtxf\") on node \"crc\" DevicePath \"\"" Jan 31 08:42:42 crc kubenswrapper[4826]: I0131 08:42:42.485781 4826 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/c92b3fc7-df90-4a08-bb1b-aebb65e316b8-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 31 08:42:42 crc kubenswrapper[4826]: I0131 
08:42:42.485792 4826 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c92b3fc7-df90-4a08-bb1b-aebb65e316b8-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 31 08:42:42 crc kubenswrapper[4826]: I0131 08:42:42.485803 4826 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/c92b3fc7-df90-4a08-bb1b-aebb65e316b8-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 31 08:42:42 crc kubenswrapper[4826]: I0131 08:42:42.505152 4826 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 31 08:42:42 crc kubenswrapper[4826]: I0131 08:42:42.586965 4826 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 31 08:42:42 crc kubenswrapper[4826]: I0131 08:42:42.831874 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-test" event={"ID":"c92b3fc7-df90-4a08-bb1b-aebb65e316b8","Type":"ContainerDied","Data":"3cc48f734cdf1177a9f249ac43ca4fa5977d9597d5a105f1ee2b412158e2c7a7"} Jan 31 08:42:42 crc kubenswrapper[4826]: I0131 08:42:42.832281 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cc48f734cdf1177a9f249ac43ca4fa5977d9597d5a105f1ee2b412158e2c7a7" Jan 31 08:42:42 crc kubenswrapper[4826]: I0131 08:42:42.832140 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-test" Jan 31 08:42:43 crc kubenswrapper[4826]: I0131 08:42:43.162879 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fmstg" Jan 31 08:42:43 crc kubenswrapper[4826]: I0131 08:42:43.162929 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fmstg" Jan 31 08:42:43 crc kubenswrapper[4826]: I0131 08:42:43.208111 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fmstg" Jan 31 08:42:43 crc kubenswrapper[4826]: I0131 08:42:43.888088 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fmstg" Jan 31 08:42:43 crc kubenswrapper[4826]: I0131 08:42:43.934760 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fmstg"] Jan 31 08:42:44 crc kubenswrapper[4826]: I0131 08:42:44.810082 4826 scope.go:117] "RemoveContainer" containerID="e50c5a21de69947e5e4158ecd9eb1c18ce3f33a5d5d37aba1dbb26c3efbfdb2e" Jan 31 08:42:44 crc kubenswrapper[4826]: E0131 08:42:44.810685 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:42:45 crc kubenswrapper[4826]: I0131 08:42:45.879550 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fmstg" 
podUID="60c9a0cf-16a9-4bb0-8ceb-84e54687c17d" containerName="registry-server" containerID="cri-o://a4e6ad09cc79cbd8a2f6464dcbef65c2c33b70eb018cb02aa6167c5a0ea82867" gracePeriod=2 Jan 31 08:42:46 crc kubenswrapper[4826]: I0131 08:42:46.365931 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fmstg" Jan 31 08:42:46 crc kubenswrapper[4826]: I0131 08:42:46.566189 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60c9a0cf-16a9-4bb0-8ceb-84e54687c17d-utilities\") pod \"60c9a0cf-16a9-4bb0-8ceb-84e54687c17d\" (UID: \"60c9a0cf-16a9-4bb0-8ceb-84e54687c17d\") " Jan 31 08:42:46 crc kubenswrapper[4826]: I0131 08:42:46.566423 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60c9a0cf-16a9-4bb0-8ceb-84e54687c17d-catalog-content\") pod \"60c9a0cf-16a9-4bb0-8ceb-84e54687c17d\" (UID: \"60c9a0cf-16a9-4bb0-8ceb-84e54687c17d\") " Jan 31 08:42:46 crc kubenswrapper[4826]: I0131 08:42:46.566711 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlz96\" (UniqueName: \"kubernetes.io/projected/60c9a0cf-16a9-4bb0-8ceb-84e54687c17d-kube-api-access-vlz96\") pod \"60c9a0cf-16a9-4bb0-8ceb-84e54687c17d\" (UID: \"60c9a0cf-16a9-4bb0-8ceb-84e54687c17d\") " Jan 31 08:42:46 crc kubenswrapper[4826]: I0131 08:42:46.567924 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60c9a0cf-16a9-4bb0-8ceb-84e54687c17d-utilities" (OuterVolumeSpecName: "utilities") pod "60c9a0cf-16a9-4bb0-8ceb-84e54687c17d" (UID: "60c9a0cf-16a9-4bb0-8ceb-84e54687c17d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:42:46 crc kubenswrapper[4826]: I0131 08:42:46.583217 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60c9a0cf-16a9-4bb0-8ceb-84e54687c17d-kube-api-access-vlz96" (OuterVolumeSpecName: "kube-api-access-vlz96") pod "60c9a0cf-16a9-4bb0-8ceb-84e54687c17d" (UID: "60c9a0cf-16a9-4bb0-8ceb-84e54687c17d"). InnerVolumeSpecName "kube-api-access-vlz96". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:42:46 crc kubenswrapper[4826]: I0131 08:42:46.627535 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60c9a0cf-16a9-4bb0-8ceb-84e54687c17d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "60c9a0cf-16a9-4bb0-8ceb-84e54687c17d" (UID: "60c9a0cf-16a9-4bb0-8ceb-84e54687c17d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:42:46 crc kubenswrapper[4826]: I0131 08:42:46.670422 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60c9a0cf-16a9-4bb0-8ceb-84e54687c17d-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 08:42:46 crc kubenswrapper[4826]: I0131 08:42:46.670495 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60c9a0cf-16a9-4bb0-8ceb-84e54687c17d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 08:42:46 crc kubenswrapper[4826]: I0131 08:42:46.670514 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlz96\" (UniqueName: \"kubernetes.io/projected/60c9a0cf-16a9-4bb0-8ceb-84e54687c17d-kube-api-access-vlz96\") on node \"crc\" DevicePath \"\"" Jan 31 08:42:46 crc kubenswrapper[4826]: I0131 08:42:46.891750 4826 generic.go:334] "Generic (PLEG): container finished" podID="60c9a0cf-16a9-4bb0-8ceb-84e54687c17d" containerID="a4e6ad09cc79cbd8a2f6464dcbef65c2c33b70eb018cb02aa6167c5a0ea82867" exitCode=0 Jan 31 08:42:46 crc kubenswrapper[4826]: I0131 08:42:46.891827 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fmstg" event={"ID":"60c9a0cf-16a9-4bb0-8ceb-84e54687c17d","Type":"ContainerDied","Data":"a4e6ad09cc79cbd8a2f6464dcbef65c2c33b70eb018cb02aa6167c5a0ea82867"} Jan 31 08:42:46 crc kubenswrapper[4826]: I0131 08:42:46.891842 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fmstg" Jan 31 08:42:46 crc kubenswrapper[4826]: I0131 08:42:46.891934 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fmstg" event={"ID":"60c9a0cf-16a9-4bb0-8ceb-84e54687c17d","Type":"ContainerDied","Data":"e5b8586ad0aff35933c68a2000bb61c8c75f475d917b000d7d196dcaa4a3c65a"} Jan 31 08:42:46 crc kubenswrapper[4826]: I0131 08:42:46.892011 4826 scope.go:117] "RemoveContainer" containerID="a4e6ad09cc79cbd8a2f6464dcbef65c2c33b70eb018cb02aa6167c5a0ea82867" Jan 31 08:42:46 crc kubenswrapper[4826]: I0131 08:42:46.922700 4826 scope.go:117] "RemoveContainer" containerID="ca9be2a9afa22c51891b8f432a261bc88fcd358a6182ea18fbdc49de69999718" Jan 31 08:42:46 crc kubenswrapper[4826]: I0131 08:42:46.928123 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fmstg"] Jan 31 08:42:46 crc kubenswrapper[4826]: I0131 08:42:46.939183 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fmstg"] Jan 31 08:42:46 crc kubenswrapper[4826]: I0131 08:42:46.947616 4826 scope.go:117] "RemoveContainer" containerID="e067dedbb8dde2653be20b6692eff2a6d30a53e51d477df288593b754d72ab68" Jan 31 08:42:46 crc kubenswrapper[4826]: I0131 08:42:46.994639 4826 scope.go:117] "RemoveContainer" containerID="a4e6ad09cc79cbd8a2f6464dcbef65c2c33b70eb018cb02aa6167c5a0ea82867" Jan 31 08:42:46 crc kubenswrapper[4826]: E0131 08:42:46.995313 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4e6ad09cc79cbd8a2f6464dcbef65c2c33b70eb018cb02aa6167c5a0ea82867\": container with ID starting with a4e6ad09cc79cbd8a2f6464dcbef65c2c33b70eb018cb02aa6167c5a0ea82867 not found: ID does not exist" containerID="a4e6ad09cc79cbd8a2f6464dcbef65c2c33b70eb018cb02aa6167c5a0ea82867" Jan 31 08:42:46 crc kubenswrapper[4826]: I0131 08:42:46.995357 
4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4e6ad09cc79cbd8a2f6464dcbef65c2c33b70eb018cb02aa6167c5a0ea82867"} err="failed to get container status \"a4e6ad09cc79cbd8a2f6464dcbef65c2c33b70eb018cb02aa6167c5a0ea82867\": rpc error: code = NotFound desc = could not find container \"a4e6ad09cc79cbd8a2f6464dcbef65c2c33b70eb018cb02aa6167c5a0ea82867\": container with ID starting with a4e6ad09cc79cbd8a2f6464dcbef65c2c33b70eb018cb02aa6167c5a0ea82867 not found: ID does not exist" Jan 31 08:42:46 crc kubenswrapper[4826]: I0131 08:42:46.995385 4826 scope.go:117] "RemoveContainer" containerID="ca9be2a9afa22c51891b8f432a261bc88fcd358a6182ea18fbdc49de69999718" Jan 31 08:42:46 crc kubenswrapper[4826]: E0131 08:42:46.995863 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca9be2a9afa22c51891b8f432a261bc88fcd358a6182ea18fbdc49de69999718\": container with ID starting with ca9be2a9afa22c51891b8f432a261bc88fcd358a6182ea18fbdc49de69999718 not found: ID does not exist" containerID="ca9be2a9afa22c51891b8f432a261bc88fcd358a6182ea18fbdc49de69999718" Jan 31 08:42:46 crc kubenswrapper[4826]: I0131 08:42:46.995895 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca9be2a9afa22c51891b8f432a261bc88fcd358a6182ea18fbdc49de69999718"} err="failed to get container status \"ca9be2a9afa22c51891b8f432a261bc88fcd358a6182ea18fbdc49de69999718\": rpc error: code = NotFound desc = could not find container \"ca9be2a9afa22c51891b8f432a261bc88fcd358a6182ea18fbdc49de69999718\": container with ID starting with ca9be2a9afa22c51891b8f432a261bc88fcd358a6182ea18fbdc49de69999718 not found: ID does not exist" Jan 31 08:42:46 crc kubenswrapper[4826]: I0131 08:42:46.995914 4826 scope.go:117] "RemoveContainer" containerID="e067dedbb8dde2653be20b6692eff2a6d30a53e51d477df288593b754d72ab68" Jan 31 08:42:46 crc kubenswrapper[4826]: E0131 08:42:46.996443 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e067dedbb8dde2653be20b6692eff2a6d30a53e51d477df288593b754d72ab68\": container with ID starting with e067dedbb8dde2653be20b6692eff2a6d30a53e51d477df288593b754d72ab68 not found: ID does not exist" containerID="e067dedbb8dde2653be20b6692eff2a6d30a53e51d477df288593b754d72ab68" Jan 31 08:42:46 crc kubenswrapper[4826]: I0131 08:42:46.996486 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e067dedbb8dde2653be20b6692eff2a6d30a53e51d477df288593b754d72ab68"} err="failed to get container status \"e067dedbb8dde2653be20b6692eff2a6d30a53e51d477df288593b754d72ab68\": rpc error: code = NotFound desc = could not find container \"e067dedbb8dde2653be20b6692eff2a6d30a53e51d477df288593b754d72ab68\": container with ID starting with e067dedbb8dde2653be20b6692eff2a6d30a53e51d477df288593b754d72ab68 not found: ID does not exist" Jan 31 08:42:48 crc kubenswrapper[4826]: I0131 08:42:48.822055 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60c9a0cf-16a9-4bb0-8ceb-84e54687c17d" path="/var/lib/kubelet/pods/60c9a0cf-16a9-4bb0-8ceb-84e54687c17d/volumes" Jan 31 08:42:50 crc kubenswrapper[4826]: I0131 08:42:50.853259 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 31 08:42:50 crc kubenswrapper[4826]: E0131 08:42:50.854129 4826 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="c92b3fc7-df90-4a08-bb1b-aebb65e316b8" containerName="tempest-tests-tempest-tests-runner" Jan 31 08:42:50 crc kubenswrapper[4826]: I0131 08:42:50.854146 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="c92b3fc7-df90-4a08-bb1b-aebb65e316b8" containerName="tempest-tests-tempest-tests-runner" Jan 31 08:42:50 crc kubenswrapper[4826]: E0131 08:42:50.854172 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60c9a0cf-16a9-4bb0-8ceb-84e54687c17d" containerName="extract-utilities" Jan 31 08:42:50 crc kubenswrapper[4826]: I0131 08:42:50.854179 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="60c9a0cf-16a9-4bb0-8ceb-84e54687c17d" containerName="extract-utilities" Jan 31 08:42:50 crc kubenswrapper[4826]: E0131 08:42:50.854193 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60c9a0cf-16a9-4bb0-8ceb-84e54687c17d" containerName="registry-server" Jan 31 08:42:50 crc kubenswrapper[4826]: I0131 08:42:50.854201 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="60c9a0cf-16a9-4bb0-8ceb-84e54687c17d" containerName="registry-server" Jan 31 08:42:50 crc kubenswrapper[4826]: E0131 08:42:50.854230 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60c9a0cf-16a9-4bb0-8ceb-84e54687c17d" containerName="extract-content" Jan 31 08:42:50 crc kubenswrapper[4826]: I0131 08:42:50.854237 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="60c9a0cf-16a9-4bb0-8ceb-84e54687c17d" containerName="extract-content" Jan 31 08:42:50 crc kubenswrapper[4826]: I0131 08:42:50.854471 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="60c9a0cf-16a9-4bb0-8ceb-84e54687c17d" containerName="registry-server" Jan 31 08:42:50 crc kubenswrapper[4826]: I0131 08:42:50.854505 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="c92b3fc7-df90-4a08-bb1b-aebb65e316b8" containerName="tempest-tests-tempest-tests-runner" Jan 31 08:42:50 crc kubenswrapper[4826]: I0131 08:42:50.856945 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 08:42:50 crc kubenswrapper[4826]: I0131 08:42:50.859549 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-v4wfk" Jan 31 08:42:50 crc kubenswrapper[4826]: I0131 08:42:50.863229 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 31 08:42:51 crc kubenswrapper[4826]: I0131 08:42:51.054678 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"824c9507-e210-4a05-aa63-c07a42d71d3b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 08:42:51 crc kubenswrapper[4826]: I0131 08:42:51.054790 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9tl8\" (UniqueName: \"kubernetes.io/projected/824c9507-e210-4a05-aa63-c07a42d71d3b-kube-api-access-q9tl8\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"824c9507-e210-4a05-aa63-c07a42d71d3b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 08:42:51 crc kubenswrapper[4826]: I0131 08:42:51.157297 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"824c9507-e210-4a05-aa63-c07a42d71d3b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 08:42:51 crc kubenswrapper[4826]: I0131 08:42:51.157415 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9tl8\" (UniqueName: \"kubernetes.io/projected/824c9507-e210-4a05-aa63-c07a42d71d3b-kube-api-access-q9tl8\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"824c9507-e210-4a05-aa63-c07a42d71d3b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 08:42:51 crc kubenswrapper[4826]: I0131 08:42:51.158281 4826 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"824c9507-e210-4a05-aa63-c07a42d71d3b\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 08:42:51 crc kubenswrapper[4826]: I0131 08:42:51.176109 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9tl8\" (UniqueName: \"kubernetes.io/projected/824c9507-e210-4a05-aa63-c07a42d71d3b-kube-api-access-q9tl8\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"824c9507-e210-4a05-aa63-c07a42d71d3b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 08:42:51 crc kubenswrapper[4826]: I0131 08:42:51.183545 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"824c9507-e210-4a05-aa63-c07a42d71d3b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 08:42:51 crc 
kubenswrapper[4826]: I0131 08:42:51.485300 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 08:42:51 crc kubenswrapper[4826]: I0131 08:42:51.782932 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 31 08:42:51 crc kubenswrapper[4826]: I0131 08:42:51.944411 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"824c9507-e210-4a05-aa63-c07a42d71d3b","Type":"ContainerStarted","Data":"540c7370a012e1df07eb25b0acf08c0751fc7f8875dcae26c99b3374796aa8ef"} Jan 31 08:42:52 crc kubenswrapper[4826]: I0131 08:42:52.955276 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"824c9507-e210-4a05-aa63-c07a42d71d3b","Type":"ContainerStarted","Data":"53c13e2bb10ad2e840697bc355abfef6a0769d2c4964e2c030a1043bcc86f1eb"} Jan 31 08:42:52 crc kubenswrapper[4826]: I0131 08:42:52.971389 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.125780902 podStartE2EDuration="2.971367479s" podCreationTimestamp="2026-01-31 08:42:50 +0000 UTC" firstStartedPulling="2026-01-31 08:42:51.785917303 +0000 UTC m=+4003.639803662" lastFinishedPulling="2026-01-31 08:42:52.63150388 +0000 UTC m=+4004.485390239" observedRunningTime="2026-01-31 08:42:52.96927896 +0000 UTC m=+4004.823165319" watchObservedRunningTime="2026-01-31 08:42:52.971367479 +0000 UTC m=+4004.825253838" Jan 31 08:42:55 crc kubenswrapper[4826]: I0131 08:42:55.809169 4826 scope.go:117] "RemoveContainer" containerID="e50c5a21de69947e5e4158ecd9eb1c18ce3f33a5d5d37aba1dbb26c3efbfdb2e" Jan 31 08:42:55 crc kubenswrapper[4826]: E0131 08:42:55.810476 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:43:08 crc kubenswrapper[4826]: I0131 08:43:08.970844 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tobiko-tests-tobiko-s00-podified-functional"] Jan 31 08:43:08 crc kubenswrapper[4826]: I0131 08:43:08.973467 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:43:08 crc kubenswrapper[4826]: I0131 08:43:08.976648 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"tobiko-secret" Jan 31 08:43:08 crc kubenswrapper[4826]: I0131 08:43:08.976913 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tobiko-tests-tobikotobiko-private-key" Jan 31 08:43:08 crc kubenswrapper[4826]: I0131 08:43:08.977060 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tobiko-tests-tobikotobiko-public-key" Jan 31 08:43:08 crc kubenswrapper[4826]: I0131 08:43:08.977162 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"test-operator-clouds-config" Jan 31 08:43:08 crc kubenswrapper[4826]: I0131 08:43:08.987038 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tobiko-tests-tobikotobiko-config" Jan 31 08:43:08 crc kubenswrapper[4826]: I0131 08:43:08.993367 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tobiko-tests-tobiko-s00-podified-functional"] Jan 31 08:43:09 crc kubenswrapper[4826]: I0131 08:43:09.065888 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/30581517-aaf5-4565-a879-47605e56918c-openstack-config-secret\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"30581517-aaf5-4565-a879-47605e56918c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:43:09 crc kubenswrapper[4826]: I0131 08:43:09.066237 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/30581517-aaf5-4565-a879-47605e56918c-tobiko-config\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"30581517-aaf5-4565-a879-47605e56918c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:43:09 crc kubenswrapper[4826]: I0131 08:43:09.066318 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/30581517-aaf5-4565-a879-47605e56918c-test-operator-clouds-config\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"30581517-aaf5-4565-a879-47605e56918c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:43:09 crc kubenswrapper[4826]: I0131 08:43:09.168882 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/30581517-aaf5-4565-a879-47605e56918c-test-operator-clouds-config\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"30581517-aaf5-4565-a879-47605e56918c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:43:09 crc kubenswrapper[4826]: I0131 08:43:09.169075 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/30581517-aaf5-4565-a879-47605e56918c-kubeconfig\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"30581517-aaf5-4565-a879-47605e56918c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:43:09 crc kubenswrapper[4826]: I0131 08:43:09.169147 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" 
(UniqueName: \"kubernetes.io/secret/30581517-aaf5-4565-a879-47605e56918c-openstack-config-secret\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"30581517-aaf5-4565-a879-47605e56918c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:43:09 crc kubenswrapper[4826]: I0131 08:43:09.169291 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/30581517-aaf5-4565-a879-47605e56918c-tobiko-private-key\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"30581517-aaf5-4565-a879-47605e56918c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:43:09 crc kubenswrapper[4826]: I0131 08:43:09.169385 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/30581517-aaf5-4565-a879-47605e56918c-test-operator-ephemeral-workdir\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"30581517-aaf5-4565-a879-47605e56918c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:43:09 crc kubenswrapper[4826]: I0131 08:43:09.169433 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw6xx\" (UniqueName: \"kubernetes.io/projected/30581517-aaf5-4565-a879-47605e56918c-kube-api-access-fw6xx\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"30581517-aaf5-4565-a879-47605e56918c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:43:09 crc kubenswrapper[4826]: I0131 08:43:09.169538 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"30581517-aaf5-4565-a879-47605e56918c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:43:09 crc kubenswrapper[4826]: I0131 08:43:09.169610 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/30581517-aaf5-4565-a879-47605e56918c-tobiko-public-key\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"30581517-aaf5-4565-a879-47605e56918c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:43:09 crc kubenswrapper[4826]: I0131 08:43:09.169668 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/30581517-aaf5-4565-a879-47605e56918c-test-operator-ephemeral-temporary\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"30581517-aaf5-4565-a879-47605e56918c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:43:09 crc kubenswrapper[4826]: I0131 08:43:09.170321 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/30581517-aaf5-4565-a879-47605e56918c-ca-certs\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"30581517-aaf5-4565-a879-47605e56918c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:43:09 crc kubenswrapper[4826]: I0131 08:43:09.170361 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/30581517-aaf5-4565-a879-47605e56918c-test-operator-clouds-config\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"30581517-aaf5-4565-a879-47605e56918c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:43:09 crc kubenswrapper[4826]: I0131 08:43:09.170405 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/30581517-aaf5-4565-a879-47605e56918c-tobiko-config\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"30581517-aaf5-4565-a879-47605e56918c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:43:09 crc kubenswrapper[4826]: I0131 08:43:09.170429 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/30581517-aaf5-4565-a879-47605e56918c-ceph\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"30581517-aaf5-4565-a879-47605e56918c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:43:09 crc kubenswrapper[4826]: I0131 08:43:09.170995 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/30581517-aaf5-4565-a879-47605e56918c-tobiko-config\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"30581517-aaf5-4565-a879-47605e56918c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:43:09 crc kubenswrapper[4826]: I0131 08:43:09.191052 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/30581517-aaf5-4565-a879-47605e56918c-openstack-config-secret\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"30581517-aaf5-4565-a879-47605e56918c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:43:09 crc kubenswrapper[4826]: I0131 08:43:09.271902 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/30581517-aaf5-4565-a879-47605e56918c-tobiko-private-key\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"30581517-aaf5-4565-a879-47605e56918c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:43:09 crc kubenswrapper[4826]: I0131 08:43:09.272188 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/30581517-aaf5-4565-a879-47605e56918c-test-operator-ephemeral-workdir\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"30581517-aaf5-4565-a879-47605e56918c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:43:09 crc kubenswrapper[4826]: I0131 08:43:09.272224 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw6xx\" (UniqueName: \"kubernetes.io/projected/30581517-aaf5-4565-a879-47605e56918c-kube-api-access-fw6xx\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"30581517-aaf5-4565-a879-47605e56918c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:43:09 crc kubenswrapper[4826]: I0131 08:43:09.272262 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod 
\"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"30581517-aaf5-4565-a879-47605e56918c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:43:09 crc kubenswrapper[4826]: I0131 08:43:09.272300 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/30581517-aaf5-4565-a879-47605e56918c-tobiko-public-key\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"30581517-aaf5-4565-a879-47605e56918c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:43:09 crc kubenswrapper[4826]: I0131 08:43:09.272333 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/30581517-aaf5-4565-a879-47605e56918c-test-operator-ephemeral-temporary\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"30581517-aaf5-4565-a879-47605e56918c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:43:09 crc kubenswrapper[4826]: I0131 08:43:09.272362 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/30581517-aaf5-4565-a879-47605e56918c-ca-certs\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"30581517-aaf5-4565-a879-47605e56918c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:43:09 crc kubenswrapper[4826]: I0131 08:43:09.272396 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/30581517-aaf5-4565-a879-47605e56918c-ceph\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"30581517-aaf5-4565-a879-47605e56918c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:43:09 crc kubenswrapper[4826]: I0131 08:43:09.272473 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/30581517-aaf5-4565-a879-47605e56918c-kubeconfig\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"30581517-aaf5-4565-a879-47605e56918c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:43:09 crc kubenswrapper[4826]: I0131 08:43:09.272904 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/30581517-aaf5-4565-a879-47605e56918c-test-operator-ephemeral-workdir\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"30581517-aaf5-4565-a879-47605e56918c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:43:09 crc kubenswrapper[4826]: I0131 08:43:09.272995 4826 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"30581517-aaf5-4565-a879-47605e56918c\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:43:09 crc kubenswrapper[4826]: I0131 08:43:09.273574 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/30581517-aaf5-4565-a879-47605e56918c-tobiko-public-key\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"30581517-aaf5-4565-a879-47605e56918c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 
08:43:09 crc kubenswrapper[4826]: I0131 08:43:09.273792 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/30581517-aaf5-4565-a879-47605e56918c-tobiko-private-key\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"30581517-aaf5-4565-a879-47605e56918c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:43:09 crc kubenswrapper[4826]: I0131 08:43:09.275896 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/30581517-aaf5-4565-a879-47605e56918c-test-operator-ephemeral-temporary\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"30581517-aaf5-4565-a879-47605e56918c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:43:09 crc kubenswrapper[4826]: I0131 08:43:09.278041 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/30581517-aaf5-4565-a879-47605e56918c-kubeconfig\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"30581517-aaf5-4565-a879-47605e56918c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:43:09 crc kubenswrapper[4826]: I0131 08:43:09.278167 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/30581517-aaf5-4565-a879-47605e56918c-ca-certs\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"30581517-aaf5-4565-a879-47605e56918c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:43:09 crc kubenswrapper[4826]: I0131 08:43:09.281902 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/30581517-aaf5-4565-a879-47605e56918c-ceph\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"30581517-aaf5-4565-a879-47605e56918c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:43:09 crc kubenswrapper[4826]: I0131 08:43:09.305076 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw6xx\" (UniqueName: \"kubernetes.io/projected/30581517-aaf5-4565-a879-47605e56918c-kube-api-access-fw6xx\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"30581517-aaf5-4565-a879-47605e56918c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:43:09 crc kubenswrapper[4826]: I0131 08:43:09.308928 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"30581517-aaf5-4565-a879-47605e56918c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:43:09 crc kubenswrapper[4826]: I0131 08:43:09.608273 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:43:09 crc kubenswrapper[4826]: I0131 08:43:09.809247 4826 scope.go:117] "RemoveContainer" containerID="e50c5a21de69947e5e4158ecd9eb1c18ce3f33a5d5d37aba1dbb26c3efbfdb2e" Jan 31 08:43:10 crc kubenswrapper[4826]: I0131 08:43:10.124102 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" event={"ID":"ed10f53b-565a-4d14-a1d8-feabc15f08ea","Type":"ContainerStarted","Data":"96d06b90822cc52e7495a180d9316b34bd33c2965646848de03879b26d4192b0"} Jan 31 08:43:10 crc kubenswrapper[4826]: I0131 08:43:10.185540 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tobiko-tests-tobiko-s00-podified-functional"] Jan 31 08:43:10 crc kubenswrapper[4826]: W0131 08:43:10.198462 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30581517_aaf5_4565_a879_47605e56918c.slice/crio-fc8d346dcc955129cc20e862f0a4b15a6434543401a007d5779153aa9293a3fb WatchSource:0}: Error finding container fc8d346dcc955129cc20e862f0a4b15a6434543401a007d5779153aa9293a3fb: Status 404 returned error can't find the container with id fc8d346dcc955129cc20e862f0a4b15a6434543401a007d5779153aa9293a3fb Jan 31 08:43:11 crc kubenswrapper[4826]: I0131 08:43:11.153927 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" event={"ID":"30581517-aaf5-4565-a879-47605e56918c","Type":"ContainerStarted","Data":"fc8d346dcc955129cc20e862f0a4b15a6434543401a007d5779153aa9293a3fb"} Jan 31 08:43:26 crc kubenswrapper[4826]: I0131 08:43:26.321404 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" event={"ID":"30581517-aaf5-4565-a879-47605e56918c","Type":"ContainerStarted","Data":"64dca17ae06604d357edc94e3db17ec8fb5aedd8b64986148ccc1c82ec30e24d"} Jan 31 08:43:26 crc kubenswrapper[4826]: I0131 08:43:26.344800 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" podStartSLOduration=4.018814004 podStartE2EDuration="19.344778023s" podCreationTimestamp="2026-01-31 08:43:07 +0000 UTC" firstStartedPulling="2026-01-31 08:43:10.201219065 +0000 UTC m=+4022.055105424" lastFinishedPulling="2026-01-31 08:43:25.527183054 +0000 UTC m=+4037.381069443" observedRunningTime="2026-01-31 08:43:26.340939963 +0000 UTC m=+4038.194826322" watchObservedRunningTime="2026-01-31 08:43:26.344778023 +0000 UTC m=+4038.198664382" Jan 31 08:44:38 crc kubenswrapper[4826]: I0131 08:44:38.004388 4826 generic.go:334] "Generic (PLEG): container finished" podID="30581517-aaf5-4565-a879-47605e56918c" containerID="64dca17ae06604d357edc94e3db17ec8fb5aedd8b64986148ccc1c82ec30e24d" exitCode=0 Jan 31 08:44:38 crc kubenswrapper[4826]: I0131 08:44:38.004535 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" event={"ID":"30581517-aaf5-4565-a879-47605e56918c","Type":"ContainerDied","Data":"64dca17ae06604d357edc94e3db17ec8fb5aedd8b64986148ccc1c82ec30e24d"} Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.430786 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.506535 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tobiko-tests-tobiko-s01-sanity"] Jan 31 08:44:39 crc kubenswrapper[4826]: E0131 08:44:39.507945 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30581517-aaf5-4565-a879-47605e56918c" containerName="tobiko-tests-tobiko" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.507998 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="30581517-aaf5-4565-a879-47605e56918c" containerName="tobiko-tests-tobiko" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.508829 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="30581517-aaf5-4565-a879-47605e56918c" containerName="tobiko-tests-tobiko" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.516383 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.524747 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tobiko-tests-tobiko-s01-sanity"] Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.629403 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/30581517-aaf5-4565-a879-47605e56918c-ca-certs\") pod \"30581517-aaf5-4565-a879-47605e56918c\" (UID: \"30581517-aaf5-4565-a879-47605e56918c\") " Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.629817 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/30581517-aaf5-4565-a879-47605e56918c-openstack-config-secret\") pod \"30581517-aaf5-4565-a879-47605e56918c\" (UID: \"30581517-aaf5-4565-a879-47605e56918c\") " Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.629842 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/30581517-aaf5-4565-a879-47605e56918c-test-operator-clouds-config\") pod \"30581517-aaf5-4565-a879-47605e56918c\" (UID: \"30581517-aaf5-4565-a879-47605e56918c\") " Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.629919 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/30581517-aaf5-4565-a879-47605e56918c-kubeconfig\") pod \"30581517-aaf5-4565-a879-47605e56918c\" (UID: \"30581517-aaf5-4565-a879-47605e56918c\") " Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.630001 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"30581517-aaf5-4565-a879-47605e56918c\" (UID: \"30581517-aaf5-4565-a879-47605e56918c\") " Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.630019 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/30581517-aaf5-4565-a879-47605e56918c-tobiko-private-key\") pod \"30581517-aaf5-4565-a879-47605e56918c\" (UID: \"30581517-aaf5-4565-a879-47605e56918c\") " Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.630085 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/30581517-aaf5-4565-a879-47605e56918c-test-operator-ephemeral-workdir\") pod \"30581517-aaf5-4565-a879-47605e56918c\" (UID: \"30581517-aaf5-4565-a879-47605e56918c\") " Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.630129 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fw6xx\" (UniqueName: \"kubernetes.io/projected/30581517-aaf5-4565-a879-47605e56918c-kube-api-access-fw6xx\") pod \"30581517-aaf5-4565-a879-47605e56918c\" (UID: \"30581517-aaf5-4565-a879-47605e56918c\") " Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.630147 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/30581517-aaf5-4565-a879-47605e56918c-test-operator-ephemeral-temporary\") pod \"30581517-aaf5-4565-a879-47605e56918c\" (UID: \"30581517-aaf5-4565-a879-47605e56918c\") " Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.630176 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/30581517-aaf5-4565-a879-47605e56918c-ceph\") pod \"30581517-aaf5-4565-a879-47605e56918c\" (UID: \"30581517-aaf5-4565-a879-47605e56918c\") " Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.630219 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/30581517-aaf5-4565-a879-47605e56918c-tobiko-public-key\") pod \"30581517-aaf5-4565-a879-47605e56918c\" (UID: \"30581517-aaf5-4565-a879-47605e56918c\") " Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.630234 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/30581517-aaf5-4565-a879-47605e56918c-tobiko-config\") pod \"30581517-aaf5-4565-a879-47605e56918c\" (UID: \"30581517-aaf5-4565-a879-47605e56918c\") " Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.630464 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/c571d756-ee78-4c86-9c51-2b5565fcc40e-test-operator-ephemeral-temporary\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"c571d756-ee78-4c86-9c51-2b5565fcc40e\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.630530 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/c571d756-ee78-4c86-9c51-2b5565fcc40e-ca-certs\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"c571d756-ee78-4c86-9c51-2b5565fcc40e\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.630563 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndjhg\" (UniqueName: \"kubernetes.io/projected/c571d756-ee78-4c86-9c51-2b5565fcc40e-kube-api-access-ndjhg\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"c571d756-ee78-4c86-9c51-2b5565fcc40e\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.630660 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/c571d756-ee78-4c86-9c51-2b5565fcc40e-openstack-config-secret\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"c571d756-ee78-4c86-9c51-2b5565fcc40e\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.630687 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/c571d756-ee78-4c86-9c51-2b5565fcc40e-test-operator-ephemeral-workdir\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"c571d756-ee78-4c86-9c51-2b5565fcc40e\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.630724 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/c571d756-ee78-4c86-9c51-2b5565fcc40e-tobiko-public-key\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"c571d756-ee78-4c86-9c51-2b5565fcc40e\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.630750 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c571d756-ee78-4c86-9c51-2b5565fcc40e-ceph\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"c571d756-ee78-4c86-9c51-2b5565fcc40e\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.630777 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/c571d756-ee78-4c86-9c51-2b5565fcc40e-tobiko-config\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"c571d756-ee78-4c86-9c51-2b5565fcc40e\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.630796 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/c571d756-ee78-4c86-9c51-2b5565fcc40e-tobiko-private-key\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"c571d756-ee78-4c86-9c51-2b5565fcc40e\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.630844 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/c571d756-ee78-4c86-9c51-2b5565fcc40e-kubeconfig\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"c571d756-ee78-4c86-9c51-2b5565fcc40e\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.630874 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/c571d756-ee78-4c86-9c51-2b5565fcc40e-test-operator-clouds-config\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"c571d756-ee78-4c86-9c51-2b5565fcc40e\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.632900 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30581517-aaf5-4565-a879-47605e56918c-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "30581517-aaf5-4565-a879-47605e56918c" (UID: "30581517-aaf5-4565-a879-47605e56918c"). 
InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.637487 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "test-operator-logs") pod "30581517-aaf5-4565-a879-47605e56918c" (UID: "30581517-aaf5-4565-a879-47605e56918c"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.638134 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30581517-aaf5-4565-a879-47605e56918c-ceph" (OuterVolumeSpecName: "ceph") pod "30581517-aaf5-4565-a879-47605e56918c" (UID: "30581517-aaf5-4565-a879-47605e56918c"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.643253 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30581517-aaf5-4565-a879-47605e56918c-kube-api-access-fw6xx" (OuterVolumeSpecName: "kube-api-access-fw6xx") pod "30581517-aaf5-4565-a879-47605e56918c" (UID: "30581517-aaf5-4565-a879-47605e56918c"). InnerVolumeSpecName "kube-api-access-fw6xx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.661499 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30581517-aaf5-4565-a879-47605e56918c-kubeconfig" (OuterVolumeSpecName: "kubeconfig") pod "30581517-aaf5-4565-a879-47605e56918c" (UID: "30581517-aaf5-4565-a879-47605e56918c"). InnerVolumeSpecName "kubeconfig". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.661571 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30581517-aaf5-4565-a879-47605e56918c-tobiko-private-key" (OuterVolumeSpecName: "tobiko-private-key") pod "30581517-aaf5-4565-a879-47605e56918c" (UID: "30581517-aaf5-4565-a879-47605e56918c"). InnerVolumeSpecName "tobiko-private-key". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.666463 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30581517-aaf5-4565-a879-47605e56918c-tobiko-config" (OuterVolumeSpecName: "tobiko-config") pod "30581517-aaf5-4565-a879-47605e56918c" (UID: "30581517-aaf5-4565-a879-47605e56918c"). InnerVolumeSpecName "tobiko-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.667881 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30581517-aaf5-4565-a879-47605e56918c-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "30581517-aaf5-4565-a879-47605e56918c" (UID: "30581517-aaf5-4565-a879-47605e56918c"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.681043 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30581517-aaf5-4565-a879-47605e56918c-tobiko-public-key" (OuterVolumeSpecName: "tobiko-public-key") pod "30581517-aaf5-4565-a879-47605e56918c" (UID: "30581517-aaf5-4565-a879-47605e56918c"). 
InnerVolumeSpecName "tobiko-public-key". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.683603 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30581517-aaf5-4565-a879-47605e56918c-test-operator-clouds-config" (OuterVolumeSpecName: "test-operator-clouds-config") pod "30581517-aaf5-4565-a879-47605e56918c" (UID: "30581517-aaf5-4565-a879-47605e56918c"). InnerVolumeSpecName "test-operator-clouds-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.705945 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30581517-aaf5-4565-a879-47605e56918c-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "30581517-aaf5-4565-a879-47605e56918c" (UID: "30581517-aaf5-4565-a879-47605e56918c"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.732987 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/c571d756-ee78-4c86-9c51-2b5565fcc40e-ca-certs\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"c571d756-ee78-4c86-9c51-2b5565fcc40e\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.733069 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndjhg\" (UniqueName: \"kubernetes.io/projected/c571d756-ee78-4c86-9c51-2b5565fcc40e-kube-api-access-ndjhg\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"c571d756-ee78-4c86-9c51-2b5565fcc40e\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.733111 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"c571d756-ee78-4c86-9c51-2b5565fcc40e\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.733164 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c571d756-ee78-4c86-9c51-2b5565fcc40e-openstack-config-secret\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"c571d756-ee78-4c86-9c51-2b5565fcc40e\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.733200 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/c571d756-ee78-4c86-9c51-2b5565fcc40e-test-operator-ephemeral-workdir\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"c571d756-ee78-4c86-9c51-2b5565fcc40e\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.733247 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/c571d756-ee78-4c86-9c51-2b5565fcc40e-tobiko-public-key\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"c571d756-ee78-4c86-9c51-2b5565fcc40e\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.733275 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/c571d756-ee78-4c86-9c51-2b5565fcc40e-ceph\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"c571d756-ee78-4c86-9c51-2b5565fcc40e\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.733313 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/c571d756-ee78-4c86-9c51-2b5565fcc40e-tobiko-config\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"c571d756-ee78-4c86-9c51-2b5565fcc40e\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.733515 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/c571d756-ee78-4c86-9c51-2b5565fcc40e-tobiko-private-key\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"c571d756-ee78-4c86-9c51-2b5565fcc40e\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.733588 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/c571d756-ee78-4c86-9c51-2b5565fcc40e-kubeconfig\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"c571d756-ee78-4c86-9c51-2b5565fcc40e\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.733619 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/c571d756-ee78-4c86-9c51-2b5565fcc40e-test-operator-clouds-config\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"c571d756-ee78-4c86-9c51-2b5565fcc40e\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.733661 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/c571d756-ee78-4c86-9c51-2b5565fcc40e-test-operator-ephemeral-temporary\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"c571d756-ee78-4c86-9c51-2b5565fcc40e\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.733746 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fw6xx\" (UniqueName: \"kubernetes.io/projected/30581517-aaf5-4565-a879-47605e56918c-kube-api-access-fw6xx\") on node \"crc\" DevicePath \"\"" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.733763 4826 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/30581517-aaf5-4565-a879-47605e56918c-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.733776 4826 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/30581517-aaf5-4565-a879-47605e56918c-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.733784 4826 reconciler_common.go:293] "Volume detached for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/30581517-aaf5-4565-a879-47605e56918c-tobiko-public-key\") on node \"crc\" DevicePath \"\"" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.733794 4826 reconciler_common.go:293] "Volume detached for volume \"tobiko-config\" (UniqueName: 
\"kubernetes.io/configmap/30581517-aaf5-4565-a879-47605e56918c-tobiko-config\") on node \"crc\" DevicePath \"\"" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.733803 4826 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/30581517-aaf5-4565-a879-47605e56918c-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.733813 4826 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/30581517-aaf5-4565-a879-47605e56918c-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.733821 4826 reconciler_common.go:293] "Volume detached for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/30581517-aaf5-4565-a879-47605e56918c-test-operator-clouds-config\") on node \"crc\" DevicePath \"\"" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.733830 4826 reconciler_common.go:293] "Volume detached for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/30581517-aaf5-4565-a879-47605e56918c-kubeconfig\") on node \"crc\" DevicePath \"\"" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.733839 4826 reconciler_common.go:293] "Volume detached for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/30581517-aaf5-4565-a879-47605e56918c-tobiko-private-key\") on node \"crc\" DevicePath \"\"" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.734242 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/c571d756-ee78-4c86-9c51-2b5565fcc40e-test-operator-ephemeral-temporary\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"c571d756-ee78-4c86-9c51-2b5565fcc40e\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.734589 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/c571d756-ee78-4c86-9c51-2b5565fcc40e-tobiko-public-key\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"c571d756-ee78-4c86-9c51-2b5565fcc40e\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.735483 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/c571d756-ee78-4c86-9c51-2b5565fcc40e-tobiko-private-key\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"c571d756-ee78-4c86-9c51-2b5565fcc40e\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.736364 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/c571d756-ee78-4c86-9c51-2b5565fcc40e-test-operator-clouds-config\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"c571d756-ee78-4c86-9c51-2b5565fcc40e\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.736556 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/c571d756-ee78-4c86-9c51-2b5565fcc40e-tobiko-config\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"c571d756-ee78-4c86-9c51-2b5565fcc40e\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.736871 4826 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/c571d756-ee78-4c86-9c51-2b5565fcc40e-test-operator-ephemeral-workdir\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"c571d756-ee78-4c86-9c51-2b5565fcc40e\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.738867 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c571d756-ee78-4c86-9c51-2b5565fcc40e-openstack-config-secret\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"c571d756-ee78-4c86-9c51-2b5565fcc40e\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.738907 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c571d756-ee78-4c86-9c51-2b5565fcc40e-ceph\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"c571d756-ee78-4c86-9c51-2b5565fcc40e\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.739094 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/c571d756-ee78-4c86-9c51-2b5565fcc40e-kubeconfig\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"c571d756-ee78-4c86-9c51-2b5565fcc40e\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.741194 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/c571d756-ee78-4c86-9c51-2b5565fcc40e-ca-certs\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"c571d756-ee78-4c86-9c51-2b5565fcc40e\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.751682 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndjhg\" (UniqueName: \"kubernetes.io/projected/c571d756-ee78-4c86-9c51-2b5565fcc40e-kube-api-access-ndjhg\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"c571d756-ee78-4c86-9c51-2b5565fcc40e\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.776717 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"c571d756-ee78-4c86-9c51-2b5565fcc40e\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:44:39 crc kubenswrapper[4826]: I0131 08:44:39.847816 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:44:40 crc kubenswrapper[4826]: I0131 08:44:40.047580 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" event={"ID":"30581517-aaf5-4565-a879-47605e56918c","Type":"ContainerDied","Data":"fc8d346dcc955129cc20e862f0a4b15a6434543401a007d5779153aa9293a3fb"} Jan 31 08:44:40 crc kubenswrapper[4826]: I0131 08:44:40.048031 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc8d346dcc955129cc20e862f0a4b15a6434543401a007d5779153aa9293a3fb" Jan 31 08:44:40 crc kubenswrapper[4826]: I0131 08:44:40.048129 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Jan 31 08:44:40 crc kubenswrapper[4826]: I0131 08:44:40.425720 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tobiko-tests-tobiko-s01-sanity"] Jan 31 08:44:41 crc kubenswrapper[4826]: I0131 08:44:41.033318 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30581517-aaf5-4565-a879-47605e56918c-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "30581517-aaf5-4565-a879-47605e56918c" (UID: "30581517-aaf5-4565-a879-47605e56918c"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:44:41 crc kubenswrapper[4826]: I0131 08:44:41.056874 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tobiko-tests-tobiko-s01-sanity" event={"ID":"c571d756-ee78-4c86-9c51-2b5565fcc40e","Type":"ContainerStarted","Data":"6b4244d23ff4db7f7064e3393f25d77823f4f7124ced80d5ef371bfc946b4f09"} Jan 31 08:44:41 crc kubenswrapper[4826]: I0131 08:44:41.070608 4826 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/30581517-aaf5-4565-a879-47605e56918c-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 31 08:44:42 crc kubenswrapper[4826]: I0131 08:44:42.067789 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tobiko-tests-tobiko-s01-sanity" event={"ID":"c571d756-ee78-4c86-9c51-2b5565fcc40e","Type":"ContainerStarted","Data":"9765c74620f218d7b292b7afea5bcf3756c17aaf044767bbdc9145e7b04893a1"} Jan 31 08:44:42 crc kubenswrapper[4826]: I0131 08:44:42.092469 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tobiko-tests-tobiko-s01-sanity" podStartSLOduration=3.092448611 podStartE2EDuration="3.092448611s" podCreationTimestamp="2026-01-31 08:44:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 08:44:42.086562353 +0000 UTC m=+4113.940448712" watchObservedRunningTime="2026-01-31 08:44:42.092448611 +0000 UTC m=+4113.946334990" Jan 31 08:45:00 crc kubenswrapper[4826]: I0131 08:45:00.209051 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497485-jmtzg"] Jan 31 08:45:00 crc kubenswrapper[4826]: I0131 08:45:00.211401 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497485-jmtzg" Jan 31 08:45:00 crc kubenswrapper[4826]: I0131 08:45:00.219312 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 08:45:00 crc kubenswrapper[4826]: I0131 08:45:00.225862 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 08:45:00 crc kubenswrapper[4826]: I0131 08:45:00.230650 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497485-jmtzg"] Jan 31 08:45:00 crc kubenswrapper[4826]: I0131 08:45:00.300162 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/99aa10bb-69c8-4d50-88d8-2ab03d1a6f76-config-volume\") pod \"collect-profiles-29497485-jmtzg\" (UID: \"99aa10bb-69c8-4d50-88d8-2ab03d1a6f76\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497485-jmtzg" Jan 31 08:45:00 crc kubenswrapper[4826]: I0131 08:45:00.300213 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9qs2\" (UniqueName: \"kubernetes.io/projected/99aa10bb-69c8-4d50-88d8-2ab03d1a6f76-kube-api-access-h9qs2\") pod \"collect-profiles-29497485-jmtzg\" (UID: \"99aa10bb-69c8-4d50-88d8-2ab03d1a6f76\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497485-jmtzg" Jan 31 08:45:00 crc kubenswrapper[4826]: I0131 08:45:00.300314 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/99aa10bb-69c8-4d50-88d8-2ab03d1a6f76-secret-volume\") pod \"collect-profiles-29497485-jmtzg\" (UID: \"99aa10bb-69c8-4d50-88d8-2ab03d1a6f76\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497485-jmtzg" Jan 31 08:45:00 crc kubenswrapper[4826]: I0131 08:45:00.402409 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/99aa10bb-69c8-4d50-88d8-2ab03d1a6f76-config-volume\") pod \"collect-profiles-29497485-jmtzg\" (UID: \"99aa10bb-69c8-4d50-88d8-2ab03d1a6f76\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497485-jmtzg" Jan 31 08:45:00 crc kubenswrapper[4826]: I0131 08:45:00.402460 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9qs2\" (UniqueName: \"kubernetes.io/projected/99aa10bb-69c8-4d50-88d8-2ab03d1a6f76-kube-api-access-h9qs2\") pod \"collect-profiles-29497485-jmtzg\" (UID: \"99aa10bb-69c8-4d50-88d8-2ab03d1a6f76\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497485-jmtzg" Jan 31 08:45:00 crc kubenswrapper[4826]: I0131 08:45:00.402585 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/99aa10bb-69c8-4d50-88d8-2ab03d1a6f76-secret-volume\") pod \"collect-profiles-29497485-jmtzg\" (UID: \"99aa10bb-69c8-4d50-88d8-2ab03d1a6f76\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497485-jmtzg" Jan 31 08:45:00 crc kubenswrapper[4826]: I0131 08:45:00.403333 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/99aa10bb-69c8-4d50-88d8-2ab03d1a6f76-config-volume\") pod 
\"collect-profiles-29497485-jmtzg\" (UID: \"99aa10bb-69c8-4d50-88d8-2ab03d1a6f76\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497485-jmtzg" Jan 31 08:45:00 crc kubenswrapper[4826]: I0131 08:45:00.407686 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/99aa10bb-69c8-4d50-88d8-2ab03d1a6f76-secret-volume\") pod \"collect-profiles-29497485-jmtzg\" (UID: \"99aa10bb-69c8-4d50-88d8-2ab03d1a6f76\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497485-jmtzg" Jan 31 08:45:00 crc kubenswrapper[4826]: I0131 08:45:00.422881 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9qs2\" (UniqueName: \"kubernetes.io/projected/99aa10bb-69c8-4d50-88d8-2ab03d1a6f76-kube-api-access-h9qs2\") pod \"collect-profiles-29497485-jmtzg\" (UID: \"99aa10bb-69c8-4d50-88d8-2ab03d1a6f76\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497485-jmtzg" Jan 31 08:45:00 crc kubenswrapper[4826]: I0131 08:45:00.533408 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497485-jmtzg" Jan 31 08:45:00 crc kubenswrapper[4826]: I0131 08:45:00.966887 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497485-jmtzg"] Jan 31 08:45:00 crc kubenswrapper[4826]: W0131 08:45:00.974168 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99aa10bb_69c8_4d50_88d8_2ab03d1a6f76.slice/crio-cfa4877530f4bf95e25068a6c8cdaf015a0b5248e12d8daff91ab537d578ce00 WatchSource:0}: Error finding container cfa4877530f4bf95e25068a6c8cdaf015a0b5248e12d8daff91ab537d578ce00: Status 404 returned error can't find the container with id cfa4877530f4bf95e25068a6c8cdaf015a0b5248e12d8daff91ab537d578ce00 Jan 31 08:45:01 crc kubenswrapper[4826]: I0131 08:45:01.253803 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497485-jmtzg" event={"ID":"99aa10bb-69c8-4d50-88d8-2ab03d1a6f76","Type":"ContainerStarted","Data":"65ab2af6ab462e12e581df580b8aed3e56cb923c6a80b9412fd60e527c85204f"} Jan 31 08:45:01 crc kubenswrapper[4826]: I0131 08:45:01.254200 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497485-jmtzg" event={"ID":"99aa10bb-69c8-4d50-88d8-2ab03d1a6f76","Type":"ContainerStarted","Data":"cfa4877530f4bf95e25068a6c8cdaf015a0b5248e12d8daff91ab537d578ce00"} Jan 31 08:45:01 crc kubenswrapper[4826]: I0131 08:45:01.268537 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29497485-jmtzg" podStartSLOduration=1.268516032 podStartE2EDuration="1.268516032s" podCreationTimestamp="2026-01-31 08:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 08:45:01.267586896 +0000 UTC m=+4133.121473255" watchObservedRunningTime="2026-01-31 08:45:01.268516032 +0000 UTC m=+4133.122402491" Jan 31 08:45:02 crc kubenswrapper[4826]: I0131 08:45:02.263315 4826 generic.go:334] "Generic (PLEG): container finished" podID="99aa10bb-69c8-4d50-88d8-2ab03d1a6f76" containerID="65ab2af6ab462e12e581df580b8aed3e56cb923c6a80b9412fd60e527c85204f" exitCode=0 Jan 31 08:45:02 crc kubenswrapper[4826]: I0131 08:45:02.263366 
4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497485-jmtzg" event={"ID":"99aa10bb-69c8-4d50-88d8-2ab03d1a6f76","Type":"ContainerDied","Data":"65ab2af6ab462e12e581df580b8aed3e56cb923c6a80b9412fd60e527c85204f"} Jan 31 08:45:03 crc kubenswrapper[4826]: I0131 08:45:03.640254 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497485-jmtzg" Jan 31 08:45:03 crc kubenswrapper[4826]: I0131 08:45:03.768580 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/99aa10bb-69c8-4d50-88d8-2ab03d1a6f76-secret-volume\") pod \"99aa10bb-69c8-4d50-88d8-2ab03d1a6f76\" (UID: \"99aa10bb-69c8-4d50-88d8-2ab03d1a6f76\") " Jan 31 08:45:03 crc kubenswrapper[4826]: I0131 08:45:03.768641 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9qs2\" (UniqueName: \"kubernetes.io/projected/99aa10bb-69c8-4d50-88d8-2ab03d1a6f76-kube-api-access-h9qs2\") pod \"99aa10bb-69c8-4d50-88d8-2ab03d1a6f76\" (UID: \"99aa10bb-69c8-4d50-88d8-2ab03d1a6f76\") " Jan 31 08:45:03 crc kubenswrapper[4826]: I0131 08:45:03.768672 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/99aa10bb-69c8-4d50-88d8-2ab03d1a6f76-config-volume\") pod \"99aa10bb-69c8-4d50-88d8-2ab03d1a6f76\" (UID: \"99aa10bb-69c8-4d50-88d8-2ab03d1a6f76\") " Jan 31 08:45:03 crc kubenswrapper[4826]: I0131 08:45:03.769681 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99aa10bb-69c8-4d50-88d8-2ab03d1a6f76-config-volume" (OuterVolumeSpecName: "config-volume") pod "99aa10bb-69c8-4d50-88d8-2ab03d1a6f76" (UID: "99aa10bb-69c8-4d50-88d8-2ab03d1a6f76"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 08:45:03 crc kubenswrapper[4826]: I0131 08:45:03.774905 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99aa10bb-69c8-4d50-88d8-2ab03d1a6f76-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "99aa10bb-69c8-4d50-88d8-2ab03d1a6f76" (UID: "99aa10bb-69c8-4d50-88d8-2ab03d1a6f76"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:45:03 crc kubenswrapper[4826]: I0131 08:45:03.777424 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99aa10bb-69c8-4d50-88d8-2ab03d1a6f76-kube-api-access-h9qs2" (OuterVolumeSpecName: "kube-api-access-h9qs2") pod "99aa10bb-69c8-4d50-88d8-2ab03d1a6f76" (UID: "99aa10bb-69c8-4d50-88d8-2ab03d1a6f76"). InnerVolumeSpecName "kube-api-access-h9qs2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:45:03 crc kubenswrapper[4826]: I0131 08:45:03.871486 4826 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/99aa10bb-69c8-4d50-88d8-2ab03d1a6f76-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 08:45:03 crc kubenswrapper[4826]: I0131 08:45:03.871529 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9qs2\" (UniqueName: \"kubernetes.io/projected/99aa10bb-69c8-4d50-88d8-2ab03d1a6f76-kube-api-access-h9qs2\") on node \"crc\" DevicePath \"\"" Jan 31 08:45:03 crc kubenswrapper[4826]: I0131 08:45:03.871544 4826 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/99aa10bb-69c8-4d50-88d8-2ab03d1a6f76-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 08:45:04 crc kubenswrapper[4826]: I0131 08:45:04.283309 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497485-jmtzg" event={"ID":"99aa10bb-69c8-4d50-88d8-2ab03d1a6f76","Type":"ContainerDied","Data":"cfa4877530f4bf95e25068a6c8cdaf015a0b5248e12d8daff91ab537d578ce00"} Jan 31 08:45:04 crc kubenswrapper[4826]: I0131 08:45:04.283356 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfa4877530f4bf95e25068a6c8cdaf015a0b5248e12d8daff91ab537d578ce00" Jan 31 08:45:04 crc kubenswrapper[4826]: I0131 08:45:04.283413 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497485-jmtzg" Jan 31 08:45:04 crc kubenswrapper[4826]: I0131 08:45:04.343520 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497440-74slg"] Jan 31 08:45:04 crc kubenswrapper[4826]: I0131 08:45:04.351065 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497440-74slg"] Jan 31 08:45:04 crc kubenswrapper[4826]: I0131 08:45:04.818896 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="450cfa0c-8bd8-4400-8c22-044409770c26" path="/var/lib/kubelet/pods/450cfa0c-8bd8-4400-8c22-044409770c26/volumes" Jan 31 08:45:27 crc kubenswrapper[4826]: I0131 08:45:27.377856 4826 patch_prober.go:28] interesting pod/machine-config-daemon-8v6ng container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 08:45:27 crc kubenswrapper[4826]: I0131 08:45:27.378715 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 08:45:34 crc kubenswrapper[4826]: I0131 08:45:34.301361 4826 scope.go:117] "RemoveContainer" containerID="fd4c514394b6cab65fe5ed14d8932a539929896ea5ad0499552d24475af459ca" Jan 31 08:45:57 crc kubenswrapper[4826]: I0131 08:45:57.377503 4826 patch_prober.go:28] interesting pod/machine-config-daemon-8v6ng container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Jan 31 08:45:57 crc kubenswrapper[4826]: I0131 08:45:57.378203 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 08:46:13 crc kubenswrapper[4826]: I0131 08:46:13.951354 4826 generic.go:334] "Generic (PLEG): container finished" podID="c571d756-ee78-4c86-9c51-2b5565fcc40e" containerID="9765c74620f218d7b292b7afea5bcf3756c17aaf044767bbdc9145e7b04893a1" exitCode=0 Jan 31 08:46:13 crc kubenswrapper[4826]: I0131 08:46:13.951450 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tobiko-tests-tobiko-s01-sanity" event={"ID":"c571d756-ee78-4c86-9c51-2b5565fcc40e","Type":"ContainerDied","Data":"9765c74620f218d7b292b7afea5bcf3756c17aaf044767bbdc9145e7b04893a1"} Jan 31 08:46:15 crc kubenswrapper[4826]: I0131 08:46:15.441703 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:46:15 crc kubenswrapper[4826]: I0131 08:46:15.526376 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/c571d756-ee78-4c86-9c51-2b5565fcc40e-tobiko-public-key\") pod \"c571d756-ee78-4c86-9c51-2b5565fcc40e\" (UID: \"c571d756-ee78-4c86-9c51-2b5565fcc40e\") " Jan 31 08:46:15 crc kubenswrapper[4826]: I0131 08:46:15.526687 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c571d756-ee78-4c86-9c51-2b5565fcc40e-ceph\") pod \"c571d756-ee78-4c86-9c51-2b5565fcc40e\" (UID: \"c571d756-ee78-4c86-9c51-2b5565fcc40e\") " Jan 31 08:46:15 crc kubenswrapper[4826]: I0131 08:46:15.526861 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/c571d756-ee78-4c86-9c51-2b5565fcc40e-ca-certs\") pod \"c571d756-ee78-4c86-9c51-2b5565fcc40e\" (UID: \"c571d756-ee78-4c86-9c51-2b5565fcc40e\") " Jan 31 08:46:15 crc kubenswrapper[4826]: I0131 08:46:15.527447 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndjhg\" (UniqueName: \"kubernetes.io/projected/c571d756-ee78-4c86-9c51-2b5565fcc40e-kube-api-access-ndjhg\") pod \"c571d756-ee78-4c86-9c51-2b5565fcc40e\" (UID: \"c571d756-ee78-4c86-9c51-2b5565fcc40e\") " Jan 31 08:46:15 crc kubenswrapper[4826]: I0131 08:46:15.527843 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/c571d756-ee78-4c86-9c51-2b5565fcc40e-test-operator-ephemeral-temporary\") pod \"c571d756-ee78-4c86-9c51-2b5565fcc40e\" (UID: \"c571d756-ee78-4c86-9c51-2b5565fcc40e\") " Jan 31 08:46:15 crc kubenswrapper[4826]: I0131 08:46:15.527984 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/c571d756-ee78-4c86-9c51-2b5565fcc40e-tobiko-private-key\") pod \"c571d756-ee78-4c86-9c51-2b5565fcc40e\" (UID: \"c571d756-ee78-4c86-9c51-2b5565fcc40e\") " Jan 31 08:46:15 crc kubenswrapper[4826]: I0131 08:46:15.528073 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage05-crc\") pod \"c571d756-ee78-4c86-9c51-2b5565fcc40e\" (UID: \"c571d756-ee78-4c86-9c51-2b5565fcc40e\") " Jan 31 08:46:15 crc kubenswrapper[4826]: I0131 08:46:15.528286 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/c571d756-ee78-4c86-9c51-2b5565fcc40e-tobiko-config\") pod \"c571d756-ee78-4c86-9c51-2b5565fcc40e\" (UID: \"c571d756-ee78-4c86-9c51-2b5565fcc40e\") " Jan 31 08:46:15 crc kubenswrapper[4826]: I0131 08:46:15.528385 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/c571d756-ee78-4c86-9c51-2b5565fcc40e-test-operator-ephemeral-workdir\") pod \"c571d756-ee78-4c86-9c51-2b5565fcc40e\" (UID: \"c571d756-ee78-4c86-9c51-2b5565fcc40e\") " Jan 31 08:46:15 crc kubenswrapper[4826]: I0131 08:46:15.528590 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/c571d756-ee78-4c86-9c51-2b5565fcc40e-test-operator-clouds-config\") pod \"c571d756-ee78-4c86-9c51-2b5565fcc40e\" (UID: \"c571d756-ee78-4c86-9c51-2b5565fcc40e\") " Jan 31 08:46:15 crc kubenswrapper[4826]: I0131 08:46:15.528685 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c571d756-ee78-4c86-9c51-2b5565fcc40e-openstack-config-secret\") pod \"c571d756-ee78-4c86-9c51-2b5565fcc40e\" (UID: \"c571d756-ee78-4c86-9c51-2b5565fcc40e\") " Jan 31 08:46:15 crc kubenswrapper[4826]: I0131 08:46:15.528749 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/c571d756-ee78-4c86-9c51-2b5565fcc40e-kubeconfig\") pod \"c571d756-ee78-4c86-9c51-2b5565fcc40e\" (UID: \"c571d756-ee78-4c86-9c51-2b5565fcc40e\") " Jan 31 08:46:15 crc kubenswrapper[4826]: I0131 08:46:15.528820 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c571d756-ee78-4c86-9c51-2b5565fcc40e-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "c571d756-ee78-4c86-9c51-2b5565fcc40e" (UID: "c571d756-ee78-4c86-9c51-2b5565fcc40e"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:46:15 crc kubenswrapper[4826]: I0131 08:46:15.529530 4826 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/c571d756-ee78-4c86-9c51-2b5565fcc40e-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 31 08:46:15 crc kubenswrapper[4826]: I0131 08:46:15.545070 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c571d756-ee78-4c86-9c51-2b5565fcc40e-kube-api-access-ndjhg" (OuterVolumeSpecName: "kube-api-access-ndjhg") pod "c571d756-ee78-4c86-9c51-2b5565fcc40e" (UID: "c571d756-ee78-4c86-9c51-2b5565fcc40e"). InnerVolumeSpecName "kube-api-access-ndjhg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:46:15 crc kubenswrapper[4826]: I0131 08:46:15.545097 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c571d756-ee78-4c86-9c51-2b5565fcc40e-ceph" (OuterVolumeSpecName: "ceph") pod "c571d756-ee78-4c86-9c51-2b5565fcc40e" (UID: "c571d756-ee78-4c86-9c51-2b5565fcc40e"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:46:15 crc kubenswrapper[4826]: I0131 08:46:15.545071 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "test-operator-logs") pod "c571d756-ee78-4c86-9c51-2b5565fcc40e" (UID: "c571d756-ee78-4c86-9c51-2b5565fcc40e"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 08:46:15 crc kubenswrapper[4826]: I0131 08:46:15.565880 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c571d756-ee78-4c86-9c51-2b5565fcc40e-tobiko-config" (OuterVolumeSpecName: "tobiko-config") pod "c571d756-ee78-4c86-9c51-2b5565fcc40e" (UID: "c571d756-ee78-4c86-9c51-2b5565fcc40e"). InnerVolumeSpecName "tobiko-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 08:46:15 crc kubenswrapper[4826]: I0131 08:46:15.565912 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c571d756-ee78-4c86-9c51-2b5565fcc40e-tobiko-private-key" (OuterVolumeSpecName: "tobiko-private-key") pod "c571d756-ee78-4c86-9c51-2b5565fcc40e" (UID: "c571d756-ee78-4c86-9c51-2b5565fcc40e"). InnerVolumeSpecName "tobiko-private-key". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 08:46:15 crc kubenswrapper[4826]: I0131 08:46:15.570444 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c571d756-ee78-4c86-9c51-2b5565fcc40e-kubeconfig" (OuterVolumeSpecName: "kubeconfig") pod "c571d756-ee78-4c86-9c51-2b5565fcc40e" (UID: "c571d756-ee78-4c86-9c51-2b5565fcc40e"). InnerVolumeSpecName "kubeconfig". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:46:15 crc kubenswrapper[4826]: I0131 08:46:15.572705 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c571d756-ee78-4c86-9c51-2b5565fcc40e-tobiko-public-key" (OuterVolumeSpecName: "tobiko-public-key") pod "c571d756-ee78-4c86-9c51-2b5565fcc40e" (UID: "c571d756-ee78-4c86-9c51-2b5565fcc40e"). InnerVolumeSpecName "tobiko-public-key". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 08:46:15 crc kubenswrapper[4826]: I0131 08:46:15.581462 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c571d756-ee78-4c86-9c51-2b5565fcc40e-test-operator-clouds-config" (OuterVolumeSpecName: "test-operator-clouds-config") pod "c571d756-ee78-4c86-9c51-2b5565fcc40e" (UID: "c571d756-ee78-4c86-9c51-2b5565fcc40e"). InnerVolumeSpecName "test-operator-clouds-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 08:46:15 crc kubenswrapper[4826]: I0131 08:46:15.592239 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c571d756-ee78-4c86-9c51-2b5565fcc40e-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "c571d756-ee78-4c86-9c51-2b5565fcc40e" (UID: "c571d756-ee78-4c86-9c51-2b5565fcc40e"). InnerVolumeSpecName "ca-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:46:15 crc kubenswrapper[4826]: I0131 08:46:15.594980 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c571d756-ee78-4c86-9c51-2b5565fcc40e-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "c571d756-ee78-4c86-9c51-2b5565fcc40e" (UID: "c571d756-ee78-4c86-9c51-2b5565fcc40e"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:46:15 crc kubenswrapper[4826]: I0131 08:46:15.636954 4826 reconciler_common.go:293] "Volume detached for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/c571d756-ee78-4c86-9c51-2b5565fcc40e-test-operator-clouds-config\") on node \"crc\" DevicePath \"\"" Jan 31 08:46:15 crc kubenswrapper[4826]: I0131 08:46:15.637095 4826 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c571d756-ee78-4c86-9c51-2b5565fcc40e-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 31 08:46:15 crc kubenswrapper[4826]: I0131 08:46:15.637176 4826 reconciler_common.go:293] "Volume detached for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/c571d756-ee78-4c86-9c51-2b5565fcc40e-kubeconfig\") on node \"crc\" DevicePath \"\"" Jan 31 08:46:15 crc kubenswrapper[4826]: I0131 08:46:15.637237 4826 reconciler_common.go:293] "Volume detached for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/c571d756-ee78-4c86-9c51-2b5565fcc40e-tobiko-public-key\") on node \"crc\" DevicePath \"\"" Jan 31 08:46:15 crc kubenswrapper[4826]: I0131 08:46:15.637451 4826 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c571d756-ee78-4c86-9c51-2b5565fcc40e-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 08:46:15 crc kubenswrapper[4826]: I0131 08:46:15.637539 4826 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/c571d756-ee78-4c86-9c51-2b5565fcc40e-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 31 08:46:15 crc kubenswrapper[4826]: I0131 08:46:15.637612 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndjhg\" (UniqueName: \"kubernetes.io/projected/c571d756-ee78-4c86-9c51-2b5565fcc40e-kube-api-access-ndjhg\") on node \"crc\" DevicePath \"\"" Jan 31 08:46:15 crc kubenswrapper[4826]: I0131 08:46:15.637672 4826 reconciler_common.go:293] "Volume detached for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/c571d756-ee78-4c86-9c51-2b5565fcc40e-tobiko-private-key\") on node \"crc\" DevicePath \"\"" Jan 31 08:46:15 crc kubenswrapper[4826]: I0131 08:46:15.637753 4826 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 31 08:46:15 crc kubenswrapper[4826]: I0131 08:46:15.637826 4826 reconciler_common.go:293] "Volume detached for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/c571d756-ee78-4c86-9c51-2b5565fcc40e-tobiko-config\") on node \"crc\" DevicePath \"\"" Jan 31 08:46:15 crc kubenswrapper[4826]: I0131 08:46:15.658909 4826 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 31 08:46:15 crc kubenswrapper[4826]: I0131 08:46:15.742493 4826 reconciler_common.go:293] "Volume detached for volume 
\"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 31 08:46:15 crc kubenswrapper[4826]: I0131 08:46:15.986304 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tobiko-tests-tobiko-s01-sanity" event={"ID":"c571d756-ee78-4c86-9c51-2b5565fcc40e","Type":"ContainerDied","Data":"6b4244d23ff4db7f7064e3393f25d77823f4f7124ced80d5ef371bfc946b4f09"} Jan 31 08:46:15 crc kubenswrapper[4826]: I0131 08:46:15.986357 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b4244d23ff4db7f7064e3393f25d77823f4f7124ced80d5ef371bfc946b4f09" Jan 31 08:46:15 crc kubenswrapper[4826]: I0131 08:46:15.986475 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tobiko-tests-tobiko-s01-sanity" Jan 31 08:46:16 crc kubenswrapper[4826]: I0131 08:46:16.856102 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c571d756-ee78-4c86-9c51-2b5565fcc40e-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "c571d756-ee78-4c86-9c51-2b5565fcc40e" (UID: "c571d756-ee78-4c86-9c51-2b5565fcc40e"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:46:16 crc kubenswrapper[4826]: I0131 08:46:16.865772 4826 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/c571d756-ee78-4c86-9c51-2b5565fcc40e-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 31 08:46:23 crc kubenswrapper[4826]: I0131 08:46:23.887140 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko"] Jan 31 08:46:23 crc kubenswrapper[4826]: E0131 08:46:23.888350 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c571d756-ee78-4c86-9c51-2b5565fcc40e" containerName="tobiko-tests-tobiko" Jan 31 08:46:23 crc kubenswrapper[4826]: I0131 08:46:23.888372 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="c571d756-ee78-4c86-9c51-2b5565fcc40e" containerName="tobiko-tests-tobiko" Jan 31 08:46:23 crc kubenswrapper[4826]: E0131 08:46:23.888418 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99aa10bb-69c8-4d50-88d8-2ab03d1a6f76" containerName="collect-profiles" Jan 31 08:46:23 crc kubenswrapper[4826]: I0131 08:46:23.888428 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="99aa10bb-69c8-4d50-88d8-2ab03d1a6f76" containerName="collect-profiles" Jan 31 08:46:23 crc kubenswrapper[4826]: I0131 08:46:23.888608 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="99aa10bb-69c8-4d50-88d8-2ab03d1a6f76" containerName="collect-profiles" Jan 31 08:46:23 crc kubenswrapper[4826]: I0131 08:46:23.888629 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="c571d756-ee78-4c86-9c51-2b5565fcc40e" containerName="tobiko-tests-tobiko" Jan 31 08:46:23 crc kubenswrapper[4826]: I0131 08:46:23.889436 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Jan 31 08:46:23 crc kubenswrapper[4826]: I0131 08:46:23.897370 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko"] Jan 31 08:46:24 crc kubenswrapper[4826]: I0131 08:46:24.025570 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcd4z\" (UniqueName: \"kubernetes.io/projected/cfa22288-274c-4eac-9718-643e63bd02d4-kube-api-access-rcd4z\") pod \"test-operator-logs-pod-tobiko-tobiko-tests-tobiko\" (UID: \"cfa22288-274c-4eac-9718-643e63bd02d4\") " pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Jan 31 08:46:24 crc kubenswrapper[4826]: I0131 08:46:24.026237 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tobiko-tobiko-tests-tobiko\" (UID: \"cfa22288-274c-4eac-9718-643e63bd02d4\") " pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Jan 31 08:46:24 crc kubenswrapper[4826]: I0131 08:46:24.128030 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tobiko-tobiko-tests-tobiko\" (UID: \"cfa22288-274c-4eac-9718-643e63bd02d4\") " pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Jan 31 08:46:24 crc kubenswrapper[4826]: I0131 08:46:24.128090 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcd4z\" (UniqueName: \"kubernetes.io/projected/cfa22288-274c-4eac-9718-643e63bd02d4-kube-api-access-rcd4z\") pod \"test-operator-logs-pod-tobiko-tobiko-tests-tobiko\" (UID: \"cfa22288-274c-4eac-9718-643e63bd02d4\") " pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Jan 31 08:46:24 crc kubenswrapper[4826]: I0131 08:46:24.128644 4826 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tobiko-tobiko-tests-tobiko\" (UID: \"cfa22288-274c-4eac-9718-643e63bd02d4\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Jan 31 08:46:24 crc kubenswrapper[4826]: I0131 08:46:24.150729 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcd4z\" (UniqueName: \"kubernetes.io/projected/cfa22288-274c-4eac-9718-643e63bd02d4-kube-api-access-rcd4z\") pod \"test-operator-logs-pod-tobiko-tobiko-tests-tobiko\" (UID: \"cfa22288-274c-4eac-9718-643e63bd02d4\") " pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Jan 31 08:46:24 crc kubenswrapper[4826]: I0131 08:46:24.154019 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tobiko-tobiko-tests-tobiko\" (UID: \"cfa22288-274c-4eac-9718-643e63bd02d4\") " pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Jan 31 08:46:24 crc kubenswrapper[4826]: I0131 08:46:24.217995 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Jan 31 08:46:24 crc kubenswrapper[4826]: I0131 08:46:24.669834 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko"] Jan 31 08:46:24 crc kubenswrapper[4826]: I0131 08:46:24.681394 4826 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 08:46:25 crc kubenswrapper[4826]: I0131 08:46:25.074561 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" event={"ID":"cfa22288-274c-4eac-9718-643e63bd02d4","Type":"ContainerStarted","Data":"f6a59a3c05b135c5c326b7d25ed9b5c5e318ea55e34eae3282de8aea884a24d9"} Jan 31 08:46:26 crc kubenswrapper[4826]: I0131 08:46:26.086854 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" event={"ID":"cfa22288-274c-4eac-9718-643e63bd02d4","Type":"ContainerStarted","Data":"a21a606fdbed5fe8277b2ce5396105ce28ee6e3b8ca581b3d5093a993ab9b12e"} Jan 31 08:46:26 crc kubenswrapper[4826]: I0131 08:46:26.113127 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" podStartSLOduration=2.572240736 podStartE2EDuration="3.113103225s" podCreationTimestamp="2026-01-31 08:46:23 +0000 UTC" firstStartedPulling="2026-01-31 08:46:24.680276638 +0000 UTC m=+4216.534163027" lastFinishedPulling="2026-01-31 08:46:25.221139157 +0000 UTC m=+4217.075025516" observedRunningTime="2026-01-31 08:46:26.105840668 +0000 UTC m=+4217.959727027" watchObservedRunningTime="2026-01-31 08:46:26.113103225 +0000 UTC m=+4217.966989584" Jan 31 08:46:27 crc kubenswrapper[4826]: I0131 08:46:27.376589 4826 patch_prober.go:28] interesting pod/machine-config-daemon-8v6ng container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 08:46:27 crc kubenswrapper[4826]: I0131 08:46:27.377007 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 08:46:27 crc kubenswrapper[4826]: I0131 08:46:27.377064 4826 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" Jan 31 08:46:27 crc kubenswrapper[4826]: I0131 08:46:27.377718 4826 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"96d06b90822cc52e7495a180d9316b34bd33c2965646848de03879b26d4192b0"} pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 08:46:27 crc kubenswrapper[4826]: I0131 08:46:27.377776 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" containerID="cri-o://96d06b90822cc52e7495a180d9316b34bd33c2965646848de03879b26d4192b0" gracePeriod=600 Jan 31 
08:46:28 crc kubenswrapper[4826]: I0131 08:46:28.112893 4826 generic.go:334] "Generic (PLEG): container finished" podID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerID="96d06b90822cc52e7495a180d9316b34bd33c2965646848de03879b26d4192b0" exitCode=0 Jan 31 08:46:28 crc kubenswrapper[4826]: I0131 08:46:28.112996 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" event={"ID":"ed10f53b-565a-4d14-a1d8-feabc15f08ea","Type":"ContainerDied","Data":"96d06b90822cc52e7495a180d9316b34bd33c2965646848de03879b26d4192b0"} Jan 31 08:46:28 crc kubenswrapper[4826]: I0131 08:46:28.113272 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" event={"ID":"ed10f53b-565a-4d14-a1d8-feabc15f08ea","Type":"ContainerStarted","Data":"7eba4a1e5e418a5b89ba4b523b59b3f4c42351dd83f53bcdc9e5ec984ec59162"} Jan 31 08:46:28 crc kubenswrapper[4826]: I0131 08:46:28.113301 4826 scope.go:117] "RemoveContainer" containerID="e50c5a21de69947e5e4158ecd9eb1c18ce3f33a5d5d37aba1dbb26c3efbfdb2e" Jan 31 08:46:38 crc kubenswrapper[4826]: I0131 08:46:38.000702 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ansibletest-ansibletest"] Jan 31 08:46:38 crc kubenswrapper[4826]: I0131 08:46:38.029212 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ansibletest-ansibletest" Jan 31 08:46:38 crc kubenswrapper[4826]: I0131 08:46:38.036056 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ansibletest-ansibletest"] Jan 31 08:46:38 crc kubenswrapper[4826]: I0131 08:46:38.036837 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 31 08:46:38 crc kubenswrapper[4826]: I0131 08:46:38.039075 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 08:46:38 crc kubenswrapper[4826]: I0131 08:46:38.141237 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"workload-ssh-secret\" (UniqueName: \"kubernetes.io/secret/e047424f-d695-4ef5-b24f-0150fd27964c-workload-ssh-secret\") pod \"ansibletest-ansibletest\" (UID: \"e047424f-d695-4ef5-b24f-0150fd27964c\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:46:38 crc kubenswrapper[4826]: I0131 08:46:38.141674 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e047424f-d695-4ef5-b24f-0150fd27964c-ca-certs\") pod \"ansibletest-ansibletest\" (UID: \"e047424f-d695-4ef5-b24f-0150fd27964c\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:46:38 crc kubenswrapper[4826]: I0131 08:46:38.141802 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e047424f-d695-4ef5-b24f-0150fd27964c-openstack-config\") pod \"ansibletest-ansibletest\" (UID: \"e047424f-d695-4ef5-b24f-0150fd27964c\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:46:38 crc kubenswrapper[4826]: I0131 08:46:38.142093 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e047424f-d695-4ef5-b24f-0150fd27964c-test-operator-ephemeral-workdir\") pod \"ansibletest-ansibletest\" (UID: \"e047424f-d695-4ef5-b24f-0150fd27964c\") " 
pod="openstack/ansibletest-ansibletest" Jan 31 08:46:38 crc kubenswrapper[4826]: I0131 08:46:38.142305 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"compute-ssh-secret\" (UniqueName: \"kubernetes.io/secret/e047424f-d695-4ef5-b24f-0150fd27964c-compute-ssh-secret\") pod \"ansibletest-ansibletest\" (UID: \"e047424f-d695-4ef5-b24f-0150fd27964c\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:46:38 crc kubenswrapper[4826]: I0131 08:46:38.142589 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e047424f-d695-4ef5-b24f-0150fd27964c-test-operator-ephemeral-temporary\") pod \"ansibletest-ansibletest\" (UID: \"e047424f-d695-4ef5-b24f-0150fd27964c\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:46:38 crc kubenswrapper[4826]: I0131 08:46:38.142721 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e047424f-d695-4ef5-b24f-0150fd27964c-ceph\") pod \"ansibletest-ansibletest\" (UID: \"e047424f-d695-4ef5-b24f-0150fd27964c\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:46:38 crc kubenswrapper[4826]: I0131 08:46:38.142897 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e047424f-d695-4ef5-b24f-0150fd27964c-openstack-config-secret\") pod \"ansibletest-ansibletest\" (UID: \"e047424f-d695-4ef5-b24f-0150fd27964c\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:46:38 crc kubenswrapper[4826]: I0131 08:46:38.143376 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwpwv\" (UniqueName: \"kubernetes.io/projected/e047424f-d695-4ef5-b24f-0150fd27964c-kube-api-access-kwpwv\") pod \"ansibletest-ansibletest\" (UID: \"e047424f-d695-4ef5-b24f-0150fd27964c\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:46:38 crc kubenswrapper[4826]: I0131 08:46:38.143564 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ansibletest-ansibletest\" (UID: \"e047424f-d695-4ef5-b24f-0150fd27964c\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:46:38 crc kubenswrapper[4826]: I0131 08:46:38.245597 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e047424f-d695-4ef5-b24f-0150fd27964c-test-operator-ephemeral-temporary\") pod \"ansibletest-ansibletest\" (UID: \"e047424f-d695-4ef5-b24f-0150fd27964c\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:46:38 crc kubenswrapper[4826]: I0131 08:46:38.245895 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e047424f-d695-4ef5-b24f-0150fd27964c-ceph\") pod \"ansibletest-ansibletest\" (UID: \"e047424f-d695-4ef5-b24f-0150fd27964c\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:46:38 crc kubenswrapper[4826]: I0131 08:46:38.246022 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e047424f-d695-4ef5-b24f-0150fd27964c-openstack-config-secret\") pod \"ansibletest-ansibletest\" (UID: 
\"e047424f-d695-4ef5-b24f-0150fd27964c\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:46:38 crc kubenswrapper[4826]: I0131 08:46:38.246110 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwpwv\" (UniqueName: \"kubernetes.io/projected/e047424f-d695-4ef5-b24f-0150fd27964c-kube-api-access-kwpwv\") pod \"ansibletest-ansibletest\" (UID: \"e047424f-d695-4ef5-b24f-0150fd27964c\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:46:38 crc kubenswrapper[4826]: I0131 08:46:38.246185 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ansibletest-ansibletest\" (UID: \"e047424f-d695-4ef5-b24f-0150fd27964c\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:46:38 crc kubenswrapper[4826]: I0131 08:46:38.246358 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"workload-ssh-secret\" (UniqueName: \"kubernetes.io/secret/e047424f-d695-4ef5-b24f-0150fd27964c-workload-ssh-secret\") pod \"ansibletest-ansibletest\" (UID: \"e047424f-d695-4ef5-b24f-0150fd27964c\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:46:38 crc kubenswrapper[4826]: I0131 08:46:38.246458 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e047424f-d695-4ef5-b24f-0150fd27964c-ca-certs\") pod \"ansibletest-ansibletest\" (UID: \"e047424f-d695-4ef5-b24f-0150fd27964c\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:46:38 crc kubenswrapper[4826]: I0131 08:46:38.246763 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e047424f-d695-4ef5-b24f-0150fd27964c-openstack-config\") pod \"ansibletest-ansibletest\" (UID: \"e047424f-d695-4ef5-b24f-0150fd27964c\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:46:38 crc kubenswrapper[4826]: I0131 08:46:38.246855 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e047424f-d695-4ef5-b24f-0150fd27964c-test-operator-ephemeral-workdir\") pod \"ansibletest-ansibletest\" (UID: \"e047424f-d695-4ef5-b24f-0150fd27964c\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:46:38 crc kubenswrapper[4826]: I0131 08:46:38.246961 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"compute-ssh-secret\" (UniqueName: \"kubernetes.io/secret/e047424f-d695-4ef5-b24f-0150fd27964c-compute-ssh-secret\") pod \"ansibletest-ansibletest\" (UID: \"e047424f-d695-4ef5-b24f-0150fd27964c\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:46:38 crc kubenswrapper[4826]: I0131 08:46:38.246531 4826 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ansibletest-ansibletest\" (UID: \"e047424f-d695-4ef5-b24f-0150fd27964c\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ansibletest-ansibletest" Jan 31 08:46:38 crc kubenswrapper[4826]: I0131 08:46:38.247384 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e047424f-d695-4ef5-b24f-0150fd27964c-test-operator-ephemeral-temporary\") pod \"ansibletest-ansibletest\" (UID: \"e047424f-d695-4ef5-b24f-0150fd27964c\") " 
pod="openstack/ansibletest-ansibletest" Jan 31 08:46:38 crc kubenswrapper[4826]: I0131 08:46:38.247982 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e047424f-d695-4ef5-b24f-0150fd27964c-openstack-config\") pod \"ansibletest-ansibletest\" (UID: \"e047424f-d695-4ef5-b24f-0150fd27964c\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:46:38 crc kubenswrapper[4826]: I0131 08:46:38.248509 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e047424f-d695-4ef5-b24f-0150fd27964c-test-operator-ephemeral-workdir\") pod \"ansibletest-ansibletest\" (UID: \"e047424f-d695-4ef5-b24f-0150fd27964c\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:46:38 crc kubenswrapper[4826]: I0131 08:46:38.256493 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"workload-ssh-secret\" (UniqueName: \"kubernetes.io/secret/e047424f-d695-4ef5-b24f-0150fd27964c-workload-ssh-secret\") pod \"ansibletest-ansibletest\" (UID: \"e047424f-d695-4ef5-b24f-0150fd27964c\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:46:38 crc kubenswrapper[4826]: I0131 08:46:38.256508 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e047424f-d695-4ef5-b24f-0150fd27964c-openstack-config-secret\") pod \"ansibletest-ansibletest\" (UID: \"e047424f-d695-4ef5-b24f-0150fd27964c\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:46:38 crc kubenswrapper[4826]: I0131 08:46:38.256696 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e047424f-d695-4ef5-b24f-0150fd27964c-ceph\") pod \"ansibletest-ansibletest\" (UID: \"e047424f-d695-4ef5-b24f-0150fd27964c\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:46:38 crc kubenswrapper[4826]: I0131 08:46:38.257435 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e047424f-d695-4ef5-b24f-0150fd27964c-ca-certs\") pod \"ansibletest-ansibletest\" (UID: \"e047424f-d695-4ef5-b24f-0150fd27964c\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:46:38 crc kubenswrapper[4826]: I0131 08:46:38.258333 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"compute-ssh-secret\" (UniqueName: \"kubernetes.io/secret/e047424f-d695-4ef5-b24f-0150fd27964c-compute-ssh-secret\") pod \"ansibletest-ansibletest\" (UID: \"e047424f-d695-4ef5-b24f-0150fd27964c\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:46:38 crc kubenswrapper[4826]: I0131 08:46:38.265240 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwpwv\" (UniqueName: \"kubernetes.io/projected/e047424f-d695-4ef5-b24f-0150fd27964c-kube-api-access-kwpwv\") pod \"ansibletest-ansibletest\" (UID: \"e047424f-d695-4ef5-b24f-0150fd27964c\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:46:38 crc kubenswrapper[4826]: I0131 08:46:38.275603 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ansibletest-ansibletest\" (UID: \"e047424f-d695-4ef5-b24f-0150fd27964c\") " pod="openstack/ansibletest-ansibletest" Jan 31 08:46:38 crc kubenswrapper[4826]: I0131 08:46:38.354292 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ansibletest-ansibletest" Jan 31 08:46:38 crc kubenswrapper[4826]: I0131 08:46:38.806162 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ansibletest-ansibletest"] Jan 31 08:46:39 crc kubenswrapper[4826]: I0131 08:46:39.218242 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ansibletest-ansibletest" event={"ID":"e047424f-d695-4ef5-b24f-0150fd27964c","Type":"ContainerStarted","Data":"0f0e341093e64fbc090692ead20f728d2700e27dd30d70c1374da56047140cd3"} Jan 31 08:46:55 crc kubenswrapper[4826]: E0131 08:46:55.087115 4826 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ansible-tests:current-podified" Jan 31 08:46:55 crc kubenswrapper[4826]: E0131 08:46:55.087972 4826 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 31 08:46:55 crc kubenswrapper[4826]: container &Container{Name:ansibletest-ansibletest,Image:quay.io/podified-antelope-centos9/openstack-ansible-tests:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:POD_ANSIBLE_EXTRA_VARS,Value:-e manual_run=false,ValueFrom:nil,},EnvVar{Name:POD_ANSIBLE_FILE_EXTRA_VARS,Value:--- Jan 31 08:46:55 crc kubenswrapper[4826]: foo: bar Jan 31 08:46:55 crc kubenswrapper[4826]: ,ValueFrom:nil,},EnvVar{Name:POD_ANSIBLE_GIT_BRANCH,Value:,ValueFrom:nil,},EnvVar{Name:POD_ANSIBLE_GIT_REPO,Value:https://github.com/ansible/test-playbooks,ValueFrom:nil,},EnvVar{Name:POD_ANSIBLE_INVENTORY,Value:localhost ansible_connection=local ansible_python_interpreter=python3 Jan 31 08:46:55 crc kubenswrapper[4826]: ,ValueFrom:nil,},EnvVar{Name:POD_ANSIBLE_PLAYBOOK,Value:./debug.yml,ValueFrom:nil,},EnvVar{Name:POD_DEBUG,Value:false,ValueFrom:nil,},EnvVar{Name:POD_INSTALL_COLLECTIONS,Value:,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{4 0} {} 4 DecimalSI},memory: {{4294967296 0} {} 4Gi BinarySI},},Requests:ResourceList{cpu: {{2 0} {} 2 DecimalSI},memory: {{2147483648 0} {} 2Gi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/ansible,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/AnsibleTests/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/ansible/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/var/lib/ansible/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ca-bundle.trust.crt,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:compute-ssh-secret,ReadOnly:true,MountPath:/var/lib/ansible/.ssh/compute_id,SubPath:ssh-privatekey,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:workload-ssh-secret,ReadOnly:true,MountPath:/var/lib/ansible/test_keypair.key,SubPath:ssh-privatekey,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ceph,ReadOnly:true,MountPath:/etc/ceph,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kwpwv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN NET_RAW],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*227,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*227,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ansibletest-ansibletest_openstack(e047424f-d695-4ef5-b24f-0150fd27964c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled Jan 31 08:46:55 crc kubenswrapper[4826]: > logger="UnhandledError" Jan 31 08:46:55 crc kubenswrapper[4826]: E0131 08:46:55.089996 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ansibletest-ansibletest\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ansibletest-ansibletest" podUID="e047424f-d695-4ef5-b24f-0150fd27964c" Jan 31 08:46:55 crc kubenswrapper[4826]: E0131 08:46:55.394340 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ansibletest-ansibletest\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ansible-tests:current-podified\\\"\"" pod="openstack/ansibletest-ansibletest" podUID="e047424f-d695-4ef5-b24f-0150fd27964c" Jan 31 08:47:10 crc kubenswrapper[4826]: I0131 08:47:10.550377 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ansibletest-ansibletest" event={"ID":"e047424f-d695-4ef5-b24f-0150fd27964c","Type":"ContainerStarted","Data":"a4c89a31ba34aa5314e8d145fe5e94af25688f623e14648ddd42a5f88257e887"} Jan 31 08:47:10 crc kubenswrapper[4826]: I0131 08:47:10.577925 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ansibletest-ansibletest" podStartSLOduration=3.760956683 podStartE2EDuration="34.577900062s" podCreationTimestamp="2026-01-31 08:46:36 +0000 UTC" firstStartedPulling="2026-01-31 08:46:38.833387678 +0000 UTC m=+4230.687274037" lastFinishedPulling="2026-01-31 08:47:09.650331057 +0000 UTC m=+4261.504217416" observedRunningTime="2026-01-31 08:47:10.570942123 +0000 UTC m=+4262.424828492" watchObservedRunningTime="2026-01-31 08:47:10.577900062 +0000 UTC m=+4262.431786431" Jan 31 08:47:13 crc kubenswrapper[4826]: I0131 08:47:13.574494 4826 generic.go:334] "Generic (PLEG): container finished" podID="e047424f-d695-4ef5-b24f-0150fd27964c" containerID="a4c89a31ba34aa5314e8d145fe5e94af25688f623e14648ddd42a5f88257e887" exitCode=0 Jan 31 08:47:13 crc kubenswrapper[4826]: I0131 08:47:13.574621 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ansibletest-ansibletest" event={"ID":"e047424f-d695-4ef5-b24f-0150fd27964c","Type":"ContainerDied","Data":"a4c89a31ba34aa5314e8d145fe5e94af25688f623e14648ddd42a5f88257e887"} Jan 31 08:47:14 crc kubenswrapper[4826]: I0131 08:47:14.946544 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ansibletest-ansibletest" Jan 31 08:47:15 crc kubenswrapper[4826]: I0131 08:47:15.044815 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e047424f-d695-4ef5-b24f-0150fd27964c-test-operator-ephemeral-workdir\") pod \"e047424f-d695-4ef5-b24f-0150fd27964c\" (UID: \"e047424f-d695-4ef5-b24f-0150fd27964c\") " Jan 31 08:47:15 crc kubenswrapper[4826]: I0131 08:47:15.044898 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e047424f-d695-4ef5-b24f-0150fd27964c-test-operator-ephemeral-temporary\") pod \"e047424f-d695-4ef5-b24f-0150fd27964c\" (UID: \"e047424f-d695-4ef5-b24f-0150fd27964c\") " Jan 31 08:47:15 crc kubenswrapper[4826]: I0131 08:47:15.044955 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"e047424f-d695-4ef5-b24f-0150fd27964c\" (UID: \"e047424f-d695-4ef5-b24f-0150fd27964c\") " Jan 31 08:47:15 crc kubenswrapper[4826]: I0131 08:47:15.044998 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"compute-ssh-secret\" (UniqueName: \"kubernetes.io/secret/e047424f-d695-4ef5-b24f-0150fd27964c-compute-ssh-secret\") pod \"e047424f-d695-4ef5-b24f-0150fd27964c\" (UID: \"e047424f-d695-4ef5-b24f-0150fd27964c\") " Jan 31 08:47:15 crc kubenswrapper[4826]: I0131 08:47:15.045017 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"workload-ssh-secret\" (UniqueName: \"kubernetes.io/secret/e047424f-d695-4ef5-b24f-0150fd27964c-workload-ssh-secret\") pod \"e047424f-d695-4ef5-b24f-0150fd27964c\" (UID: \"e047424f-d695-4ef5-b24f-0150fd27964c\") " Jan 31 08:47:15 crc kubenswrapper[4826]: I0131 08:47:15.045061 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e047424f-d695-4ef5-b24f-0150fd27964c-ca-certs\") pod \"e047424f-d695-4ef5-b24f-0150fd27964c\" (UID: \"e047424f-d695-4ef5-b24f-0150fd27964c\") " Jan 31 08:47:15 crc kubenswrapper[4826]: I0131 08:47:15.045095 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e047424f-d695-4ef5-b24f-0150fd27964c-ceph\") pod \"e047424f-d695-4ef5-b24f-0150fd27964c\" (UID: \"e047424f-d695-4ef5-b24f-0150fd27964c\") " Jan 31 08:47:15 crc kubenswrapper[4826]: I0131 08:47:15.045196 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e047424f-d695-4ef5-b24f-0150fd27964c-openstack-config\") pod \"e047424f-d695-4ef5-b24f-0150fd27964c\" (UID: \"e047424f-d695-4ef5-b24f-0150fd27964c\") " Jan 31 08:47:15 crc kubenswrapper[4826]: I0131 08:47:15.045231 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwpwv\" (UniqueName: \"kubernetes.io/projected/e047424f-d695-4ef5-b24f-0150fd27964c-kube-api-access-kwpwv\") pod \"e047424f-d695-4ef5-b24f-0150fd27964c\" (UID: \"e047424f-d695-4ef5-b24f-0150fd27964c\") " Jan 31 08:47:15 crc kubenswrapper[4826]: I0131 08:47:15.045265 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/e047424f-d695-4ef5-b24f-0150fd27964c-openstack-config-secret\") pod \"e047424f-d695-4ef5-b24f-0150fd27964c\" (UID: \"e047424f-d695-4ef5-b24f-0150fd27964c\") " Jan 31 08:47:15 crc kubenswrapper[4826]: I0131 08:47:15.045534 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e047424f-d695-4ef5-b24f-0150fd27964c-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "e047424f-d695-4ef5-b24f-0150fd27964c" (UID: "e047424f-d695-4ef5-b24f-0150fd27964c"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:47:15 crc kubenswrapper[4826]: I0131 08:47:15.046158 4826 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e047424f-d695-4ef5-b24f-0150fd27964c-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 31 08:47:15 crc kubenswrapper[4826]: I0131 08:47:15.053694 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e047424f-d695-4ef5-b24f-0150fd27964c-ceph" (OuterVolumeSpecName: "ceph") pod "e047424f-d695-4ef5-b24f-0150fd27964c" (UID: "e047424f-d695-4ef5-b24f-0150fd27964c"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:47:15 crc kubenswrapper[4826]: I0131 08:47:15.057426 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "test-operator-logs") pod "e047424f-d695-4ef5-b24f-0150fd27964c" (UID: "e047424f-d695-4ef5-b24f-0150fd27964c"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 08:47:15 crc kubenswrapper[4826]: I0131 08:47:15.063178 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e047424f-d695-4ef5-b24f-0150fd27964c-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "e047424f-d695-4ef5-b24f-0150fd27964c" (UID: "e047424f-d695-4ef5-b24f-0150fd27964c"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:47:15 crc kubenswrapper[4826]: I0131 08:47:15.064364 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e047424f-d695-4ef5-b24f-0150fd27964c-kube-api-access-kwpwv" (OuterVolumeSpecName: "kube-api-access-kwpwv") pod "e047424f-d695-4ef5-b24f-0150fd27964c" (UID: "e047424f-d695-4ef5-b24f-0150fd27964c"). InnerVolumeSpecName "kube-api-access-kwpwv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:47:15 crc kubenswrapper[4826]: I0131 08:47:15.085345 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e047424f-d695-4ef5-b24f-0150fd27964c-compute-ssh-secret" (OuterVolumeSpecName: "compute-ssh-secret") pod "e047424f-d695-4ef5-b24f-0150fd27964c" (UID: "e047424f-d695-4ef5-b24f-0150fd27964c"). InnerVolumeSpecName "compute-ssh-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:47:15 crc kubenswrapper[4826]: I0131 08:47:15.087908 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e047424f-d695-4ef5-b24f-0150fd27964c-workload-ssh-secret" (OuterVolumeSpecName: "workload-ssh-secret") pod "e047424f-d695-4ef5-b24f-0150fd27964c" (UID: "e047424f-d695-4ef5-b24f-0150fd27964c"). InnerVolumeSpecName "workload-ssh-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:47:15 crc kubenswrapper[4826]: I0131 08:47:15.089308 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e047424f-d695-4ef5-b24f-0150fd27964c-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "e047424f-d695-4ef5-b24f-0150fd27964c" (UID: "e047424f-d695-4ef5-b24f-0150fd27964c"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:47:15 crc kubenswrapper[4826]: I0131 08:47:15.122170 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e047424f-d695-4ef5-b24f-0150fd27964c-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "e047424f-d695-4ef5-b24f-0150fd27964c" (UID: "e047424f-d695-4ef5-b24f-0150fd27964c"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:47:15 crc kubenswrapper[4826]: I0131 08:47:15.127497 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e047424f-d695-4ef5-b24f-0150fd27964c-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "e047424f-d695-4ef5-b24f-0150fd27964c" (UID: "e047424f-d695-4ef5-b24f-0150fd27964c"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 08:47:15 crc kubenswrapper[4826]: I0131 08:47:15.148637 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwpwv\" (UniqueName: \"kubernetes.io/projected/e047424f-d695-4ef5-b24f-0150fd27964c-kube-api-access-kwpwv\") on node \"crc\" DevicePath \"\"" Jan 31 08:47:15 crc kubenswrapper[4826]: I0131 08:47:15.148695 4826 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e047424f-d695-4ef5-b24f-0150fd27964c-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 31 08:47:15 crc kubenswrapper[4826]: I0131 08:47:15.148714 4826 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e047424f-d695-4ef5-b24f-0150fd27964c-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 31 08:47:15 crc kubenswrapper[4826]: I0131 08:47:15.148780 4826 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 31 08:47:15 crc kubenswrapper[4826]: I0131 08:47:15.148804 4826 reconciler_common.go:293] "Volume detached for volume \"workload-ssh-secret\" (UniqueName: \"kubernetes.io/secret/e047424f-d695-4ef5-b24f-0150fd27964c-workload-ssh-secret\") on node \"crc\" DevicePath \"\"" Jan 31 08:47:15 crc kubenswrapper[4826]: I0131 08:47:15.148819 4826 reconciler_common.go:293] "Volume detached for volume \"compute-ssh-secret\" (UniqueName: \"kubernetes.io/secret/e047424f-d695-4ef5-b24f-0150fd27964c-compute-ssh-secret\") on node \"crc\" DevicePath \"\"" Jan 31 08:47:15 crc kubenswrapper[4826]: I0131 
08:47:15.148834 4826 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e047424f-d695-4ef5-b24f-0150fd27964c-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 31 08:47:15 crc kubenswrapper[4826]: I0131 08:47:15.148846 4826 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e047424f-d695-4ef5-b24f-0150fd27964c-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 08:47:15 crc kubenswrapper[4826]: I0131 08:47:15.148857 4826 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e047424f-d695-4ef5-b24f-0150fd27964c-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 31 08:47:15 crc kubenswrapper[4826]: I0131 08:47:15.186523 4826 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 31 08:47:15 crc kubenswrapper[4826]: I0131 08:47:15.250659 4826 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 31 08:47:15 crc kubenswrapper[4826]: I0131 08:47:15.594168 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ansibletest-ansibletest" event={"ID":"e047424f-d695-4ef5-b24f-0150fd27964c","Type":"ContainerDied","Data":"0f0e341093e64fbc090692ead20f728d2700e27dd30d70c1374da56047140cd3"} Jan 31 08:47:15 crc kubenswrapper[4826]: I0131 08:47:15.594224 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f0e341093e64fbc090692ead20f728d2700e27dd30d70c1374da56047140cd3" Jan 31 08:47:15 crc kubenswrapper[4826]: I0131 08:47:15.594258 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ansibletest-ansibletest" Jan 31 08:47:22 crc kubenswrapper[4826]: I0131 08:47:22.572833 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest"] Jan 31 08:47:22 crc kubenswrapper[4826]: E0131 08:47:22.574152 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e047424f-d695-4ef5-b24f-0150fd27964c" containerName="ansibletest-ansibletest" Jan 31 08:47:22 crc kubenswrapper[4826]: I0131 08:47:22.574169 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="e047424f-d695-4ef5-b24f-0150fd27964c" containerName="ansibletest-ansibletest" Jan 31 08:47:22 crc kubenswrapper[4826]: I0131 08:47:22.574361 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="e047424f-d695-4ef5-b24f-0150fd27964c" containerName="ansibletest-ansibletest" Jan 31 08:47:22 crc kubenswrapper[4826]: I0131 08:47:22.575053 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" Jan 31 08:47:22 crc kubenswrapper[4826]: I0131 08:47:22.586560 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest"] Jan 31 08:47:22 crc kubenswrapper[4826]: I0131 08:47:22.700399 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gdlz\" (UniqueName: \"kubernetes.io/projected/3e35a6d5-6dc6-45e3-bdee-86dda89b6910-kube-api-access-9gdlz\") pod \"test-operator-logs-pod-ansibletest-ansibletest-ansibletest\" (UID: \"3e35a6d5-6dc6-45e3-bdee-86dda89b6910\") " pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" Jan 31 08:47:22 crc kubenswrapper[4826]: I0131 08:47:22.700487 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-ansibletest-ansibletest-ansibletest\" (UID: \"3e35a6d5-6dc6-45e3-bdee-86dda89b6910\") " pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" Jan 31 08:47:22 crc kubenswrapper[4826]: I0131 08:47:22.803116 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gdlz\" (UniqueName: \"kubernetes.io/projected/3e35a6d5-6dc6-45e3-bdee-86dda89b6910-kube-api-access-9gdlz\") pod \"test-operator-logs-pod-ansibletest-ansibletest-ansibletest\" (UID: \"3e35a6d5-6dc6-45e3-bdee-86dda89b6910\") " pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" Jan 31 08:47:22 crc kubenswrapper[4826]: I0131 08:47:22.803512 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-ansibletest-ansibletest-ansibletest\" (UID: \"3e35a6d5-6dc6-45e3-bdee-86dda89b6910\") " pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" Jan 31 08:47:22 crc kubenswrapper[4826]: I0131 08:47:22.804069 4826 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-ansibletest-ansibletest-ansibletest\" (UID: \"3e35a6d5-6dc6-45e3-bdee-86dda89b6910\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" Jan 31 08:47:22 crc kubenswrapper[4826]: I0131 08:47:22.858191 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gdlz\" (UniqueName: \"kubernetes.io/projected/3e35a6d5-6dc6-45e3-bdee-86dda89b6910-kube-api-access-9gdlz\") pod \"test-operator-logs-pod-ansibletest-ansibletest-ansibletest\" (UID: \"3e35a6d5-6dc6-45e3-bdee-86dda89b6910\") " pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" Jan 31 08:47:22 crc kubenswrapper[4826]: I0131 08:47:22.874580 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-ansibletest-ansibletest-ansibletest\" (UID: \"3e35a6d5-6dc6-45e3-bdee-86dda89b6910\") " pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" Jan 31 08:47:22 crc kubenswrapper[4826]: I0131 08:47:22.932512 4826 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" Jan 31 08:47:23 crc kubenswrapper[4826]: I0131 08:47:23.457610 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest"] Jan 31 08:47:23 crc kubenswrapper[4826]: I0131 08:47:23.679643 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" event={"ID":"3e35a6d5-6dc6-45e3-bdee-86dda89b6910","Type":"ContainerStarted","Data":"4be1585ee5fb7921f67213712d84b5c65a8c18597cb21e8f784e8087d0f3abd6"} Jan 31 08:47:24 crc kubenswrapper[4826]: I0131 08:47:24.690597 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" event={"ID":"3e35a6d5-6dc6-45e3-bdee-86dda89b6910","Type":"ContainerStarted","Data":"1954367951d069622bef137a6209a416065b3363f05d6e01311aebd2665a5462"} Jan 31 08:47:24 crc kubenswrapper[4826]: I0131 08:47:24.707264 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" podStartSLOduration=2.181192678 podStartE2EDuration="2.707244515s" podCreationTimestamp="2026-01-31 08:47:22 +0000 UTC" firstStartedPulling="2026-01-31 08:47:23.464451494 +0000 UTC m=+4275.318337863" lastFinishedPulling="2026-01-31 08:47:23.990503341 +0000 UTC m=+4275.844389700" observedRunningTime="2026-01-31 08:47:24.703106257 +0000 UTC m=+4276.556992616" watchObservedRunningTime="2026-01-31 08:47:24.707244515 +0000 UTC m=+4276.561130874" Jan 31 08:47:34 crc kubenswrapper[4826]: I0131 08:47:34.389516 4826 scope.go:117] "RemoveContainer" containerID="af9a56278f2438c4a229400e41e341c2b493f4c7907e07a4d7fd329a0d30babf" Jan 31 08:47:34 crc kubenswrapper[4826]: I0131 08:47:34.416209 4826 scope.go:117] "RemoveContainer" containerID="61eba656fff936bb69e6a7b524fbd0321a3f7724743c038f34d2f4be3b950726" Jan 31 08:47:34 crc kubenswrapper[4826]: I0131 08:47:34.459665 4826 scope.go:117] "RemoveContainer" containerID="8a71ea4567f878da43bef7c2407425103ca09d2c23d1b59be8fcf228e3f0481d" Jan 31 08:47:37 crc kubenswrapper[4826]: I0131 08:47:37.082057 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizontest-tests-horizontest"] Jan 31 08:47:37 crc kubenswrapper[4826]: I0131 08:47:37.083729 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizontest-tests-horizontest" Jan 31 08:47:37 crc kubenswrapper[4826]: I0131 08:47:37.085615 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"test-operator-clouds-config" Jan 31 08:47:37 crc kubenswrapper[4826]: I0131 08:47:37.085814 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizontest-tests-horizontesthorizontest-config" Jan 31 08:47:37 crc kubenswrapper[4826]: I0131 08:47:37.097225 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizontest-tests-horizontest"] Jan 31 08:47:37 crc kubenswrapper[4826]: I0131 08:47:37.138542 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d18b10f2-cbcf-4b97-825a-30be7171be8f-openstack-config-secret\") pod \"horizontest-tests-horizontest\" (UID: \"d18b10f2-cbcf-4b97-825a-30be7171be8f\") " pod="openstack/horizontest-tests-horizontest" Jan 31 08:47:37 crc kubenswrapper[4826]: I0131 08:47:37.138990 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/d18b10f2-cbcf-4b97-825a-30be7171be8f-test-operator-clouds-config\") pod \"horizontest-tests-horizontest\" (UID: \"d18b10f2-cbcf-4b97-825a-30be7171be8f\") " pod="openstack/horizontest-tests-horizontest" Jan 31 08:47:37 crc kubenswrapper[4826]: I0131 08:47:37.139069 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d18b10f2-cbcf-4b97-825a-30be7171be8f-ceph\") pod \"horizontest-tests-horizontest\" (UID: \"d18b10f2-cbcf-4b97-825a-30be7171be8f\") " pod="openstack/horizontest-tests-horizontest" Jan 31 08:47:37 crc kubenswrapper[4826]: I0131 08:47:37.139108 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d18b10f2-cbcf-4b97-825a-30be7171be8f-ca-certs\") pod \"horizontest-tests-horizontest\" (UID: \"d18b10f2-cbcf-4b97-825a-30be7171be8f\") " pod="openstack/horizontest-tests-horizontest" Jan 31 08:47:37 crc kubenswrapper[4826]: I0131 08:47:37.139299 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d18b10f2-cbcf-4b97-825a-30be7171be8f-test-operator-ephemeral-workdir\") pod \"horizontest-tests-horizontest\" (UID: \"d18b10f2-cbcf-4b97-825a-30be7171be8f\") " pod="openstack/horizontest-tests-horizontest" Jan 31 08:47:37 crc kubenswrapper[4826]: I0131 08:47:37.139423 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d18b10f2-cbcf-4b97-825a-30be7171be8f-test-operator-ephemeral-temporary\") pod \"horizontest-tests-horizontest\" (UID: \"d18b10f2-cbcf-4b97-825a-30be7171be8f\") " pod="openstack/horizontest-tests-horizontest" Jan 31 08:47:37 crc kubenswrapper[4826]: I0131 08:47:37.139617 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"horizontest-tests-horizontest\" (UID: \"d18b10f2-cbcf-4b97-825a-30be7171be8f\") " pod="openstack/horizontest-tests-horizontest" Jan 31 08:47:37 crc 
kubenswrapper[4826]: I0131 08:47:37.139741 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npghs\" (UniqueName: \"kubernetes.io/projected/d18b10f2-cbcf-4b97-825a-30be7171be8f-kube-api-access-npghs\") pod \"horizontest-tests-horizontest\" (UID: \"d18b10f2-cbcf-4b97-825a-30be7171be8f\") " pod="openstack/horizontest-tests-horizontest" Jan 31 08:47:37 crc kubenswrapper[4826]: I0131 08:47:37.242011 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d18b10f2-cbcf-4b97-825a-30be7171be8f-test-operator-ephemeral-temporary\") pod \"horizontest-tests-horizontest\" (UID: \"d18b10f2-cbcf-4b97-825a-30be7171be8f\") " pod="openstack/horizontest-tests-horizontest" Jan 31 08:47:37 crc kubenswrapper[4826]: I0131 08:47:37.242072 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"horizontest-tests-horizontest\" (UID: \"d18b10f2-cbcf-4b97-825a-30be7171be8f\") " pod="openstack/horizontest-tests-horizontest" Jan 31 08:47:37 crc kubenswrapper[4826]: I0131 08:47:37.242110 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npghs\" (UniqueName: \"kubernetes.io/projected/d18b10f2-cbcf-4b97-825a-30be7171be8f-kube-api-access-npghs\") pod \"horizontest-tests-horizontest\" (UID: \"d18b10f2-cbcf-4b97-825a-30be7171be8f\") " pod="openstack/horizontest-tests-horizontest" Jan 31 08:47:37 crc kubenswrapper[4826]: I0131 08:47:37.242159 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d18b10f2-cbcf-4b97-825a-30be7171be8f-openstack-config-secret\") pod \"horizontest-tests-horizontest\" (UID: \"d18b10f2-cbcf-4b97-825a-30be7171be8f\") " pod="openstack/horizontest-tests-horizontest" Jan 31 08:47:37 crc kubenswrapper[4826]: I0131 08:47:37.242698 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d18b10f2-cbcf-4b97-825a-30be7171be8f-test-operator-ephemeral-temporary\") pod \"horizontest-tests-horizontest\" (UID: \"d18b10f2-cbcf-4b97-825a-30be7171be8f\") " pod="openstack/horizontest-tests-horizontest" Jan 31 08:47:37 crc kubenswrapper[4826]: I0131 08:47:37.242698 4826 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"horizontest-tests-horizontest\" (UID: \"d18b10f2-cbcf-4b97-825a-30be7171be8f\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/horizontest-tests-horizontest" Jan 31 08:47:37 crc kubenswrapper[4826]: I0131 08:47:37.243061 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/d18b10f2-cbcf-4b97-825a-30be7171be8f-test-operator-clouds-config\") pod \"horizontest-tests-horizontest\" (UID: \"d18b10f2-cbcf-4b97-825a-30be7171be8f\") " pod="openstack/horizontest-tests-horizontest" Jan 31 08:47:37 crc kubenswrapper[4826]: I0131 08:47:37.243101 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d18b10f2-cbcf-4b97-825a-30be7171be8f-ceph\") pod \"horizontest-tests-horizontest\" (UID: 
\"d18b10f2-cbcf-4b97-825a-30be7171be8f\") " pod="openstack/horizontest-tests-horizontest" Jan 31 08:47:37 crc kubenswrapper[4826]: I0131 08:47:37.243160 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d18b10f2-cbcf-4b97-825a-30be7171be8f-ca-certs\") pod \"horizontest-tests-horizontest\" (UID: \"d18b10f2-cbcf-4b97-825a-30be7171be8f\") " pod="openstack/horizontest-tests-horizontest" Jan 31 08:47:37 crc kubenswrapper[4826]: I0131 08:47:37.243369 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d18b10f2-cbcf-4b97-825a-30be7171be8f-test-operator-ephemeral-workdir\") pod \"horizontest-tests-horizontest\" (UID: \"d18b10f2-cbcf-4b97-825a-30be7171be8f\") " pod="openstack/horizontest-tests-horizontest" Jan 31 08:47:37 crc kubenswrapper[4826]: I0131 08:47:37.243703 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d18b10f2-cbcf-4b97-825a-30be7171be8f-test-operator-ephemeral-workdir\") pod \"horizontest-tests-horizontest\" (UID: \"d18b10f2-cbcf-4b97-825a-30be7171be8f\") " pod="openstack/horizontest-tests-horizontest" Jan 31 08:47:37 crc kubenswrapper[4826]: I0131 08:47:37.244528 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/d18b10f2-cbcf-4b97-825a-30be7171be8f-test-operator-clouds-config\") pod \"horizontest-tests-horizontest\" (UID: \"d18b10f2-cbcf-4b97-825a-30be7171be8f\") " pod="openstack/horizontest-tests-horizontest" Jan 31 08:47:37 crc kubenswrapper[4826]: I0131 08:47:37.248872 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d18b10f2-cbcf-4b97-825a-30be7171be8f-ceph\") pod \"horizontest-tests-horizontest\" (UID: \"d18b10f2-cbcf-4b97-825a-30be7171be8f\") " pod="openstack/horizontest-tests-horizontest" Jan 31 08:47:37 crc kubenswrapper[4826]: I0131 08:47:37.249106 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d18b10f2-cbcf-4b97-825a-30be7171be8f-ca-certs\") pod \"horizontest-tests-horizontest\" (UID: \"d18b10f2-cbcf-4b97-825a-30be7171be8f\") " pod="openstack/horizontest-tests-horizontest" Jan 31 08:47:37 crc kubenswrapper[4826]: I0131 08:47:37.249658 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d18b10f2-cbcf-4b97-825a-30be7171be8f-openstack-config-secret\") pod \"horizontest-tests-horizontest\" (UID: \"d18b10f2-cbcf-4b97-825a-30be7171be8f\") " pod="openstack/horizontest-tests-horizontest" Jan 31 08:47:37 crc kubenswrapper[4826]: I0131 08:47:37.263762 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npghs\" (UniqueName: \"kubernetes.io/projected/d18b10f2-cbcf-4b97-825a-30be7171be8f-kube-api-access-npghs\") pod \"horizontest-tests-horizontest\" (UID: \"d18b10f2-cbcf-4b97-825a-30be7171be8f\") " pod="openstack/horizontest-tests-horizontest" Jan 31 08:47:37 crc kubenswrapper[4826]: I0131 08:47:37.274042 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"horizontest-tests-horizontest\" (UID: \"d18b10f2-cbcf-4b97-825a-30be7171be8f\") " 
pod="openstack/horizontest-tests-horizontest" Jan 31 08:47:37 crc kubenswrapper[4826]: I0131 08:47:37.412573 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizontest-tests-horizontest" Jan 31 08:47:37 crc kubenswrapper[4826]: I0131 08:47:37.934978 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizontest-tests-horizontest"] Jan 31 08:47:38 crc kubenswrapper[4826]: I0131 08:47:38.821748 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizontest-tests-horizontest" event={"ID":"d18b10f2-cbcf-4b97-825a-30be7171be8f","Type":"ContainerStarted","Data":"e9b7c85023c60f8251aa0013a0ec2f076f90fc6cc7152655c0a031306cca32ab"} Jan 31 08:47:57 crc kubenswrapper[4826]: E0131 08:47:57.829609 4826 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizontest:current-podified" Jan 31 08:47:57 crc kubenswrapper[4826]: E0131 08:47:57.830341 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizontest-tests-horizontest,Image:quay.io/podified-antelope-centos9/openstack-horizontest:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADMIN_PASSWORD,Value:12345678,ValueFrom:nil,},EnvVar{Name:ADMIN_USERNAME,Value:admin,ValueFrom:nil,},EnvVar{Name:AUTH_URL,Value:https://keystone-public-openstack.apps-crc.testing,ValueFrom:nil,},EnvVar{Name:DASHBOARD_URL,Value:https://horizon-openstack.apps-crc.testing/,ValueFrom:nil,},EnvVar{Name:EXTRA_FLAG,Value:not pagination and test_users.py,ValueFrom:nil,},EnvVar{Name:FLAVOR_NAME,Value:m1.tiny,ValueFrom:nil,},EnvVar{Name:HORIZONTEST_DEBUG_MODE,Value:false,ValueFrom:nil,},EnvVar{Name:HORIZON_KEYS_FOLDER,Value:/etc/test_operator,ValueFrom:nil,},EnvVar{Name:HORIZON_LOGS_DIR_NAME,Value:horizon,ValueFrom:nil,},EnvVar{Name:HORIZON_REPO_BRANCH,Value:master,ValueFrom:nil,},EnvVar{Name:IMAGE_FILE,Value:/var/lib/horizontest/cirros-0.6.2-x86_64-disk.img,ValueFrom:nil,},EnvVar{Name:IMAGE_FILE_NAME,Value:cirros-0.6.2-x86_64-disk,ValueFrom:nil,},EnvVar{Name:IMAGE_URL,Value:http://download.cirros-cloud.net/0.6.2/cirros-0.6.2-x86_64-disk.img,ValueFrom:nil,},EnvVar{Name:PASSWORD,Value:horizontest,ValueFrom:nil,},EnvVar{Name:PROJECT_NAME,Value:horizontest,ValueFrom:nil,},EnvVar{Name:PROJECT_NAME_XPATH,Value://*[@class=\"context-project\"]//ancestor::ul,ValueFrom:nil,},EnvVar{Name:REPO_URL,Value:https://review.opendev.org/openstack/horizon,ValueFrom:nil,},EnvVar{Name:USER_NAME,Value:horizontest,ValueFrom:nil,},EnvVar{Name:USE_EXTERNAL_FILES,Value:True,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{2 0} {} 2 DecimalSI},memory: {{4294967296 0} {} 4Gi BinarySI},},Requests:ResourceList{cpu: {{1 0} {} 1 DecimalSI},memory: {{2147483648 0} {} 2Gi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/horizontest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/horizontest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-clouds-config,ReadOnly:true,MountPath:/var/lib/horizontest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-clouds-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ca-bundle.trust.crt,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ceph,ReadOnly:true,MountPath:/etc/ceph,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-npghs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN NET_RAW],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42455,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42455,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizontest-tests-horizontest_openstack(d18b10f2-cbcf-4b97-825a-30be7171be8f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 08:47:57 crc kubenswrapper[4826]: E0131 08:47:57.831747 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizontest-tests-horizontest\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/horizontest-tests-horizontest" podUID="d18b10f2-cbcf-4b97-825a-30be7171be8f" Jan 31 08:47:58 crc kubenswrapper[4826]: E0131 08:47:58.000920 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizontest-tests-horizontest\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizontest:current-podified\\\"\"" pod="openstack/horizontest-tests-horizontest" podUID="d18b10f2-cbcf-4b97-825a-30be7171be8f" Jan 31 08:48:14 crc kubenswrapper[4826]: I0131 08:48:14.198000 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/horizontest-tests-horizontest" event={"ID":"d18b10f2-cbcf-4b97-825a-30be7171be8f","Type":"ContainerStarted","Data":"63828f351c462f60aca6273856d1edca9a5684e2da34aa3fa777cddecb8b14bc"} Jan 31 08:48:14 crc kubenswrapper[4826]: I0131 08:48:14.234872 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizontest-tests-horizontest" podStartSLOduration=4.553144335 podStartE2EDuration="38.234844536s" podCreationTimestamp="2026-01-31 08:47:36 +0000 UTC" firstStartedPulling="2026-01-31 08:47:38.659580802 +0000 UTC m=+4290.513467171" lastFinishedPulling="2026-01-31 08:48:12.341281013 +0000 UTC m=+4324.195167372" observedRunningTime="2026-01-31 08:48:14.227939339 +0000 UTC m=+4326.081825708" watchObservedRunningTime="2026-01-31 08:48:14.234844536 +0000 UTC m=+4326.088730895" Jan 31 08:48:27 crc kubenswrapper[4826]: I0131 08:48:27.377482 4826 patch_prober.go:28] interesting pod/machine-config-daemon-8v6ng container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 08:48:27 crc kubenswrapper[4826]: I0131 08:48:27.378214 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 08:48:57 crc kubenswrapper[4826]: I0131 08:48:57.377600 4826 patch_prober.go:28] interesting pod/machine-config-daemon-8v6ng container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 08:48:57 crc kubenswrapper[4826]: I0131 08:48:57.379062 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 08:49:27 crc kubenswrapper[4826]: I0131 08:49:27.376631 4826 patch_prober.go:28] interesting pod/machine-config-daemon-8v6ng container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 08:49:27 crc kubenswrapper[4826]: I0131 08:49:27.377210 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 08:49:27 crc kubenswrapper[4826]: I0131 08:49:27.377256 4826 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" Jan 31 08:49:27 crc kubenswrapper[4826]: I0131 08:49:27.377958 4826 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7eba4a1e5e418a5b89ba4b523b59b3f4c42351dd83f53bcdc9e5ec984ec59162"} 
pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 08:49:27 crc kubenswrapper[4826]: I0131 08:49:27.378025 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" containerID="cri-o://7eba4a1e5e418a5b89ba4b523b59b3f4c42351dd83f53bcdc9e5ec984ec59162" gracePeriod=600 Jan 31 08:49:27 crc kubenswrapper[4826]: E0131 08:49:27.903598 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:49:27 crc kubenswrapper[4826]: I0131 08:49:27.945514 4826 generic.go:334] "Generic (PLEG): container finished" podID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerID="7eba4a1e5e418a5b89ba4b523b59b3f4c42351dd83f53bcdc9e5ec984ec59162" exitCode=0 Jan 31 08:49:27 crc kubenswrapper[4826]: I0131 08:49:27.945555 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" event={"ID":"ed10f53b-565a-4d14-a1d8-feabc15f08ea","Type":"ContainerDied","Data":"7eba4a1e5e418a5b89ba4b523b59b3f4c42351dd83f53bcdc9e5ec984ec59162"} Jan 31 08:49:27 crc kubenswrapper[4826]: I0131 08:49:27.945588 4826 scope.go:117] "RemoveContainer" containerID="96d06b90822cc52e7495a180d9316b34bd33c2965646848de03879b26d4192b0" Jan 31 08:49:27 crc kubenswrapper[4826]: I0131 08:49:27.946597 4826 scope.go:117] "RemoveContainer" containerID="7eba4a1e5e418a5b89ba4b523b59b3f4c42351dd83f53bcdc9e5ec984ec59162" Jan 31 08:49:27 crc kubenswrapper[4826]: E0131 08:49:27.947022 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:49:40 crc kubenswrapper[4826]: I0131 08:49:40.812164 4826 scope.go:117] "RemoveContainer" containerID="7eba4a1e5e418a5b89ba4b523b59b3f4c42351dd83f53bcdc9e5ec984ec59162" Jan 31 08:49:40 crc kubenswrapper[4826]: E0131 08:49:40.813012 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:49:45 crc kubenswrapper[4826]: I0131 08:49:45.100984 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xbvpw"] Jan 31 08:49:45 crc kubenswrapper[4826]: I0131 08:49:45.106434 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xbvpw" Jan 31 08:49:45 crc kubenswrapper[4826]: I0131 08:49:45.130539 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xbvpw"] Jan 31 08:49:45 crc kubenswrapper[4826]: I0131 08:49:45.202982 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbv45\" (UniqueName: \"kubernetes.io/projected/03fde839-67ec-4a7d-abe4-31faea8ab68d-kube-api-access-wbv45\") pod \"certified-operators-xbvpw\" (UID: \"03fde839-67ec-4a7d-abe4-31faea8ab68d\") " pod="openshift-marketplace/certified-operators-xbvpw" Jan 31 08:49:45 crc kubenswrapper[4826]: I0131 08:49:45.203121 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03fde839-67ec-4a7d-abe4-31faea8ab68d-catalog-content\") pod \"certified-operators-xbvpw\" (UID: \"03fde839-67ec-4a7d-abe4-31faea8ab68d\") " pod="openshift-marketplace/certified-operators-xbvpw" Jan 31 08:49:45 crc kubenswrapper[4826]: I0131 08:49:45.203646 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03fde839-67ec-4a7d-abe4-31faea8ab68d-utilities\") pod \"certified-operators-xbvpw\" (UID: \"03fde839-67ec-4a7d-abe4-31faea8ab68d\") " pod="openshift-marketplace/certified-operators-xbvpw" Jan 31 08:49:45 crc kubenswrapper[4826]: I0131 08:49:45.306173 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03fde839-67ec-4a7d-abe4-31faea8ab68d-catalog-content\") pod \"certified-operators-xbvpw\" (UID: \"03fde839-67ec-4a7d-abe4-31faea8ab68d\") " pod="openshift-marketplace/certified-operators-xbvpw" Jan 31 08:49:45 crc kubenswrapper[4826]: I0131 08:49:45.306397 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03fde839-67ec-4a7d-abe4-31faea8ab68d-utilities\") pod \"certified-operators-xbvpw\" (UID: \"03fde839-67ec-4a7d-abe4-31faea8ab68d\") " pod="openshift-marketplace/certified-operators-xbvpw" Jan 31 08:49:45 crc kubenswrapper[4826]: I0131 08:49:45.306453 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbv45\" (UniqueName: \"kubernetes.io/projected/03fde839-67ec-4a7d-abe4-31faea8ab68d-kube-api-access-wbv45\") pod \"certified-operators-xbvpw\" (UID: \"03fde839-67ec-4a7d-abe4-31faea8ab68d\") " pod="openshift-marketplace/certified-operators-xbvpw" Jan 31 08:49:45 crc kubenswrapper[4826]: I0131 08:49:45.307337 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03fde839-67ec-4a7d-abe4-31faea8ab68d-catalog-content\") pod \"certified-operators-xbvpw\" (UID: \"03fde839-67ec-4a7d-abe4-31faea8ab68d\") " pod="openshift-marketplace/certified-operators-xbvpw" Jan 31 08:49:45 crc kubenswrapper[4826]: I0131 08:49:45.307368 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03fde839-67ec-4a7d-abe4-31faea8ab68d-utilities\") pod \"certified-operators-xbvpw\" (UID: \"03fde839-67ec-4a7d-abe4-31faea8ab68d\") " pod="openshift-marketplace/certified-operators-xbvpw" Jan 31 08:49:45 crc kubenswrapper[4826]: I0131 08:49:45.748795 4826 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-wbv45\" (UniqueName: \"kubernetes.io/projected/03fde839-67ec-4a7d-abe4-31faea8ab68d-kube-api-access-wbv45\") pod \"certified-operators-xbvpw\" (UID: \"03fde839-67ec-4a7d-abe4-31faea8ab68d\") " pod="openshift-marketplace/certified-operators-xbvpw" Jan 31 08:49:46 crc kubenswrapper[4826]: I0131 08:49:46.031150 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xbvpw" Jan 31 08:49:46 crc kubenswrapper[4826]: I0131 08:49:46.499897 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xbvpw"] Jan 31 08:49:46 crc kubenswrapper[4826]: W0131 08:49:46.502723 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03fde839_67ec_4a7d_abe4_31faea8ab68d.slice/crio-72fab929c36e14c81af6deab7b48cee67dd8b8aa738e606155240ccedf8abf21 WatchSource:0}: Error finding container 72fab929c36e14c81af6deab7b48cee67dd8b8aa738e606155240ccedf8abf21: Status 404 returned error can't find the container with id 72fab929c36e14c81af6deab7b48cee67dd8b8aa738e606155240ccedf8abf21 Jan 31 08:49:47 crc kubenswrapper[4826]: I0131 08:49:47.122805 4826 generic.go:334] "Generic (PLEG): container finished" podID="03fde839-67ec-4a7d-abe4-31faea8ab68d" containerID="9699b5988b28544c95408f109cc71852b296965edc67499480ed92d90a439e30" exitCode=0 Jan 31 08:49:47 crc kubenswrapper[4826]: I0131 08:49:47.123138 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xbvpw" event={"ID":"03fde839-67ec-4a7d-abe4-31faea8ab68d","Type":"ContainerDied","Data":"9699b5988b28544c95408f109cc71852b296965edc67499480ed92d90a439e30"} Jan 31 08:49:47 crc kubenswrapper[4826]: I0131 08:49:47.123175 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xbvpw" event={"ID":"03fde839-67ec-4a7d-abe4-31faea8ab68d","Type":"ContainerStarted","Data":"72fab929c36e14c81af6deab7b48cee67dd8b8aa738e606155240ccedf8abf21"} Jan 31 08:49:48 crc kubenswrapper[4826]: I0131 08:49:48.134600 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xbvpw" event={"ID":"03fde839-67ec-4a7d-abe4-31faea8ab68d","Type":"ContainerStarted","Data":"d56dafa63e6728a76c4c19e21af62d543e2fa25b9187b3fa8b2074a8d836aea0"} Jan 31 08:49:50 crc kubenswrapper[4826]: I0131 08:49:50.153919 4826 generic.go:334] "Generic (PLEG): container finished" podID="03fde839-67ec-4a7d-abe4-31faea8ab68d" containerID="d56dafa63e6728a76c4c19e21af62d543e2fa25b9187b3fa8b2074a8d836aea0" exitCode=0 Jan 31 08:49:50 crc kubenswrapper[4826]: I0131 08:49:50.154493 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xbvpw" event={"ID":"03fde839-67ec-4a7d-abe4-31faea8ab68d","Type":"ContainerDied","Data":"d56dafa63e6728a76c4c19e21af62d543e2fa25b9187b3fa8b2074a8d836aea0"} Jan 31 08:49:52 crc kubenswrapper[4826]: I0131 08:49:52.177920 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xbvpw" event={"ID":"03fde839-67ec-4a7d-abe4-31faea8ab68d","Type":"ContainerStarted","Data":"c587ed6511bd8757722df8157c831c48372f464646877dc4355131796d55cd5f"} Jan 31 08:49:52 crc kubenswrapper[4826]: I0131 08:49:52.203609 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xbvpw" 
podStartSLOduration=3.13501233 podStartE2EDuration="7.203587311s" podCreationTimestamp="2026-01-31 08:49:45 +0000 UTC" firstStartedPulling="2026-01-31 08:49:47.124957824 +0000 UTC m=+4418.978844183" lastFinishedPulling="2026-01-31 08:49:51.193532795 +0000 UTC m=+4423.047419164" observedRunningTime="2026-01-31 08:49:52.197982722 +0000 UTC m=+4424.051869111" watchObservedRunningTime="2026-01-31 08:49:52.203587311 +0000 UTC m=+4424.057473670" Jan 31 08:49:52 crc kubenswrapper[4826]: I0131 08:49:52.809858 4826 scope.go:117] "RemoveContainer" containerID="7eba4a1e5e418a5b89ba4b523b59b3f4c42351dd83f53bcdc9e5ec984ec59162" Jan 31 08:49:52 crc kubenswrapper[4826]: E0131 08:49:52.810255 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:49:56 crc kubenswrapper[4826]: I0131 08:49:56.032273 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xbvpw" Jan 31 08:49:56 crc kubenswrapper[4826]: I0131 08:49:56.033094 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xbvpw" Jan 31 08:49:56 crc kubenswrapper[4826]: I0131 08:49:56.105507 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xbvpw" Jan 31 08:49:56 crc kubenswrapper[4826]: I0131 08:49:56.255808 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xbvpw" Jan 31 08:49:56 crc kubenswrapper[4826]: I0131 08:49:56.345414 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xbvpw"] Jan 31 08:49:58 crc kubenswrapper[4826]: I0131 08:49:58.227850 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xbvpw" podUID="03fde839-67ec-4a7d-abe4-31faea8ab68d" containerName="registry-server" containerID="cri-o://c587ed6511bd8757722df8157c831c48372f464646877dc4355131796d55cd5f" gracePeriod=2 Jan 31 08:49:58 crc kubenswrapper[4826]: I0131 08:49:58.738929 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xbvpw" Jan 31 08:49:58 crc kubenswrapper[4826]: I0131 08:49:58.900066 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbv45\" (UniqueName: \"kubernetes.io/projected/03fde839-67ec-4a7d-abe4-31faea8ab68d-kube-api-access-wbv45\") pod \"03fde839-67ec-4a7d-abe4-31faea8ab68d\" (UID: \"03fde839-67ec-4a7d-abe4-31faea8ab68d\") " Jan 31 08:49:58 crc kubenswrapper[4826]: I0131 08:49:58.900428 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03fde839-67ec-4a7d-abe4-31faea8ab68d-catalog-content\") pod \"03fde839-67ec-4a7d-abe4-31faea8ab68d\" (UID: \"03fde839-67ec-4a7d-abe4-31faea8ab68d\") " Jan 31 08:49:58 crc kubenswrapper[4826]: I0131 08:49:58.900544 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03fde839-67ec-4a7d-abe4-31faea8ab68d-utilities\") pod \"03fde839-67ec-4a7d-abe4-31faea8ab68d\" (UID: \"03fde839-67ec-4a7d-abe4-31faea8ab68d\") " Jan 31 08:49:58 crc kubenswrapper[4826]: I0131 08:49:58.901310 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03fde839-67ec-4a7d-abe4-31faea8ab68d-utilities" (OuterVolumeSpecName: "utilities") pod "03fde839-67ec-4a7d-abe4-31faea8ab68d" (UID: "03fde839-67ec-4a7d-abe4-31faea8ab68d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:49:58 crc kubenswrapper[4826]: I0131 08:49:58.908144 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03fde839-67ec-4a7d-abe4-31faea8ab68d-kube-api-access-wbv45" (OuterVolumeSpecName: "kube-api-access-wbv45") pod "03fde839-67ec-4a7d-abe4-31faea8ab68d" (UID: "03fde839-67ec-4a7d-abe4-31faea8ab68d"). InnerVolumeSpecName "kube-api-access-wbv45". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:49:58 crc kubenswrapper[4826]: I0131 08:49:58.962872 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03fde839-67ec-4a7d-abe4-31faea8ab68d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "03fde839-67ec-4a7d-abe4-31faea8ab68d" (UID: "03fde839-67ec-4a7d-abe4-31faea8ab68d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:49:59 crc kubenswrapper[4826]: I0131 08:49:59.003073 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03fde839-67ec-4a7d-abe4-31faea8ab68d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 08:49:59 crc kubenswrapper[4826]: I0131 08:49:59.003110 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03fde839-67ec-4a7d-abe4-31faea8ab68d-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 08:49:59 crc kubenswrapper[4826]: I0131 08:49:59.003119 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbv45\" (UniqueName: \"kubernetes.io/projected/03fde839-67ec-4a7d-abe4-31faea8ab68d-kube-api-access-wbv45\") on node \"crc\" DevicePath \"\"" Jan 31 08:49:59 crc kubenswrapper[4826]: I0131 08:49:59.238052 4826 generic.go:334] "Generic (PLEG): container finished" podID="03fde839-67ec-4a7d-abe4-31faea8ab68d" containerID="c587ed6511bd8757722df8157c831c48372f464646877dc4355131796d55cd5f" exitCode=0 Jan 31 08:49:59 crc kubenswrapper[4826]: I0131 08:49:59.238123 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xbvpw" event={"ID":"03fde839-67ec-4a7d-abe4-31faea8ab68d","Type":"ContainerDied","Data":"c587ed6511bd8757722df8157c831c48372f464646877dc4355131796d55cd5f"} Jan 31 08:49:59 crc kubenswrapper[4826]: I0131 08:49:59.238216 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xbvpw" event={"ID":"03fde839-67ec-4a7d-abe4-31faea8ab68d","Type":"ContainerDied","Data":"72fab929c36e14c81af6deab7b48cee67dd8b8aa738e606155240ccedf8abf21"} Jan 31 08:49:59 crc kubenswrapper[4826]: I0131 08:49:59.238237 4826 scope.go:117] "RemoveContainer" containerID="c587ed6511bd8757722df8157c831c48372f464646877dc4355131796d55cd5f" Jan 31 08:49:59 crc kubenswrapper[4826]: I0131 08:49:59.238175 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xbvpw" Jan 31 08:49:59 crc kubenswrapper[4826]: I0131 08:49:59.274602 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xbvpw"] Jan 31 08:49:59 crc kubenswrapper[4826]: I0131 08:49:59.275964 4826 scope.go:117] "RemoveContainer" containerID="d56dafa63e6728a76c4c19e21af62d543e2fa25b9187b3fa8b2074a8d836aea0" Jan 31 08:49:59 crc kubenswrapper[4826]: I0131 08:49:59.293335 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xbvpw"] Jan 31 08:49:59 crc kubenswrapper[4826]: I0131 08:49:59.298706 4826 scope.go:117] "RemoveContainer" containerID="9699b5988b28544c95408f109cc71852b296965edc67499480ed92d90a439e30" Jan 31 08:49:59 crc kubenswrapper[4826]: I0131 08:49:59.360031 4826 scope.go:117] "RemoveContainer" containerID="c587ed6511bd8757722df8157c831c48372f464646877dc4355131796d55cd5f" Jan 31 08:49:59 crc kubenswrapper[4826]: E0131 08:49:59.361210 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c587ed6511bd8757722df8157c831c48372f464646877dc4355131796d55cd5f\": container with ID starting with c587ed6511bd8757722df8157c831c48372f464646877dc4355131796d55cd5f not found: ID does not exist" containerID="c587ed6511bd8757722df8157c831c48372f464646877dc4355131796d55cd5f" Jan 31 08:49:59 crc kubenswrapper[4826]: I0131 08:49:59.361275 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c587ed6511bd8757722df8157c831c48372f464646877dc4355131796d55cd5f"} err="failed to get container status \"c587ed6511bd8757722df8157c831c48372f464646877dc4355131796d55cd5f\": rpc error: code = NotFound desc = could not find container \"c587ed6511bd8757722df8157c831c48372f464646877dc4355131796d55cd5f\": container with ID starting with c587ed6511bd8757722df8157c831c48372f464646877dc4355131796d55cd5f not found: ID does not exist" Jan 31 08:49:59 crc kubenswrapper[4826]: I0131 08:49:59.361311 4826 scope.go:117] "RemoveContainer" containerID="d56dafa63e6728a76c4c19e21af62d543e2fa25b9187b3fa8b2074a8d836aea0" Jan 31 08:49:59 crc kubenswrapper[4826]: E0131 08:49:59.361812 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d56dafa63e6728a76c4c19e21af62d543e2fa25b9187b3fa8b2074a8d836aea0\": container with ID starting with d56dafa63e6728a76c4c19e21af62d543e2fa25b9187b3fa8b2074a8d836aea0 not found: ID does not exist" containerID="d56dafa63e6728a76c4c19e21af62d543e2fa25b9187b3fa8b2074a8d836aea0" Jan 31 08:49:59 crc kubenswrapper[4826]: I0131 08:49:59.361864 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d56dafa63e6728a76c4c19e21af62d543e2fa25b9187b3fa8b2074a8d836aea0"} err="failed to get container status \"d56dafa63e6728a76c4c19e21af62d543e2fa25b9187b3fa8b2074a8d836aea0\": rpc error: code = NotFound desc = could not find container \"d56dafa63e6728a76c4c19e21af62d543e2fa25b9187b3fa8b2074a8d836aea0\": container with ID starting with d56dafa63e6728a76c4c19e21af62d543e2fa25b9187b3fa8b2074a8d836aea0 not found: ID does not exist" Jan 31 08:49:59 crc kubenswrapper[4826]: I0131 08:49:59.361901 4826 scope.go:117] "RemoveContainer" containerID="9699b5988b28544c95408f109cc71852b296965edc67499480ed92d90a439e30" Jan 31 08:49:59 crc kubenswrapper[4826]: E0131 08:49:59.362506 4826 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"9699b5988b28544c95408f109cc71852b296965edc67499480ed92d90a439e30\": container with ID starting with 9699b5988b28544c95408f109cc71852b296965edc67499480ed92d90a439e30 not found: ID does not exist" containerID="9699b5988b28544c95408f109cc71852b296965edc67499480ed92d90a439e30" Jan 31 08:49:59 crc kubenswrapper[4826]: I0131 08:49:59.362544 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9699b5988b28544c95408f109cc71852b296965edc67499480ed92d90a439e30"} err="failed to get container status \"9699b5988b28544c95408f109cc71852b296965edc67499480ed92d90a439e30\": rpc error: code = NotFound desc = could not find container \"9699b5988b28544c95408f109cc71852b296965edc67499480ed92d90a439e30\": container with ID starting with 9699b5988b28544c95408f109cc71852b296965edc67499480ed92d90a439e30 not found: ID does not exist" Jan 31 08:50:00 crc kubenswrapper[4826]: I0131 08:50:00.823337 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03fde839-67ec-4a7d-abe4-31faea8ab68d" path="/var/lib/kubelet/pods/03fde839-67ec-4a7d-abe4-31faea8ab68d/volumes" Jan 31 08:50:07 crc kubenswrapper[4826]: I0131 08:50:07.809349 4826 scope.go:117] "RemoveContainer" containerID="7eba4a1e5e418a5b89ba4b523b59b3f4c42351dd83f53bcdc9e5ec984ec59162" Jan 31 08:50:07 crc kubenswrapper[4826]: E0131 08:50:07.810482 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:50:13 crc kubenswrapper[4826]: I0131 08:50:13.361992 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizontest-tests-horizontest" event={"ID":"d18b10f2-cbcf-4b97-825a-30be7171be8f","Type":"ContainerDied","Data":"63828f351c462f60aca6273856d1edca9a5684e2da34aa3fa777cddecb8b14bc"} Jan 31 08:50:13 crc kubenswrapper[4826]: I0131 08:50:13.361959 4826 generic.go:334] "Generic (PLEG): container finished" podID="d18b10f2-cbcf-4b97-825a-30be7171be8f" containerID="63828f351c462f60aca6273856d1edca9a5684e2da34aa3fa777cddecb8b14bc" exitCode=0 Jan 31 08:50:14 crc kubenswrapper[4826]: I0131 08:50:14.697297 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizontest-tests-horizontest" Jan 31 08:50:14 crc kubenswrapper[4826]: I0131 08:50:14.837264 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d18b10f2-cbcf-4b97-825a-30be7171be8f-ceph\") pod \"d18b10f2-cbcf-4b97-825a-30be7171be8f\" (UID: \"d18b10f2-cbcf-4b97-825a-30be7171be8f\") " Jan 31 08:50:14 crc kubenswrapper[4826]: I0131 08:50:14.837395 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d18b10f2-cbcf-4b97-825a-30be7171be8f-test-operator-ephemeral-temporary\") pod \"d18b10f2-cbcf-4b97-825a-30be7171be8f\" (UID: \"d18b10f2-cbcf-4b97-825a-30be7171be8f\") " Jan 31 08:50:14 crc kubenswrapper[4826]: I0131 08:50:14.837475 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d18b10f2-cbcf-4b97-825a-30be7171be8f-test-operator-ephemeral-workdir\") pod \"d18b10f2-cbcf-4b97-825a-30be7171be8f\" (UID: \"d18b10f2-cbcf-4b97-825a-30be7171be8f\") " Jan 31 08:50:14 crc kubenswrapper[4826]: I0131 08:50:14.837522 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d18b10f2-cbcf-4b97-825a-30be7171be8f-ca-certs\") pod \"d18b10f2-cbcf-4b97-825a-30be7171be8f\" (UID: \"d18b10f2-cbcf-4b97-825a-30be7171be8f\") " Jan 31 08:50:14 crc kubenswrapper[4826]: I0131 08:50:14.837682 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/d18b10f2-cbcf-4b97-825a-30be7171be8f-test-operator-clouds-config\") pod \"d18b10f2-cbcf-4b97-825a-30be7171be8f\" (UID: \"d18b10f2-cbcf-4b97-825a-30be7171be8f\") " Jan 31 08:50:14 crc kubenswrapper[4826]: I0131 08:50:14.837743 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d18b10f2-cbcf-4b97-825a-30be7171be8f-openstack-config-secret\") pod \"d18b10f2-cbcf-4b97-825a-30be7171be8f\" (UID: \"d18b10f2-cbcf-4b97-825a-30be7171be8f\") " Jan 31 08:50:14 crc kubenswrapper[4826]: I0131 08:50:14.837776 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"d18b10f2-cbcf-4b97-825a-30be7171be8f\" (UID: \"d18b10f2-cbcf-4b97-825a-30be7171be8f\") " Jan 31 08:50:14 crc kubenswrapper[4826]: I0131 08:50:14.837834 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npghs\" (UniqueName: \"kubernetes.io/projected/d18b10f2-cbcf-4b97-825a-30be7171be8f-kube-api-access-npghs\") pod \"d18b10f2-cbcf-4b97-825a-30be7171be8f\" (UID: \"d18b10f2-cbcf-4b97-825a-30be7171be8f\") " Jan 31 08:50:14 crc kubenswrapper[4826]: I0131 08:50:14.838025 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d18b10f2-cbcf-4b97-825a-30be7171be8f-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "d18b10f2-cbcf-4b97-825a-30be7171be8f" (UID: "d18b10f2-cbcf-4b97-825a-30be7171be8f"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:50:14 crc kubenswrapper[4826]: I0131 08:50:14.838382 4826 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d18b10f2-cbcf-4b97-825a-30be7171be8f-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 31 08:50:14 crc kubenswrapper[4826]: I0131 08:50:14.844108 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d18b10f2-cbcf-4b97-825a-30be7171be8f-ceph" (OuterVolumeSpecName: "ceph") pod "d18b10f2-cbcf-4b97-825a-30be7171be8f" (UID: "d18b10f2-cbcf-4b97-825a-30be7171be8f"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:50:14 crc kubenswrapper[4826]: I0131 08:50:14.847499 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "test-operator-logs") pod "d18b10f2-cbcf-4b97-825a-30be7171be8f" (UID: "d18b10f2-cbcf-4b97-825a-30be7171be8f"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 08:50:14 crc kubenswrapper[4826]: I0131 08:50:14.854316 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d18b10f2-cbcf-4b97-825a-30be7171be8f-kube-api-access-npghs" (OuterVolumeSpecName: "kube-api-access-npghs") pod "d18b10f2-cbcf-4b97-825a-30be7171be8f" (UID: "d18b10f2-cbcf-4b97-825a-30be7171be8f"). InnerVolumeSpecName "kube-api-access-npghs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:50:14 crc kubenswrapper[4826]: I0131 08:50:14.872383 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d18b10f2-cbcf-4b97-825a-30be7171be8f-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "d18b10f2-cbcf-4b97-825a-30be7171be8f" (UID: "d18b10f2-cbcf-4b97-825a-30be7171be8f"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:50:14 crc kubenswrapper[4826]: I0131 08:50:14.907904 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d18b10f2-cbcf-4b97-825a-30be7171be8f-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "d18b10f2-cbcf-4b97-825a-30be7171be8f" (UID: "d18b10f2-cbcf-4b97-825a-30be7171be8f"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 08:50:14 crc kubenswrapper[4826]: I0131 08:50:14.908956 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d18b10f2-cbcf-4b97-825a-30be7171be8f-test-operator-clouds-config" (OuterVolumeSpecName: "test-operator-clouds-config") pod "d18b10f2-cbcf-4b97-825a-30be7171be8f" (UID: "d18b10f2-cbcf-4b97-825a-30be7171be8f"). InnerVolumeSpecName "test-operator-clouds-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 08:50:14 crc kubenswrapper[4826]: I0131 08:50:14.941443 4826 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Jan 31 08:50:14 crc kubenswrapper[4826]: I0131 08:50:14.941573 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npghs\" (UniqueName: \"kubernetes.io/projected/d18b10f2-cbcf-4b97-825a-30be7171be8f-kube-api-access-npghs\") on node \"crc\" DevicePath \"\"" Jan 31 08:50:14 crc kubenswrapper[4826]: I0131 08:50:14.941634 4826 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d18b10f2-cbcf-4b97-825a-30be7171be8f-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 08:50:14 crc kubenswrapper[4826]: I0131 08:50:14.941694 4826 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d18b10f2-cbcf-4b97-825a-30be7171be8f-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 31 08:50:14 crc kubenswrapper[4826]: I0131 08:50:14.941749 4826 reconciler_common.go:293] "Volume detached for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/d18b10f2-cbcf-4b97-825a-30be7171be8f-test-operator-clouds-config\") on node \"crc\" DevicePath \"\"" Jan 31 08:50:14 crc kubenswrapper[4826]: I0131 08:50:14.941810 4826 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d18b10f2-cbcf-4b97-825a-30be7171be8f-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 31 08:50:14 crc kubenswrapper[4826]: I0131 08:50:14.969523 4826 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Jan 31 08:50:15 crc kubenswrapper[4826]: I0131 08:50:15.043621 4826 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Jan 31 08:50:15 crc kubenswrapper[4826]: I0131 08:50:15.068815 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d18b10f2-cbcf-4b97-825a-30be7171be8f-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "d18b10f2-cbcf-4b97-825a-30be7171be8f" (UID: "d18b10f2-cbcf-4b97-825a-30be7171be8f"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:50:15 crc kubenswrapper[4826]: I0131 08:50:15.145452 4826 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d18b10f2-cbcf-4b97-825a-30be7171be8f-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 31 08:50:15 crc kubenswrapper[4826]: I0131 08:50:15.379625 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizontest-tests-horizontest" event={"ID":"d18b10f2-cbcf-4b97-825a-30be7171be8f","Type":"ContainerDied","Data":"e9b7c85023c60f8251aa0013a0ec2f076f90fc6cc7152655c0a031306cca32ab"} Jan 31 08:50:15 crc kubenswrapper[4826]: I0131 08:50:15.379664 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9b7c85023c60f8251aa0013a0ec2f076f90fc6cc7152655c0a031306cca32ab" Jan 31 08:50:15 crc kubenswrapper[4826]: I0131 08:50:15.379683 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizontest-tests-horizontest" Jan 31 08:50:20 crc kubenswrapper[4826]: I0131 08:50:20.808660 4826 scope.go:117] "RemoveContainer" containerID="7eba4a1e5e418a5b89ba4b523b59b3f4c42351dd83f53bcdc9e5ec984ec59162" Jan 31 08:50:20 crc kubenswrapper[4826]: E0131 08:50:20.809838 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:50:23 crc kubenswrapper[4826]: I0131 08:50:23.403607 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest"] Jan 31 08:50:23 crc kubenswrapper[4826]: E0131 08:50:23.404493 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03fde839-67ec-4a7d-abe4-31faea8ab68d" containerName="extract-content" Jan 31 08:50:23 crc kubenswrapper[4826]: I0131 08:50:23.404508 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="03fde839-67ec-4a7d-abe4-31faea8ab68d" containerName="extract-content" Jan 31 08:50:23 crc kubenswrapper[4826]: E0131 08:50:23.404521 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03fde839-67ec-4a7d-abe4-31faea8ab68d" containerName="extract-utilities" Jan 31 08:50:23 crc kubenswrapper[4826]: I0131 08:50:23.404527 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="03fde839-67ec-4a7d-abe4-31faea8ab68d" containerName="extract-utilities" Jan 31 08:50:23 crc kubenswrapper[4826]: E0131 08:50:23.404570 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03fde839-67ec-4a7d-abe4-31faea8ab68d" containerName="registry-server" Jan 31 08:50:23 crc kubenswrapper[4826]: I0131 08:50:23.404577 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="03fde839-67ec-4a7d-abe4-31faea8ab68d" containerName="registry-server" Jan 31 08:50:23 crc kubenswrapper[4826]: E0131 08:50:23.404587 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d18b10f2-cbcf-4b97-825a-30be7171be8f" containerName="horizontest-tests-horizontest" Jan 31 08:50:23 crc kubenswrapper[4826]: I0131 08:50:23.404593 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="d18b10f2-cbcf-4b97-825a-30be7171be8f" 
containerName="horizontest-tests-horizontest" Jan 31 08:50:23 crc kubenswrapper[4826]: I0131 08:50:23.404766 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="d18b10f2-cbcf-4b97-825a-30be7171be8f" containerName="horizontest-tests-horizontest" Jan 31 08:50:23 crc kubenswrapper[4826]: I0131 08:50:23.404778 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="03fde839-67ec-4a7d-abe4-31faea8ab68d" containerName="registry-server" Jan 31 08:50:23 crc kubenswrapper[4826]: I0131 08:50:23.405492 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest" Jan 31 08:50:23 crc kubenswrapper[4826]: I0131 08:50:23.413416 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest"] Jan 31 08:50:23 crc kubenswrapper[4826]: I0131 08:50:23.507166 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl7kb\" (UniqueName: \"kubernetes.io/projected/3268897c-fd9e-4ee6-8ec2-2d721c0796c6-kube-api-access-nl7kb\") pod \"test-operator-logs-pod-horizontest-horizontest-tests-horizontest\" (UID: \"3268897c-fd9e-4ee6-8ec2-2d721c0796c6\") " pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest" Jan 31 08:50:23 crc kubenswrapper[4826]: I0131 08:50:23.507517 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-horizontest-horizontest-tests-horizontest\" (UID: \"3268897c-fd9e-4ee6-8ec2-2d721c0796c6\") " pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest" Jan 31 08:50:23 crc kubenswrapper[4826]: I0131 08:50:23.609157 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nl7kb\" (UniqueName: \"kubernetes.io/projected/3268897c-fd9e-4ee6-8ec2-2d721c0796c6-kube-api-access-nl7kb\") pod \"test-operator-logs-pod-horizontest-horizontest-tests-horizontest\" (UID: \"3268897c-fd9e-4ee6-8ec2-2d721c0796c6\") " pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest" Jan 31 08:50:23 crc kubenswrapper[4826]: I0131 08:50:23.610448 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-horizontest-horizontest-tests-horizontest\" (UID: \"3268897c-fd9e-4ee6-8ec2-2d721c0796c6\") " pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest" Jan 31 08:50:23 crc kubenswrapper[4826]: I0131 08:50:23.610905 4826 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-horizontest-horizontest-tests-horizontest\" (UID: \"3268897c-fd9e-4ee6-8ec2-2d721c0796c6\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest" Jan 31 08:50:23 crc kubenswrapper[4826]: I0131 08:50:23.629428 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nl7kb\" (UniqueName: \"kubernetes.io/projected/3268897c-fd9e-4ee6-8ec2-2d721c0796c6-kube-api-access-nl7kb\") pod 
\"test-operator-logs-pod-horizontest-horizontest-tests-horizontest\" (UID: \"3268897c-fd9e-4ee6-8ec2-2d721c0796c6\") " pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest" Jan 31 08:50:23 crc kubenswrapper[4826]: I0131 08:50:23.635757 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-horizontest-horizontest-tests-horizontest\" (UID: \"3268897c-fd9e-4ee6-8ec2-2d721c0796c6\") " pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest" Jan 31 08:50:23 crc kubenswrapper[4826]: I0131 08:50:23.736743 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest" Jan 31 08:50:23 crc kubenswrapper[4826]: E0131 08:50:23.737511 4826 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Jan 31 08:50:24 crc kubenswrapper[4826]: I0131 08:50:24.193328 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest"] Jan 31 08:50:24 crc kubenswrapper[4826]: E0131 08:50:24.195763 4826 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Jan 31 08:50:24 crc kubenswrapper[4826]: I0131 08:50:24.458587 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest" event={"ID":"3268897c-fd9e-4ee6-8ec2-2d721c0796c6","Type":"ContainerStarted","Data":"6b35a448727bd9b09feb5b576998d09ebd47d7e4386d26580d7c1f4d04813f7a"} Jan 31 08:50:24 crc kubenswrapper[4826]: E0131 08:50:24.615716 4826 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Jan 31 08:50:25 crc kubenswrapper[4826]: I0131 08:50:25.468273 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest" event={"ID":"3268897c-fd9e-4ee6-8ec2-2d721c0796c6","Type":"ContainerStarted","Data":"46d6871d57ddf7d75132ef3ad8bd8656531a536584da1c9384120cc3dbe9b9fd"} Jan 31 08:50:25 crc kubenswrapper[4826]: E0131 08:50:25.469187 4826 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Jan 31 08:50:25 crc kubenswrapper[4826]: I0131 08:50:25.482905 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest" podStartSLOduration=2.06477342 podStartE2EDuration="2.48288151s" podCreationTimestamp="2026-01-31 08:50:23 +0000 UTC" firstStartedPulling="2026-01-31 08:50:24.197548606 +0000 UTC m=+4456.051434965" lastFinishedPulling="2026-01-31 08:50:24.615656696 +0000 UTC m=+4456.469543055" observedRunningTime="2026-01-31 08:50:25.480931785 +0000 UTC 
m=+4457.334818144" watchObservedRunningTime="2026-01-31 08:50:25.48288151 +0000 UTC m=+4457.336767879" Jan 31 08:50:26 crc kubenswrapper[4826]: E0131 08:50:26.478906 4826 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Jan 31 08:50:34 crc kubenswrapper[4826]: I0131 08:50:34.810397 4826 scope.go:117] "RemoveContainer" containerID="7eba4a1e5e418a5b89ba4b523b59b3f4c42351dd83f53bcdc9e5ec984ec59162" Jan 31 08:50:34 crc kubenswrapper[4826]: E0131 08:50:34.811480 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:50:49 crc kubenswrapper[4826]: I0131 08:50:49.809459 4826 scope.go:117] "RemoveContainer" containerID="7eba4a1e5e418a5b89ba4b523b59b3f4c42351dd83f53bcdc9e5ec984ec59162" Jan 31 08:50:49 crc kubenswrapper[4826]: E0131 08:50:49.810474 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:51:00 crc kubenswrapper[4826]: I0131 08:51:00.809217 4826 scope.go:117] "RemoveContainer" containerID="7eba4a1e5e418a5b89ba4b523b59b3f4c42351dd83f53bcdc9e5ec984ec59162" Jan 31 08:51:00 crc kubenswrapper[4826]: E0131 08:51:00.810241 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:51:06 crc kubenswrapper[4826]: I0131 08:51:06.558366 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vxtgz/must-gather-jdrpb"] Jan 31 08:51:06 crc kubenswrapper[4826]: I0131 08:51:06.560739 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vxtgz/must-gather-jdrpb" Jan 31 08:51:06 crc kubenswrapper[4826]: I0131 08:51:06.564282 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vxtgz"/"openshift-service-ca.crt" Jan 31 08:51:06 crc kubenswrapper[4826]: I0131 08:51:06.564432 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vxtgz"/"kube-root-ca.crt" Jan 31 08:51:06 crc kubenswrapper[4826]: I0131 08:51:06.564702 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-vxtgz"/"default-dockercfg-czxhj" Jan 31 08:51:06 crc kubenswrapper[4826]: I0131 08:51:06.584609 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vxtgz/must-gather-jdrpb"] Jan 31 08:51:06 crc kubenswrapper[4826]: I0131 08:51:06.651359 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5fzk\" (UniqueName: \"kubernetes.io/projected/dedbf67b-784c-4411-b680-87adeee09404-kube-api-access-k5fzk\") pod \"must-gather-jdrpb\" (UID: \"dedbf67b-784c-4411-b680-87adeee09404\") " pod="openshift-must-gather-vxtgz/must-gather-jdrpb" Jan 31 08:51:06 crc kubenswrapper[4826]: I0131 08:51:06.651532 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/dedbf67b-784c-4411-b680-87adeee09404-must-gather-output\") pod \"must-gather-jdrpb\" (UID: \"dedbf67b-784c-4411-b680-87adeee09404\") " pod="openshift-must-gather-vxtgz/must-gather-jdrpb" Jan 31 08:51:06 crc kubenswrapper[4826]: I0131 08:51:06.754121 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/dedbf67b-784c-4411-b680-87adeee09404-must-gather-output\") pod \"must-gather-jdrpb\" (UID: \"dedbf67b-784c-4411-b680-87adeee09404\") " pod="openshift-must-gather-vxtgz/must-gather-jdrpb" Jan 31 08:51:06 crc kubenswrapper[4826]: I0131 08:51:06.754293 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5fzk\" (UniqueName: \"kubernetes.io/projected/dedbf67b-784c-4411-b680-87adeee09404-kube-api-access-k5fzk\") pod \"must-gather-jdrpb\" (UID: \"dedbf67b-784c-4411-b680-87adeee09404\") " pod="openshift-must-gather-vxtgz/must-gather-jdrpb" Jan 31 08:51:06 crc kubenswrapper[4826]: I0131 08:51:06.754686 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/dedbf67b-784c-4411-b680-87adeee09404-must-gather-output\") pod \"must-gather-jdrpb\" (UID: \"dedbf67b-784c-4411-b680-87adeee09404\") " pod="openshift-must-gather-vxtgz/must-gather-jdrpb" Jan 31 08:51:06 crc kubenswrapper[4826]: I0131 08:51:06.774416 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5fzk\" (UniqueName: \"kubernetes.io/projected/dedbf67b-784c-4411-b680-87adeee09404-kube-api-access-k5fzk\") pod \"must-gather-jdrpb\" (UID: \"dedbf67b-784c-4411-b680-87adeee09404\") " pod="openshift-must-gather-vxtgz/must-gather-jdrpb" Jan 31 08:51:06 crc kubenswrapper[4826]: I0131 08:51:06.886572 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vxtgz/must-gather-jdrpb" Jan 31 08:51:07 crc kubenswrapper[4826]: I0131 08:51:07.379865 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vxtgz/must-gather-jdrpb"] Jan 31 08:51:07 crc kubenswrapper[4826]: I0131 08:51:07.895579 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vxtgz/must-gather-jdrpb" event={"ID":"dedbf67b-784c-4411-b680-87adeee09404","Type":"ContainerStarted","Data":"648766ac12eb71f07712aee788ac1f1d3216f9c6ea87cc1625cc8704a585a248"} Jan 31 08:51:11 crc kubenswrapper[4826]: I0131 08:51:11.809566 4826 scope.go:117] "RemoveContainer" containerID="7eba4a1e5e418a5b89ba4b523b59b3f4c42351dd83f53bcdc9e5ec984ec59162" Jan 31 08:51:11 crc kubenswrapper[4826]: E0131 08:51:11.810452 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:51:13 crc kubenswrapper[4826]: I0131 08:51:13.972501 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vxtgz/must-gather-jdrpb" event={"ID":"dedbf67b-784c-4411-b680-87adeee09404","Type":"ContainerStarted","Data":"9e12092bf471b2219499190d4be774be5d4dc64bb8f2996c58a3f3d12adfe9f4"} Jan 31 08:51:13 crc kubenswrapper[4826]: I0131 08:51:13.973201 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vxtgz/must-gather-jdrpb" event={"ID":"dedbf67b-784c-4411-b680-87adeee09404","Type":"ContainerStarted","Data":"1440e9ddb09790e97272afac93a2bb41049dd1ec1b2c76ffa88a3d6af6ab8fbb"} Jan 31 08:51:13 crc kubenswrapper[4826]: I0131 08:51:13.993866 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vxtgz/must-gather-jdrpb" podStartSLOduration=3.030884983 podStartE2EDuration="7.993849224s" podCreationTimestamp="2026-01-31 08:51:06 +0000 UTC" firstStartedPulling="2026-01-31 08:51:07.384362381 +0000 UTC m=+4499.238248730" lastFinishedPulling="2026-01-31 08:51:12.347326612 +0000 UTC m=+4504.201212971" observedRunningTime="2026-01-31 08:51:13.993143603 +0000 UTC m=+4505.847029982" watchObservedRunningTime="2026-01-31 08:51:13.993849224 +0000 UTC m=+4505.847735593" Jan 31 08:51:17 crc kubenswrapper[4826]: I0131 08:51:17.900597 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vxtgz/crc-debug-z2xnh"] Jan 31 08:51:17 crc kubenswrapper[4826]: I0131 08:51:17.902789 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vxtgz/crc-debug-z2xnh" Jan 31 08:51:18 crc kubenswrapper[4826]: I0131 08:51:18.009693 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fb748b39-c294-41d9-8dbf-fcad3f0a0b67-host\") pod \"crc-debug-z2xnh\" (UID: \"fb748b39-c294-41d9-8dbf-fcad3f0a0b67\") " pod="openshift-must-gather-vxtgz/crc-debug-z2xnh" Jan 31 08:51:18 crc kubenswrapper[4826]: I0131 08:51:18.010471 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdd2s\" (UniqueName: \"kubernetes.io/projected/fb748b39-c294-41d9-8dbf-fcad3f0a0b67-kube-api-access-tdd2s\") pod \"crc-debug-z2xnh\" (UID: \"fb748b39-c294-41d9-8dbf-fcad3f0a0b67\") " pod="openshift-must-gather-vxtgz/crc-debug-z2xnh" Jan 31 08:51:18 crc kubenswrapper[4826]: I0131 08:51:18.112210 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fb748b39-c294-41d9-8dbf-fcad3f0a0b67-host\") pod \"crc-debug-z2xnh\" (UID: \"fb748b39-c294-41d9-8dbf-fcad3f0a0b67\") " pod="openshift-must-gather-vxtgz/crc-debug-z2xnh" Jan 31 08:51:18 crc kubenswrapper[4826]: I0131 08:51:18.112349 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdd2s\" (UniqueName: \"kubernetes.io/projected/fb748b39-c294-41d9-8dbf-fcad3f0a0b67-kube-api-access-tdd2s\") pod \"crc-debug-z2xnh\" (UID: \"fb748b39-c294-41d9-8dbf-fcad3f0a0b67\") " pod="openshift-must-gather-vxtgz/crc-debug-z2xnh" Jan 31 08:51:18 crc kubenswrapper[4826]: I0131 08:51:18.112373 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fb748b39-c294-41d9-8dbf-fcad3f0a0b67-host\") pod \"crc-debug-z2xnh\" (UID: \"fb748b39-c294-41d9-8dbf-fcad3f0a0b67\") " pod="openshift-must-gather-vxtgz/crc-debug-z2xnh" Jan 31 08:51:18 crc kubenswrapper[4826]: I0131 08:51:18.132030 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdd2s\" (UniqueName: \"kubernetes.io/projected/fb748b39-c294-41d9-8dbf-fcad3f0a0b67-kube-api-access-tdd2s\") pod \"crc-debug-z2xnh\" (UID: \"fb748b39-c294-41d9-8dbf-fcad3f0a0b67\") " pod="openshift-must-gather-vxtgz/crc-debug-z2xnh" Jan 31 08:51:18 crc kubenswrapper[4826]: I0131 08:51:18.223650 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vxtgz/crc-debug-z2xnh" Jan 31 08:51:18 crc kubenswrapper[4826]: W0131 08:51:18.276094 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb748b39_c294_41d9_8dbf_fcad3f0a0b67.slice/crio-963742c98e87cb49c52523a79a8cb8a583832062d8ad4211a1cfa7f44f64014b WatchSource:0}: Error finding container 963742c98e87cb49c52523a79a8cb8a583832062d8ad4211a1cfa7f44f64014b: Status 404 returned error can't find the container with id 963742c98e87cb49c52523a79a8cb8a583832062d8ad4211a1cfa7f44f64014b Jan 31 08:51:19 crc kubenswrapper[4826]: I0131 08:51:19.010723 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vxtgz/crc-debug-z2xnh" event={"ID":"fb748b39-c294-41d9-8dbf-fcad3f0a0b67","Type":"ContainerStarted","Data":"963742c98e87cb49c52523a79a8cb8a583832062d8ad4211a1cfa7f44f64014b"} Jan 31 08:51:22 crc kubenswrapper[4826]: I0131 08:51:22.809476 4826 scope.go:117] "RemoveContainer" containerID="7eba4a1e5e418a5b89ba4b523b59b3f4c42351dd83f53bcdc9e5ec984ec59162" Jan 31 08:51:22 crc kubenswrapper[4826]: E0131 08:51:22.810169 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:51:31 crc kubenswrapper[4826]: I0131 08:51:31.121425 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vxtgz/crc-debug-z2xnh" event={"ID":"fb748b39-c294-41d9-8dbf-fcad3f0a0b67","Type":"ContainerStarted","Data":"58db2343c658b6e0e9b8a3d3b6b64a6e50b70fc136c46e7c71a67d9c133ddc9d"} Jan 31 08:51:31 crc kubenswrapper[4826]: I0131 08:51:31.137589 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vxtgz/crc-debug-z2xnh" podStartSLOduration=2.104395599 podStartE2EDuration="14.137573685s" podCreationTimestamp="2026-01-31 08:51:17 +0000 UTC" firstStartedPulling="2026-01-31 08:51:18.279067639 +0000 UTC m=+4510.132953998" lastFinishedPulling="2026-01-31 08:51:30.312245725 +0000 UTC m=+4522.166132084" observedRunningTime="2026-01-31 08:51:31.135068873 +0000 UTC m=+4522.988955232" watchObservedRunningTime="2026-01-31 08:51:31.137573685 +0000 UTC m=+4522.991460044" Jan 31 08:51:33 crc kubenswrapper[4826]: I0131 08:51:33.808812 4826 scope.go:117] "RemoveContainer" containerID="7eba4a1e5e418a5b89ba4b523b59b3f4c42351dd83f53bcdc9e5ec984ec59162" Jan 31 08:51:33 crc kubenswrapper[4826]: E0131 08:51:33.809624 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:51:37 crc kubenswrapper[4826]: I0131 08:51:37.686717 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7gwz7"] Jan 31 08:51:37 crc kubenswrapper[4826]: I0131 08:51:37.689016 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7gwz7" Jan 31 08:51:37 crc kubenswrapper[4826]: I0131 08:51:37.705755 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7gwz7"] Jan 31 08:51:37 crc kubenswrapper[4826]: I0131 08:51:37.830397 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45-utilities\") pod \"redhat-operators-7gwz7\" (UID: \"8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45\") " pod="openshift-marketplace/redhat-operators-7gwz7" Jan 31 08:51:37 crc kubenswrapper[4826]: I0131 08:51:37.830471 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45-catalog-content\") pod \"redhat-operators-7gwz7\" (UID: \"8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45\") " pod="openshift-marketplace/redhat-operators-7gwz7" Jan 31 08:51:37 crc kubenswrapper[4826]: I0131 08:51:37.830530 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l955z\" (UniqueName: \"kubernetes.io/projected/8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45-kube-api-access-l955z\") pod \"redhat-operators-7gwz7\" (UID: \"8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45\") " pod="openshift-marketplace/redhat-operators-7gwz7" Jan 31 08:51:37 crc kubenswrapper[4826]: I0131 08:51:37.932444 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45-utilities\") pod \"redhat-operators-7gwz7\" (UID: \"8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45\") " pod="openshift-marketplace/redhat-operators-7gwz7" Jan 31 08:51:37 crc kubenswrapper[4826]: I0131 08:51:37.932555 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45-catalog-content\") pod \"redhat-operators-7gwz7\" (UID: \"8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45\") " pod="openshift-marketplace/redhat-operators-7gwz7" Jan 31 08:51:37 crc kubenswrapper[4826]: I0131 08:51:37.932617 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l955z\" (UniqueName: \"kubernetes.io/projected/8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45-kube-api-access-l955z\") pod \"redhat-operators-7gwz7\" (UID: \"8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45\") " pod="openshift-marketplace/redhat-operators-7gwz7" Jan 31 08:51:37 crc kubenswrapper[4826]: I0131 08:51:37.933254 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45-utilities\") pod \"redhat-operators-7gwz7\" (UID: \"8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45\") " pod="openshift-marketplace/redhat-operators-7gwz7" Jan 31 08:51:37 crc kubenswrapper[4826]: I0131 08:51:37.933478 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45-catalog-content\") pod \"redhat-operators-7gwz7\" (UID: \"8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45\") " pod="openshift-marketplace/redhat-operators-7gwz7" Jan 31 08:51:37 crc kubenswrapper[4826]: I0131 08:51:37.953656 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-l955z\" (UniqueName: \"kubernetes.io/projected/8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45-kube-api-access-l955z\") pod \"redhat-operators-7gwz7\" (UID: \"8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45\") " pod="openshift-marketplace/redhat-operators-7gwz7" Jan 31 08:51:38 crc kubenswrapper[4826]: I0131 08:51:38.008366 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7gwz7" Jan 31 08:51:42 crc kubenswrapper[4826]: I0131 08:51:42.663721 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7gwz7"] Jan 31 08:51:43 crc kubenswrapper[4826]: I0131 08:51:43.225813 4826 generic.go:334] "Generic (PLEG): container finished" podID="8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45" containerID="a041fdb4e0f27d2f05d00e2a52b176995d9104afa7ab132d3c1905bbebe3c2fd" exitCode=0 Jan 31 08:51:43 crc kubenswrapper[4826]: I0131 08:51:43.225911 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7gwz7" event={"ID":"8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45","Type":"ContainerDied","Data":"a041fdb4e0f27d2f05d00e2a52b176995d9104afa7ab132d3c1905bbebe3c2fd"} Jan 31 08:51:43 crc kubenswrapper[4826]: I0131 08:51:43.226104 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7gwz7" event={"ID":"8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45","Type":"ContainerStarted","Data":"19ff62d51a783d592786c810cc85f3ecc534d90d7a46698b992330e136c037f3"} Jan 31 08:51:43 crc kubenswrapper[4826]: I0131 08:51:43.228730 4826 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 08:51:45 crc kubenswrapper[4826]: I0131 08:51:45.246182 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7gwz7" event={"ID":"8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45","Type":"ContainerStarted","Data":"943275db72af4c6ebbb17d3827afc2d6172479702895b2952db2decf373f36f9"} Jan 31 08:51:45 crc kubenswrapper[4826]: E0131 08:51:45.689604 4826 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8bbc2f52_3d9c_4aae_9d2f_63e055ab1c45.slice/crio-943275db72af4c6ebbb17d3827afc2d6172479702895b2952db2decf373f36f9.scope\": RecentStats: unable to find data in memory cache]" Jan 31 08:51:46 crc kubenswrapper[4826]: I0131 08:51:46.810218 4826 scope.go:117] "RemoveContainer" containerID="7eba4a1e5e418a5b89ba4b523b59b3f4c42351dd83f53bcdc9e5ec984ec59162" Jan 31 08:51:46 crc kubenswrapper[4826]: E0131 08:51:46.811116 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:51:47 crc kubenswrapper[4826]: I0131 08:51:47.263429 4826 generic.go:334] "Generic (PLEG): container finished" podID="8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45" containerID="943275db72af4c6ebbb17d3827afc2d6172479702895b2952db2decf373f36f9" exitCode=0 Jan 31 08:51:47 crc kubenswrapper[4826]: I0131 08:51:47.263475 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7gwz7" 
event={"ID":"8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45","Type":"ContainerDied","Data":"943275db72af4c6ebbb17d3827afc2d6172479702895b2952db2decf373f36f9"} Jan 31 08:51:50 crc kubenswrapper[4826]: I0131 08:51:50.288019 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7gwz7" event={"ID":"8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45","Type":"ContainerStarted","Data":"0f20efb8900c95e0e65a959167f5ff62ba1138ac61b3f850fcf120e9edee57b2"} Jan 31 08:51:50 crc kubenswrapper[4826]: I0131 08:51:50.312667 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7gwz7" podStartSLOduration=8.737502521 podStartE2EDuration="13.312641702s" podCreationTimestamp="2026-01-31 08:51:37 +0000 UTC" firstStartedPulling="2026-01-31 08:51:43.228498904 +0000 UTC m=+4535.082385263" lastFinishedPulling="2026-01-31 08:51:47.803638085 +0000 UTC m=+4539.657524444" observedRunningTime="2026-01-31 08:51:50.307619897 +0000 UTC m=+4542.161506266" watchObservedRunningTime="2026-01-31 08:51:50.312641702 +0000 UTC m=+4542.166528061" Jan 31 08:51:52 crc kubenswrapper[4826]: I0131 08:51:52.305909 4826 generic.go:334] "Generic (PLEG): container finished" podID="fb748b39-c294-41d9-8dbf-fcad3f0a0b67" containerID="58db2343c658b6e0e9b8a3d3b6b64a6e50b70fc136c46e7c71a67d9c133ddc9d" exitCode=0 Jan 31 08:51:52 crc kubenswrapper[4826]: I0131 08:51:52.306003 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vxtgz/crc-debug-z2xnh" event={"ID":"fb748b39-c294-41d9-8dbf-fcad3f0a0b67","Type":"ContainerDied","Data":"58db2343c658b6e0e9b8a3d3b6b64a6e50b70fc136c46e7c71a67d9c133ddc9d"} Jan 31 08:51:53 crc kubenswrapper[4826]: I0131 08:51:53.412387 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vxtgz/crc-debug-z2xnh" Jan 31 08:51:53 crc kubenswrapper[4826]: I0131 08:51:53.438128 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fb748b39-c294-41d9-8dbf-fcad3f0a0b67-host\") pod \"fb748b39-c294-41d9-8dbf-fcad3f0a0b67\" (UID: \"fb748b39-c294-41d9-8dbf-fcad3f0a0b67\") " Jan 31 08:51:53 crc kubenswrapper[4826]: I0131 08:51:53.438214 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdd2s\" (UniqueName: \"kubernetes.io/projected/fb748b39-c294-41d9-8dbf-fcad3f0a0b67-kube-api-access-tdd2s\") pod \"fb748b39-c294-41d9-8dbf-fcad3f0a0b67\" (UID: \"fb748b39-c294-41d9-8dbf-fcad3f0a0b67\") " Jan 31 08:51:53 crc kubenswrapper[4826]: I0131 08:51:53.438313 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb748b39-c294-41d9-8dbf-fcad3f0a0b67-host" (OuterVolumeSpecName: "host") pod "fb748b39-c294-41d9-8dbf-fcad3f0a0b67" (UID: "fb748b39-c294-41d9-8dbf-fcad3f0a0b67"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 08:51:53 crc kubenswrapper[4826]: I0131 08:51:53.438799 4826 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fb748b39-c294-41d9-8dbf-fcad3f0a0b67-host\") on node \"crc\" DevicePath \"\"" Jan 31 08:51:53 crc kubenswrapper[4826]: I0131 08:51:53.445145 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb748b39-c294-41d9-8dbf-fcad3f0a0b67-kube-api-access-tdd2s" (OuterVolumeSpecName: "kube-api-access-tdd2s") pod "fb748b39-c294-41d9-8dbf-fcad3f0a0b67" (UID: "fb748b39-c294-41d9-8dbf-fcad3f0a0b67"). InnerVolumeSpecName "kube-api-access-tdd2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:51:53 crc kubenswrapper[4826]: I0131 08:51:53.467684 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vxtgz/crc-debug-z2xnh"] Jan 31 08:51:53 crc kubenswrapper[4826]: I0131 08:51:53.480852 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vxtgz/crc-debug-z2xnh"] Jan 31 08:51:53 crc kubenswrapper[4826]: I0131 08:51:53.541181 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdd2s\" (UniqueName: \"kubernetes.io/projected/fb748b39-c294-41d9-8dbf-fcad3f0a0b67-kube-api-access-tdd2s\") on node \"crc\" DevicePath \"\"" Jan 31 08:51:54 crc kubenswrapper[4826]: I0131 08:51:54.323681 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="963742c98e87cb49c52523a79a8cb8a583832062d8ad4211a1cfa7f44f64014b" Jan 31 08:51:54 crc kubenswrapper[4826]: I0131 08:51:54.323757 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vxtgz/crc-debug-z2xnh" Jan 31 08:51:54 crc kubenswrapper[4826]: I0131 08:51:54.644145 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vxtgz/crc-debug-ztd9g"] Jan 31 08:51:54 crc kubenswrapper[4826]: E0131 08:51:54.644610 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb748b39-c294-41d9-8dbf-fcad3f0a0b67" containerName="container-00" Jan 31 08:51:54 crc kubenswrapper[4826]: I0131 08:51:54.644624 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb748b39-c294-41d9-8dbf-fcad3f0a0b67" containerName="container-00" Jan 31 08:51:54 crc kubenswrapper[4826]: I0131 08:51:54.644861 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb748b39-c294-41d9-8dbf-fcad3f0a0b67" containerName="container-00" Jan 31 08:51:54 crc kubenswrapper[4826]: I0131 08:51:54.645472 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vxtgz/crc-debug-ztd9g" Jan 31 08:51:54 crc kubenswrapper[4826]: I0131 08:51:54.662129 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcbsl\" (UniqueName: \"kubernetes.io/projected/b70b57cc-90d4-4430-bdd5-8cbc4e6574b9-kube-api-access-zcbsl\") pod \"crc-debug-ztd9g\" (UID: \"b70b57cc-90d4-4430-bdd5-8cbc4e6574b9\") " pod="openshift-must-gather-vxtgz/crc-debug-ztd9g" Jan 31 08:51:54 crc kubenswrapper[4826]: I0131 08:51:54.662699 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b70b57cc-90d4-4430-bdd5-8cbc4e6574b9-host\") pod \"crc-debug-ztd9g\" (UID: \"b70b57cc-90d4-4430-bdd5-8cbc4e6574b9\") " pod="openshift-must-gather-vxtgz/crc-debug-ztd9g" Jan 31 08:51:54 crc kubenswrapper[4826]: I0131 08:51:54.765902 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b70b57cc-90d4-4430-bdd5-8cbc4e6574b9-host\") pod \"crc-debug-ztd9g\" (UID: \"b70b57cc-90d4-4430-bdd5-8cbc4e6574b9\") " pod="openshift-must-gather-vxtgz/crc-debug-ztd9g" Jan 31 08:51:54 crc kubenswrapper[4826]: I0131 08:51:54.766595 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcbsl\" (UniqueName: \"kubernetes.io/projected/b70b57cc-90d4-4430-bdd5-8cbc4e6574b9-kube-api-access-zcbsl\") pod \"crc-debug-ztd9g\" (UID: \"b70b57cc-90d4-4430-bdd5-8cbc4e6574b9\") " pod="openshift-must-gather-vxtgz/crc-debug-ztd9g" Jan 31 08:51:54 crc kubenswrapper[4826]: I0131 08:51:54.766167 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b70b57cc-90d4-4430-bdd5-8cbc4e6574b9-host\") pod \"crc-debug-ztd9g\" (UID: \"b70b57cc-90d4-4430-bdd5-8cbc4e6574b9\") " pod="openshift-must-gather-vxtgz/crc-debug-ztd9g" Jan 31 08:51:54 crc kubenswrapper[4826]: E0131 08:51:54.809357 4826 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Jan 31 08:51:54 crc kubenswrapper[4826]: I0131 08:51:54.820381 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb748b39-c294-41d9-8dbf-fcad3f0a0b67" path="/var/lib/kubelet/pods/fb748b39-c294-41d9-8dbf-fcad3f0a0b67/volumes" Jan 31 08:51:54 crc kubenswrapper[4826]: I0131 08:51:54.848951 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcbsl\" (UniqueName: \"kubernetes.io/projected/b70b57cc-90d4-4430-bdd5-8cbc4e6574b9-kube-api-access-zcbsl\") pod \"crc-debug-ztd9g\" (UID: \"b70b57cc-90d4-4430-bdd5-8cbc4e6574b9\") " pod="openshift-must-gather-vxtgz/crc-debug-ztd9g" Jan 31 08:51:54 crc kubenswrapper[4826]: I0131 08:51:54.965827 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vxtgz/crc-debug-ztd9g" Jan 31 08:51:55 crc kubenswrapper[4826]: I0131 08:51:55.332840 4826 generic.go:334] "Generic (PLEG): container finished" podID="b70b57cc-90d4-4430-bdd5-8cbc4e6574b9" containerID="1d06afea75cb6567a44667cb7e037ed54212a476e1e9baba52dceb214bf7d3ab" exitCode=1 Jan 31 08:51:55 crc kubenswrapper[4826]: I0131 08:51:55.332897 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vxtgz/crc-debug-ztd9g" event={"ID":"b70b57cc-90d4-4430-bdd5-8cbc4e6574b9","Type":"ContainerDied","Data":"1d06afea75cb6567a44667cb7e037ed54212a476e1e9baba52dceb214bf7d3ab"} Jan 31 08:51:55 crc kubenswrapper[4826]: I0131 08:51:55.333443 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vxtgz/crc-debug-ztd9g" event={"ID":"b70b57cc-90d4-4430-bdd5-8cbc4e6574b9","Type":"ContainerStarted","Data":"dde3087add5c62f2661637e2d8ca06c79c77abbae88e038c74eea2381becab74"} Jan 31 08:51:55 crc kubenswrapper[4826]: I0131 08:51:55.367606 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vxtgz/crc-debug-ztd9g"] Jan 31 08:51:55 crc kubenswrapper[4826]: I0131 08:51:55.381719 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vxtgz/crc-debug-ztd9g"] Jan 31 08:51:56 crc kubenswrapper[4826]: I0131 08:51:56.463680 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vxtgz/crc-debug-ztd9g" Jan 31 08:51:56 crc kubenswrapper[4826]: I0131 08:51:56.514902 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcbsl\" (UniqueName: \"kubernetes.io/projected/b70b57cc-90d4-4430-bdd5-8cbc4e6574b9-kube-api-access-zcbsl\") pod \"b70b57cc-90d4-4430-bdd5-8cbc4e6574b9\" (UID: \"b70b57cc-90d4-4430-bdd5-8cbc4e6574b9\") " Jan 31 08:51:56 crc kubenswrapper[4826]: I0131 08:51:56.515124 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b70b57cc-90d4-4430-bdd5-8cbc4e6574b9-host\") pod \"b70b57cc-90d4-4430-bdd5-8cbc4e6574b9\" (UID: \"b70b57cc-90d4-4430-bdd5-8cbc4e6574b9\") " Jan 31 08:51:56 crc kubenswrapper[4826]: I0131 08:51:56.515277 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b70b57cc-90d4-4430-bdd5-8cbc4e6574b9-host" (OuterVolumeSpecName: "host") pod "b70b57cc-90d4-4430-bdd5-8cbc4e6574b9" (UID: "b70b57cc-90d4-4430-bdd5-8cbc4e6574b9"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 08:51:56 crc kubenswrapper[4826]: I0131 08:51:56.515765 4826 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b70b57cc-90d4-4430-bdd5-8cbc4e6574b9-host\") on node \"crc\" DevicePath \"\"" Jan 31 08:51:56 crc kubenswrapper[4826]: I0131 08:51:56.523302 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b70b57cc-90d4-4430-bdd5-8cbc4e6574b9-kube-api-access-zcbsl" (OuterVolumeSpecName: "kube-api-access-zcbsl") pod "b70b57cc-90d4-4430-bdd5-8cbc4e6574b9" (UID: "b70b57cc-90d4-4430-bdd5-8cbc4e6574b9"). InnerVolumeSpecName "kube-api-access-zcbsl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:51:56 crc kubenswrapper[4826]: I0131 08:51:56.617703 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcbsl\" (UniqueName: \"kubernetes.io/projected/b70b57cc-90d4-4430-bdd5-8cbc4e6574b9-kube-api-access-zcbsl\") on node \"crc\" DevicePath \"\"" Jan 31 08:51:56 crc kubenswrapper[4826]: I0131 08:51:56.819378 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b70b57cc-90d4-4430-bdd5-8cbc4e6574b9" path="/var/lib/kubelet/pods/b70b57cc-90d4-4430-bdd5-8cbc4e6574b9/volumes" Jan 31 08:51:57 crc kubenswrapper[4826]: I0131 08:51:57.351187 4826 scope.go:117] "RemoveContainer" containerID="1d06afea75cb6567a44667cb7e037ed54212a476e1e9baba52dceb214bf7d3ab" Jan 31 08:51:57 crc kubenswrapper[4826]: I0131 08:51:57.351232 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vxtgz/crc-debug-ztd9g" Jan 31 08:51:58 crc kubenswrapper[4826]: I0131 08:51:58.009100 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7gwz7" Jan 31 08:51:58 crc kubenswrapper[4826]: I0131 08:51:58.010095 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7gwz7" Jan 31 08:51:58 crc kubenswrapper[4826]: I0131 08:51:58.063232 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7gwz7" Jan 31 08:51:58 crc kubenswrapper[4826]: I0131 08:51:58.405949 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7gwz7" Jan 31 08:51:58 crc kubenswrapper[4826]: I0131 08:51:58.460124 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7gwz7"] Jan 31 08:51:58 crc kubenswrapper[4826]: I0131 08:51:58.815278 4826 scope.go:117] "RemoveContainer" containerID="7eba4a1e5e418a5b89ba4b523b59b3f4c42351dd83f53bcdc9e5ec984ec59162" Jan 31 08:51:58 crc kubenswrapper[4826]: E0131 08:51:58.815533 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:52:00 crc kubenswrapper[4826]: I0131 08:52:00.375601 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7gwz7" podUID="8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45" containerName="registry-server" containerID="cri-o://0f20efb8900c95e0e65a959167f5ff62ba1138ac61b3f850fcf120e9edee57b2" gracePeriod=2 Jan 31 08:52:00 crc kubenswrapper[4826]: I0131 08:52:00.872193 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7gwz7" Jan 31 08:52:00 crc kubenswrapper[4826]: I0131 08:52:00.915655 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45-catalog-content\") pod \"8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45\" (UID: \"8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45\") " Jan 31 08:52:00 crc kubenswrapper[4826]: I0131 08:52:00.917206 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l955z\" (UniqueName: \"kubernetes.io/projected/8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45-kube-api-access-l955z\") pod \"8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45\" (UID: \"8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45\") " Jan 31 08:52:00 crc kubenswrapper[4826]: I0131 08:52:00.917327 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45-utilities\") pod \"8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45\" (UID: \"8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45\") " Jan 31 08:52:00 crc kubenswrapper[4826]: I0131 08:52:00.920609 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45-utilities" (OuterVolumeSpecName: "utilities") pod "8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45" (UID: "8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:52:00 crc kubenswrapper[4826]: I0131 08:52:00.926937 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45-kube-api-access-l955z" (OuterVolumeSpecName: "kube-api-access-l955z") pod "8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45" (UID: "8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45"). InnerVolumeSpecName "kube-api-access-l955z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:52:01 crc kubenswrapper[4826]: I0131 08:52:01.020415 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l955z\" (UniqueName: \"kubernetes.io/projected/8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45-kube-api-access-l955z\") on node \"crc\" DevicePath \"\"" Jan 31 08:52:01 crc kubenswrapper[4826]: I0131 08:52:01.020451 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 08:52:01 crc kubenswrapper[4826]: I0131 08:52:01.080654 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45" (UID: "8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:52:01 crc kubenswrapper[4826]: I0131 08:52:01.122523 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 08:52:01 crc kubenswrapper[4826]: I0131 08:52:01.387310 4826 generic.go:334] "Generic (PLEG): container finished" podID="8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45" containerID="0f20efb8900c95e0e65a959167f5ff62ba1138ac61b3f850fcf120e9edee57b2" exitCode=0 Jan 31 08:52:01 crc kubenswrapper[4826]: I0131 08:52:01.387352 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7gwz7" event={"ID":"8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45","Type":"ContainerDied","Data":"0f20efb8900c95e0e65a959167f5ff62ba1138ac61b3f850fcf120e9edee57b2"} Jan 31 08:52:01 crc kubenswrapper[4826]: I0131 08:52:01.387379 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7gwz7" event={"ID":"8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45","Type":"ContainerDied","Data":"19ff62d51a783d592786c810cc85f3ecc534d90d7a46698b992330e136c037f3"} Jan 31 08:52:01 crc kubenswrapper[4826]: I0131 08:52:01.387395 4826 scope.go:117] "RemoveContainer" containerID="0f20efb8900c95e0e65a959167f5ff62ba1138ac61b3f850fcf120e9edee57b2" Jan 31 08:52:01 crc kubenswrapper[4826]: I0131 08:52:01.387522 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7gwz7" Jan 31 08:52:01 crc kubenswrapper[4826]: I0131 08:52:01.410699 4826 scope.go:117] "RemoveContainer" containerID="943275db72af4c6ebbb17d3827afc2d6172479702895b2952db2decf373f36f9" Jan 31 08:52:01 crc kubenswrapper[4826]: I0131 08:52:01.428253 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7gwz7"] Jan 31 08:52:01 crc kubenswrapper[4826]: I0131 08:52:01.439992 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7gwz7"] Jan 31 08:52:01 crc kubenswrapper[4826]: I0131 08:52:01.441102 4826 scope.go:117] "RemoveContainer" containerID="a041fdb4e0f27d2f05d00e2a52b176995d9104afa7ab132d3c1905bbebe3c2fd" Jan 31 08:52:01 crc kubenswrapper[4826]: I0131 08:52:01.486618 4826 scope.go:117] "RemoveContainer" containerID="0f20efb8900c95e0e65a959167f5ff62ba1138ac61b3f850fcf120e9edee57b2" Jan 31 08:52:01 crc kubenswrapper[4826]: E0131 08:52:01.487112 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f20efb8900c95e0e65a959167f5ff62ba1138ac61b3f850fcf120e9edee57b2\": container with ID starting with 0f20efb8900c95e0e65a959167f5ff62ba1138ac61b3f850fcf120e9edee57b2 not found: ID does not exist" containerID="0f20efb8900c95e0e65a959167f5ff62ba1138ac61b3f850fcf120e9edee57b2" Jan 31 08:52:01 crc kubenswrapper[4826]: I0131 08:52:01.487156 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f20efb8900c95e0e65a959167f5ff62ba1138ac61b3f850fcf120e9edee57b2"} err="failed to get container status \"0f20efb8900c95e0e65a959167f5ff62ba1138ac61b3f850fcf120e9edee57b2\": rpc error: code = NotFound desc = could not find container \"0f20efb8900c95e0e65a959167f5ff62ba1138ac61b3f850fcf120e9edee57b2\": container with ID starting with 0f20efb8900c95e0e65a959167f5ff62ba1138ac61b3f850fcf120e9edee57b2 not found: ID does not exist" Jan 31 08:52:01 crc 
kubenswrapper[4826]: I0131 08:52:01.487193 4826 scope.go:117] "RemoveContainer" containerID="943275db72af4c6ebbb17d3827afc2d6172479702895b2952db2decf373f36f9" Jan 31 08:52:01 crc kubenswrapper[4826]: E0131 08:52:01.487526 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"943275db72af4c6ebbb17d3827afc2d6172479702895b2952db2decf373f36f9\": container with ID starting with 943275db72af4c6ebbb17d3827afc2d6172479702895b2952db2decf373f36f9 not found: ID does not exist" containerID="943275db72af4c6ebbb17d3827afc2d6172479702895b2952db2decf373f36f9" Jan 31 08:52:01 crc kubenswrapper[4826]: I0131 08:52:01.487557 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"943275db72af4c6ebbb17d3827afc2d6172479702895b2952db2decf373f36f9"} err="failed to get container status \"943275db72af4c6ebbb17d3827afc2d6172479702895b2952db2decf373f36f9\": rpc error: code = NotFound desc = could not find container \"943275db72af4c6ebbb17d3827afc2d6172479702895b2952db2decf373f36f9\": container with ID starting with 943275db72af4c6ebbb17d3827afc2d6172479702895b2952db2decf373f36f9 not found: ID does not exist" Jan 31 08:52:01 crc kubenswrapper[4826]: I0131 08:52:01.487575 4826 scope.go:117] "RemoveContainer" containerID="a041fdb4e0f27d2f05d00e2a52b176995d9104afa7ab132d3c1905bbebe3c2fd" Jan 31 08:52:01 crc kubenswrapper[4826]: E0131 08:52:01.487818 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a041fdb4e0f27d2f05d00e2a52b176995d9104afa7ab132d3c1905bbebe3c2fd\": container with ID starting with a041fdb4e0f27d2f05d00e2a52b176995d9104afa7ab132d3c1905bbebe3c2fd not found: ID does not exist" containerID="a041fdb4e0f27d2f05d00e2a52b176995d9104afa7ab132d3c1905bbebe3c2fd" Jan 31 08:52:01 crc kubenswrapper[4826]: I0131 08:52:01.487850 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a041fdb4e0f27d2f05d00e2a52b176995d9104afa7ab132d3c1905bbebe3c2fd"} err="failed to get container status \"a041fdb4e0f27d2f05d00e2a52b176995d9104afa7ab132d3c1905bbebe3c2fd\": rpc error: code = NotFound desc = could not find container \"a041fdb4e0f27d2f05d00e2a52b176995d9104afa7ab132d3c1905bbebe3c2fd\": container with ID starting with a041fdb4e0f27d2f05d00e2a52b176995d9104afa7ab132d3c1905bbebe3c2fd not found: ID does not exist" Jan 31 08:52:02 crc kubenswrapper[4826]: I0131 08:52:02.821879 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45" path="/var/lib/kubelet/pods/8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45/volumes" Jan 31 08:52:09 crc kubenswrapper[4826]: I0131 08:52:09.809942 4826 scope.go:117] "RemoveContainer" containerID="7eba4a1e5e418a5b89ba4b523b59b3f4c42351dd83f53bcdc9e5ec984ec59162" Jan 31 08:52:09 crc kubenswrapper[4826]: E0131 08:52:09.811298 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:52:11 crc kubenswrapper[4826]: I0131 08:52:11.710189 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-spd87"] Jan 31 
08:52:11 crc kubenswrapper[4826]: E0131 08:52:11.711060 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b70b57cc-90d4-4430-bdd5-8cbc4e6574b9" containerName="container-00" Jan 31 08:52:11 crc kubenswrapper[4826]: I0131 08:52:11.711080 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="b70b57cc-90d4-4430-bdd5-8cbc4e6574b9" containerName="container-00" Jan 31 08:52:11 crc kubenswrapper[4826]: E0131 08:52:11.711114 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45" containerName="registry-server" Jan 31 08:52:11 crc kubenswrapper[4826]: I0131 08:52:11.711121 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45" containerName="registry-server" Jan 31 08:52:11 crc kubenswrapper[4826]: E0131 08:52:11.711143 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45" containerName="extract-utilities" Jan 31 08:52:11 crc kubenswrapper[4826]: I0131 08:52:11.711151 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45" containerName="extract-utilities" Jan 31 08:52:11 crc kubenswrapper[4826]: E0131 08:52:11.711179 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45" containerName="extract-content" Jan 31 08:52:11 crc kubenswrapper[4826]: I0131 08:52:11.711188 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45" containerName="extract-content" Jan 31 08:52:11 crc kubenswrapper[4826]: I0131 08:52:11.711404 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="b70b57cc-90d4-4430-bdd5-8cbc4e6574b9" containerName="container-00" Jan 31 08:52:11 crc kubenswrapper[4826]: I0131 08:52:11.711438 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bbc2f52-3d9c-4aae-9d2f-63e055ab1c45" containerName="registry-server" Jan 31 08:52:11 crc kubenswrapper[4826]: I0131 08:52:11.713094 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-spd87" Jan 31 08:52:11 crc kubenswrapper[4826]: I0131 08:52:11.726034 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-spd87"] Jan 31 08:52:11 crc kubenswrapper[4826]: I0131 08:52:11.775250 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de32e68c-eff1-4593-b270-dd29ea4b7a00-utilities\") pod \"redhat-marketplace-spd87\" (UID: \"de32e68c-eff1-4593-b270-dd29ea4b7a00\") " pod="openshift-marketplace/redhat-marketplace-spd87" Jan 31 08:52:11 crc kubenswrapper[4826]: I0131 08:52:11.775410 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvs2z\" (UniqueName: \"kubernetes.io/projected/de32e68c-eff1-4593-b270-dd29ea4b7a00-kube-api-access-vvs2z\") pod \"redhat-marketplace-spd87\" (UID: \"de32e68c-eff1-4593-b270-dd29ea4b7a00\") " pod="openshift-marketplace/redhat-marketplace-spd87" Jan 31 08:52:11 crc kubenswrapper[4826]: I0131 08:52:11.775535 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de32e68c-eff1-4593-b270-dd29ea4b7a00-catalog-content\") pod \"redhat-marketplace-spd87\" (UID: \"de32e68c-eff1-4593-b270-dd29ea4b7a00\") " pod="openshift-marketplace/redhat-marketplace-spd87" Jan 31 08:52:11 crc kubenswrapper[4826]: I0131 08:52:11.877063 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvs2z\" (UniqueName: \"kubernetes.io/projected/de32e68c-eff1-4593-b270-dd29ea4b7a00-kube-api-access-vvs2z\") pod \"redhat-marketplace-spd87\" (UID: \"de32e68c-eff1-4593-b270-dd29ea4b7a00\") " pod="openshift-marketplace/redhat-marketplace-spd87" Jan 31 08:52:11 crc kubenswrapper[4826]: I0131 08:52:11.877151 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de32e68c-eff1-4593-b270-dd29ea4b7a00-catalog-content\") pod \"redhat-marketplace-spd87\" (UID: \"de32e68c-eff1-4593-b270-dd29ea4b7a00\") " pod="openshift-marketplace/redhat-marketplace-spd87" Jan 31 08:52:11 crc kubenswrapper[4826]: I0131 08:52:11.877306 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de32e68c-eff1-4593-b270-dd29ea4b7a00-utilities\") pod \"redhat-marketplace-spd87\" (UID: \"de32e68c-eff1-4593-b270-dd29ea4b7a00\") " pod="openshift-marketplace/redhat-marketplace-spd87" Jan 31 08:52:11 crc kubenswrapper[4826]: I0131 08:52:11.878098 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de32e68c-eff1-4593-b270-dd29ea4b7a00-utilities\") pod \"redhat-marketplace-spd87\" (UID: \"de32e68c-eff1-4593-b270-dd29ea4b7a00\") " pod="openshift-marketplace/redhat-marketplace-spd87" Jan 31 08:52:11 crc kubenswrapper[4826]: I0131 08:52:11.878111 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de32e68c-eff1-4593-b270-dd29ea4b7a00-catalog-content\") pod \"redhat-marketplace-spd87\" (UID: \"de32e68c-eff1-4593-b270-dd29ea4b7a00\") " pod="openshift-marketplace/redhat-marketplace-spd87" Jan 31 08:52:11 crc kubenswrapper[4826]: I0131 08:52:11.899716 4826 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-vvs2z\" (UniqueName: \"kubernetes.io/projected/de32e68c-eff1-4593-b270-dd29ea4b7a00-kube-api-access-vvs2z\") pod \"redhat-marketplace-spd87\" (UID: \"de32e68c-eff1-4593-b270-dd29ea4b7a00\") " pod="openshift-marketplace/redhat-marketplace-spd87" Jan 31 08:52:12 crc kubenswrapper[4826]: I0131 08:52:12.084680 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-spd87" Jan 31 08:52:12 crc kubenswrapper[4826]: I0131 08:52:12.621854 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-spd87"] Jan 31 08:52:13 crc kubenswrapper[4826]: I0131 08:52:13.507231 4826 generic.go:334] "Generic (PLEG): container finished" podID="de32e68c-eff1-4593-b270-dd29ea4b7a00" containerID="c2ef5d7f9fd763331dd720a2257aae469c6e8c9701f6b91c38a0d2f8c21b8e90" exitCode=0 Jan 31 08:52:13 crc kubenswrapper[4826]: I0131 08:52:13.507285 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-spd87" event={"ID":"de32e68c-eff1-4593-b270-dd29ea4b7a00","Type":"ContainerDied","Data":"c2ef5d7f9fd763331dd720a2257aae469c6e8c9701f6b91c38a0d2f8c21b8e90"} Jan 31 08:52:13 crc kubenswrapper[4826]: I0131 08:52:13.507773 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-spd87" event={"ID":"de32e68c-eff1-4593-b270-dd29ea4b7a00","Type":"ContainerStarted","Data":"e8ca215e5b19b44e6b46653fb24854c4bea952fdd51dcd09bb40480989cc88ea"} Jan 31 08:52:14 crc kubenswrapper[4826]: I0131 08:52:14.522645 4826 generic.go:334] "Generic (PLEG): container finished" podID="de32e68c-eff1-4593-b270-dd29ea4b7a00" containerID="e364a16a81e8afa9300bf2d36dbdaeb67ad69e590c0a57e777510979a5997832" exitCode=0 Jan 31 08:52:14 crc kubenswrapper[4826]: I0131 08:52:14.522703 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-spd87" event={"ID":"de32e68c-eff1-4593-b270-dd29ea4b7a00","Type":"ContainerDied","Data":"e364a16a81e8afa9300bf2d36dbdaeb67ad69e590c0a57e777510979a5997832"} Jan 31 08:52:16 crc kubenswrapper[4826]: I0131 08:52:16.544339 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-spd87" event={"ID":"de32e68c-eff1-4593-b270-dd29ea4b7a00","Type":"ContainerStarted","Data":"140d9467da973cc0859395431280de08e4ad46c36dc609375e65edd3484bc59e"} Jan 31 08:52:22 crc kubenswrapper[4826]: I0131 08:52:22.085609 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-spd87" Jan 31 08:52:22 crc kubenswrapper[4826]: I0131 08:52:22.086094 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-spd87" Jan 31 08:52:22 crc kubenswrapper[4826]: I0131 08:52:22.135794 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-spd87" Jan 31 08:52:22 crc kubenswrapper[4826]: I0131 08:52:22.163679 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-spd87" podStartSLOduration=9.732157672 podStartE2EDuration="11.163651397s" podCreationTimestamp="2026-01-31 08:52:11 +0000 UTC" firstStartedPulling="2026-01-31 08:52:13.509762811 +0000 UTC m=+4565.363649170" lastFinishedPulling="2026-01-31 08:52:14.941256536 +0000 UTC m=+4566.795142895" observedRunningTime="2026-01-31 08:52:16.569494387 +0000 UTC 
m=+4568.423380756" watchObservedRunningTime="2026-01-31 08:52:22.163651397 +0000 UTC m=+4574.017537756" Jan 31 08:52:22 crc kubenswrapper[4826]: I0131 08:52:22.653680 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-spd87" Jan 31 08:52:23 crc kubenswrapper[4826]: I0131 08:52:23.771790 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-spd87"] Jan 31 08:52:23 crc kubenswrapper[4826]: I0131 08:52:23.809530 4826 scope.go:117] "RemoveContainer" containerID="7eba4a1e5e418a5b89ba4b523b59b3f4c42351dd83f53bcdc9e5ec984ec59162" Jan 31 08:52:23 crc kubenswrapper[4826]: E0131 08:52:23.809837 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:52:24 crc kubenswrapper[4826]: I0131 08:52:24.615161 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-spd87" podUID="de32e68c-eff1-4593-b270-dd29ea4b7a00" containerName="registry-server" containerID="cri-o://140d9467da973cc0859395431280de08e4ad46c36dc609375e65edd3484bc59e" gracePeriod=2 Jan 31 08:52:25 crc kubenswrapper[4826]: I0131 08:52:25.628753 4826 generic.go:334] "Generic (PLEG): container finished" podID="de32e68c-eff1-4593-b270-dd29ea4b7a00" containerID="140d9467da973cc0859395431280de08e4ad46c36dc609375e65edd3484bc59e" exitCode=0 Jan 31 08:52:25 crc kubenswrapper[4826]: I0131 08:52:25.628843 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-spd87" event={"ID":"de32e68c-eff1-4593-b270-dd29ea4b7a00","Type":"ContainerDied","Data":"140d9467da973cc0859395431280de08e4ad46c36dc609375e65edd3484bc59e"} Jan 31 08:52:25 crc kubenswrapper[4826]: I0131 08:52:25.629610 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-spd87" event={"ID":"de32e68c-eff1-4593-b270-dd29ea4b7a00","Type":"ContainerDied","Data":"e8ca215e5b19b44e6b46653fb24854c4bea952fdd51dcd09bb40480989cc88ea"} Jan 31 08:52:25 crc kubenswrapper[4826]: I0131 08:52:25.629685 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8ca215e5b19b44e6b46653fb24854c4bea952fdd51dcd09bb40480989cc88ea" Jan 31 08:52:25 crc kubenswrapper[4826]: I0131 08:52:25.708540 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-spd87" Jan 31 08:52:25 crc kubenswrapper[4826]: I0131 08:52:25.822667 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de32e68c-eff1-4593-b270-dd29ea4b7a00-catalog-content\") pod \"de32e68c-eff1-4593-b270-dd29ea4b7a00\" (UID: \"de32e68c-eff1-4593-b270-dd29ea4b7a00\") " Jan 31 08:52:25 crc kubenswrapper[4826]: I0131 08:52:25.822748 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvs2z\" (UniqueName: \"kubernetes.io/projected/de32e68c-eff1-4593-b270-dd29ea4b7a00-kube-api-access-vvs2z\") pod \"de32e68c-eff1-4593-b270-dd29ea4b7a00\" (UID: \"de32e68c-eff1-4593-b270-dd29ea4b7a00\") " Jan 31 08:52:25 crc kubenswrapper[4826]: I0131 08:52:25.822868 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de32e68c-eff1-4593-b270-dd29ea4b7a00-utilities\") pod \"de32e68c-eff1-4593-b270-dd29ea4b7a00\" (UID: \"de32e68c-eff1-4593-b270-dd29ea4b7a00\") " Jan 31 08:52:25 crc kubenswrapper[4826]: I0131 08:52:25.824014 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de32e68c-eff1-4593-b270-dd29ea4b7a00-utilities" (OuterVolumeSpecName: "utilities") pod "de32e68c-eff1-4593-b270-dd29ea4b7a00" (UID: "de32e68c-eff1-4593-b270-dd29ea4b7a00"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:52:25 crc kubenswrapper[4826]: I0131 08:52:25.828342 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de32e68c-eff1-4593-b270-dd29ea4b7a00-kube-api-access-vvs2z" (OuterVolumeSpecName: "kube-api-access-vvs2z") pod "de32e68c-eff1-4593-b270-dd29ea4b7a00" (UID: "de32e68c-eff1-4593-b270-dd29ea4b7a00"). InnerVolumeSpecName "kube-api-access-vvs2z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:52:25 crc kubenswrapper[4826]: I0131 08:52:25.846983 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de32e68c-eff1-4593-b270-dd29ea4b7a00-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "de32e68c-eff1-4593-b270-dd29ea4b7a00" (UID: "de32e68c-eff1-4593-b270-dd29ea4b7a00"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:52:25 crc kubenswrapper[4826]: I0131 08:52:25.925035 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de32e68c-eff1-4593-b270-dd29ea4b7a00-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 08:52:25 crc kubenswrapper[4826]: I0131 08:52:25.925076 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvs2z\" (UniqueName: \"kubernetes.io/projected/de32e68c-eff1-4593-b270-dd29ea4b7a00-kube-api-access-vvs2z\") on node \"crc\" DevicePath \"\"" Jan 31 08:52:25 crc kubenswrapper[4826]: I0131 08:52:25.925089 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de32e68c-eff1-4593-b270-dd29ea4b7a00-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 08:52:26 crc kubenswrapper[4826]: I0131 08:52:26.639433 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-spd87" Jan 31 08:52:26 crc kubenswrapper[4826]: I0131 08:52:26.689559 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-spd87"] Jan 31 08:52:26 crc kubenswrapper[4826]: I0131 08:52:26.702645 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-spd87"] Jan 31 08:52:26 crc kubenswrapper[4826]: E0131 08:52:26.704518 4826 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde32e68c_eff1_4593_b270_dd29ea4b7a00.slice\": RecentStats: unable to find data in memory cache]" Jan 31 08:52:26 crc kubenswrapper[4826]: I0131 08:52:26.823741 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de32e68c-eff1-4593-b270-dd29ea4b7a00" path="/var/lib/kubelet/pods/de32e68c-eff1-4593-b270-dd29ea4b7a00/volumes" Jan 31 08:52:35 crc kubenswrapper[4826]: I0131 08:52:35.808994 4826 scope.go:117] "RemoveContainer" containerID="7eba4a1e5e418a5b89ba4b523b59b3f4c42351dd83f53bcdc9e5ec984ec59162" Jan 31 08:52:35 crc kubenswrapper[4826]: E0131 08:52:35.810266 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:52:50 crc kubenswrapper[4826]: I0131 08:52:50.808809 4826 scope.go:117] "RemoveContainer" containerID="7eba4a1e5e418a5b89ba4b523b59b3f4c42351dd83f53bcdc9e5ec984ec59162" Jan 31 08:52:50 crc kubenswrapper[4826]: E0131 08:52:50.809710 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:52:53 crc kubenswrapper[4826]: I0131 08:52:53.834548 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ansibletest-ansibletest_e047424f-d695-4ef5-b24f-0150fd27964c/ansibletest-ansibletest/0.log" Jan 31 08:52:54 crc kubenswrapper[4826]: I0131 08:52:54.069680 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7fbf887dc4-4528v_91ee6dfb-fe40-4e3f-8719-6604432e07f5/barbican-api/0.log" Jan 31 08:52:54 crc kubenswrapper[4826]: I0131 08:52:54.100138 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7fbf887dc4-4528v_91ee6dfb-fe40-4e3f-8719-6604432e07f5/barbican-api-log/0.log" Jan 31 08:52:54 crc kubenswrapper[4826]: I0131 08:52:54.252086 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-568c588dfd-kgqtq_2aea6e8c-06e5-4b80-8133-e8fe6ca42ed2/barbican-keystone-listener/0.log" Jan 31 08:52:54 crc kubenswrapper[4826]: I0131 08:52:54.286994 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-568c588dfd-kgqtq_2aea6e8c-06e5-4b80-8133-e8fe6ca42ed2/barbican-keystone-listener-log/0.log" Jan 31 
08:52:54 crc kubenswrapper[4826]: I0131 08:52:54.395329 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-98b594959-ps2jz_8a2e144b-d873-4386-9328-24f745d25df7/barbican-worker/0.log" Jan 31 08:52:54 crc kubenswrapper[4826]: I0131 08:52:54.463405 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-98b594959-ps2jz_8a2e144b-d873-4386-9328-24f745d25df7/barbican-worker-log/0.log" Jan 31 08:52:54 crc kubenswrapper[4826]: I0131 08:52:54.609398 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-95kt6_50c05a80-be37-4c98-964a-7503a3a430a2/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 08:52:54 crc kubenswrapper[4826]: I0131 08:52:54.690856 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_460b39a1-e8da-444b-b92c-fb9acec1dd12/ceilometer-central-agent/0.log" Jan 31 08:52:54 crc kubenswrapper[4826]: I0131 08:52:54.752908 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_460b39a1-e8da-444b-b92c-fb9acec1dd12/ceilometer-notification-agent/0.log" Jan 31 08:52:54 crc kubenswrapper[4826]: I0131 08:52:54.805764 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_460b39a1-e8da-444b-b92c-fb9acec1dd12/proxy-httpd/0.log" Jan 31 08:52:54 crc kubenswrapper[4826]: I0131 08:52:54.890676 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_460b39a1-e8da-444b-b92c-fb9acec1dd12/sg-core/0.log" Jan 31 08:52:54 crc kubenswrapper[4826]: I0131 08:52:54.982868 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-4z55w_8ffa923f-5c55-4d65-86bf-a6dbc1fde423/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 08:52:55 crc kubenswrapper[4826]: I0131 08:52:55.311144 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jtzjp_ad853f3b-9633-4c89-baa0-0fa82a9498d7/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 08:52:55 crc kubenswrapper[4826]: I0131 08:52:55.456802 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_b2e16f67-d80b-4d2f-9bf0-0ce081212368/cinder-api/0.log" Jan 31 08:52:55 crc kubenswrapper[4826]: I0131 08:52:55.493875 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_b2e16f67-d80b-4d2f-9bf0-0ce081212368/cinder-api-log/0.log" Jan 31 08:52:55 crc kubenswrapper[4826]: I0131 08:52:55.707065 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421/cinder-backup/0.log" Jan 31 08:52:55 crc kubenswrapper[4826]: I0131 08:52:55.742672 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_aa50d3e2-0ae6-4ee1-ab02-82e4d51e5421/probe/0.log" Jan 31 08:52:55 crc kubenswrapper[4826]: I0131 08:52:55.834811 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_e740eda2-f125-48ae-8083-8023f7e20b41/cinder-scheduler/0.log" Jan 31 08:52:55 crc kubenswrapper[4826]: I0131 08:52:55.919571 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_e740eda2-f125-48ae-8083-8023f7e20b41/probe/0.log" Jan 31 08:52:56 crc kubenswrapper[4826]: I0131 08:52:56.072684 4826 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-volume-volume1-0_4319e2c7-04a1-4612-8efe-c656be3fd234/probe/0.log" Jan 31 08:52:56 crc kubenswrapper[4826]: I0131 08:52:56.081489 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_4319e2c7-04a1-4612-8efe-c656be3fd234/cinder-volume/0.log" Jan 31 08:52:56 crc kubenswrapper[4826]: I0131 08:52:56.270557 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-n6vc5_3cb4eff5-cd38-4f4f-8450-6a8483f52276/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 08:52:56 crc kubenswrapper[4826]: I0131 08:52:56.290718 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-jnkqg_1a82d52f-625e-48d8-b546-9acf6922cbd0/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 08:52:56 crc kubenswrapper[4826]: I0131 08:52:56.454247 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69655fd4bf-th7rb_bc40b194-0220-45aa-8ddb-cd77f5a0cafb/init/0.log" Jan 31 08:52:56 crc kubenswrapper[4826]: I0131 08:52:56.613353 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69655fd4bf-th7rb_bc40b194-0220-45aa-8ddb-cd77f5a0cafb/init/0.log" Jan 31 08:52:56 crc kubenswrapper[4826]: I0131 08:52:56.671926 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69655fd4bf-th7rb_bc40b194-0220-45aa-8ddb-cd77f5a0cafb/dnsmasq-dns/0.log" Jan 31 08:52:56 crc kubenswrapper[4826]: I0131 08:52:56.702868 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_36fba9c8-da36-4b64-91f2-ff747c20bee6/glance-httpd/0.log" Jan 31 08:52:56 crc kubenswrapper[4826]: I0131 08:52:56.853505 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_36fba9c8-da36-4b64-91f2-ff747c20bee6/glance-log/0.log" Jan 31 08:52:56 crc kubenswrapper[4826]: I0131 08:52:56.910147 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_403c10ff-88fa-4845-aaed-36ccc5cf9dd2/glance-log/0.log" Jan 31 08:52:56 crc kubenswrapper[4826]: I0131 08:52:56.937801 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_403c10ff-88fa-4845-aaed-36ccc5cf9dd2/glance-httpd/0.log" Jan 31 08:52:57 crc kubenswrapper[4826]: I0131 08:52:57.135289 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-97cdc8cb-tdpkc_e0626292-98a9-4e1f-8359-f734ed8a3118/horizon/0.log" Jan 31 08:52:57 crc kubenswrapper[4826]: I0131 08:52:57.351307 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizontest-tests-horizontest_d18b10f2-cbcf-4b97-825a-30be7171be8f/horizontest-tests-horizontest/0.log" Jan 31 08:52:57 crc kubenswrapper[4826]: I0131 08:52:57.556188 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-qqxm6_801bcd0e-4229-479d-9b21-7b6d71339a15/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 08:52:57 crc kubenswrapper[4826]: I0131 08:52:57.896876 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-97cdc8cb-tdpkc_e0626292-98a9-4e1f-8359-f734ed8a3118/horizon-log/0.log" Jan 31 08:52:58 crc kubenswrapper[4826]: I0131 08:52:58.051903 4826 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-9zll8_517832e1-875d-49f4-8e81-7aa6c6b9a7f9/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 08:52:58 crc kubenswrapper[4826]: I0131 08:52:58.251900 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7676c745b9-d7652_0abfb696-f207-4a48-983f-bf8b62f453d0/keystone-api/0.log" Jan 31 08:52:58 crc kubenswrapper[4826]: I0131 08:52:58.338575 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29497441-5w8mr_04acf005-673c-4a09-b98d-ab5bb3903c71/keystone-cron/0.log" Jan 31 08:52:58 crc kubenswrapper[4826]: I0131 08:52:58.355118 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_ae3a26b8-4b55-49a9-90a6-e66bb00f1425/kube-state-metrics/0.log" Jan 31 08:52:58 crc kubenswrapper[4826]: I0131 08:52:58.505693 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-g74tq_cae3795a-7ee0-4ca7-aada-7f03190fb437/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 08:52:58 crc kubenswrapper[4826]: I0131 08:52:58.604380 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_0b9ee77c-ee1a-48cf-973f-21437b0df988/manila-api-log/0.log" Jan 31 08:52:58 crc kubenswrapper[4826]: I0131 08:52:58.707700 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_0b9ee77c-ee1a-48cf-973f-21437b0df988/manila-api/0.log" Jan 31 08:52:58 crc kubenswrapper[4826]: I0131 08:52:58.855772 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_232b622b-129f-47af-895a-667ef009ae88/manila-scheduler/0.log" Jan 31 08:52:58 crc kubenswrapper[4826]: I0131 08:52:58.938443 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_232b622b-129f-47af-895a-667ef009ae88/probe/0.log" Jan 31 08:52:58 crc kubenswrapper[4826]: I0131 08:52:58.969416 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_62702499-49e3-4ea5-b2da-a4bae827517d/probe/0.log" Jan 31 08:52:59 crc kubenswrapper[4826]: I0131 08:52:59.017945 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_62702499-49e3-4ea5-b2da-a4bae827517d/manila-share/0.log" Jan 31 08:52:59 crc kubenswrapper[4826]: I0131 08:52:59.370683 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-647f745999-xttjx_bed86129-6185-49ee-9e65-0e4767f815fd/neutron-httpd/0.log" Jan 31 08:52:59 crc kubenswrapper[4826]: I0131 08:52:59.458802 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-647f745999-xttjx_bed86129-6185-49ee-9e65-0e4767f815fd/neutron-api/0.log" Jan 31 08:52:59 crc kubenswrapper[4826]: I0131 08:52:59.881787 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7hq_f8db2e50-d73d-4bcd-a2c4-34cfad360222/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 08:53:00 crc kubenswrapper[4826]: I0131 08:53:00.282118 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_2c8b1d46-e795-45e4-a7cb-b09687e17027/nova-api-log/0.log" Jan 31 08:53:00 crc kubenswrapper[4826]: I0131 08:53:00.396814 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_8b210b6f-8e41-4893-9459-2668e1eb96a7/nova-cell0-conductor-conductor/0.log" Jan 31 
08:53:00 crc kubenswrapper[4826]: I0131 08:53:00.447918 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_2c8b1d46-e795-45e4-a7cb-b09687e17027/nova-api-api/0.log" Jan 31 08:53:00 crc kubenswrapper[4826]: I0131 08:53:00.588862 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_7fbb554b-47f1-4266-a6ae-c2d43dd2d692/nova-cell1-conductor-conductor/0.log" Jan 31 08:53:00 crc kubenswrapper[4826]: I0131 08:53:00.804788 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_0e0433b9-901b-4383-8d8b-15e5c006da15/nova-cell1-novncproxy-novncproxy/0.log" Jan 31 08:53:00 crc kubenswrapper[4826]: I0131 08:53:00.909908 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-x5x2r_e0375a71-69b8-4909-b359-f6c66a475f79/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 08:53:01 crc kubenswrapper[4826]: I0131 08:53:01.053360 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_aebb981f-3b13-4115-a5b0-1d4942789f7e/nova-metadata-log/0.log" Jan 31 08:53:01 crc kubenswrapper[4826]: I0131 08:53:01.345943 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_38bbbb8c-80f6-4950-acea-0d800baa1857/nova-scheduler-scheduler/0.log" Jan 31 08:53:01 crc kubenswrapper[4826]: I0131 08:53:01.433590 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c533aa15-c4f0-4198-91fc-fd6e4536091d/mysql-bootstrap/0.log" Jan 31 08:53:01 crc kubenswrapper[4826]: I0131 08:53:01.601037 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c533aa15-c4f0-4198-91fc-fd6e4536091d/mysql-bootstrap/0.log" Jan 31 08:53:01 crc kubenswrapper[4826]: I0131 08:53:01.609709 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c533aa15-c4f0-4198-91fc-fd6e4536091d/galera/0.log" Jan 31 08:53:01 crc kubenswrapper[4826]: I0131 08:53:01.824850 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9c715176-281c-43a4-8e08-7b86520d08da/mysql-bootstrap/0.log" Jan 31 08:53:02 crc kubenswrapper[4826]: I0131 08:53:02.019160 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9c715176-281c-43a4-8e08-7b86520d08da/mysql-bootstrap/0.log" Jan 31 08:53:02 crc kubenswrapper[4826]: I0131 08:53:02.068417 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9c715176-281c-43a4-8e08-7b86520d08da/galera/0.log" Jan 31 08:53:02 crc kubenswrapper[4826]: I0131 08:53:02.247925 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_4b637c7b-111d-4820-acc3-9cd5bff101e7/openstackclient/0.log" Jan 31 08:53:02 crc kubenswrapper[4826]: I0131 08:53:02.345270 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-9rw8k_d141a506-fc38-4e03-9923-31895d8c3f34/openstack-network-exporter/0.log" Jan 31 08:53:02 crc kubenswrapper[4826]: I0131 08:53:02.614392 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-5wdmn_f568bc2f-bf3c-463a-9af8-d98de17ac7b6/ovsdb-server-init/0.log" Jan 31 08:53:02 crc kubenswrapper[4826]: I0131 08:53:02.774011 4826 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-metadata-0_aebb981f-3b13-4115-a5b0-1d4942789f7e/nova-metadata-metadata/0.log" Jan 31 08:53:02 crc kubenswrapper[4826]: I0131 08:53:02.782872 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-5wdmn_f568bc2f-bf3c-463a-9af8-d98de17ac7b6/ovsdb-server-init/0.log" Jan 31 08:53:02 crc kubenswrapper[4826]: I0131 08:53:02.831661 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-5wdmn_f568bc2f-bf3c-463a-9af8-d98de17ac7b6/ovs-vswitchd/0.log" Jan 31 08:53:02 crc kubenswrapper[4826]: I0131 08:53:02.832561 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-5wdmn_f568bc2f-bf3c-463a-9af8-d98de17ac7b6/ovsdb-server/0.log" Jan 31 08:53:03 crc kubenswrapper[4826]: I0131 08:53:03.033219 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-p6hcp_c589c873-9e78-4905-ab37-d49329e9c84f/ovn-controller/0.log" Jan 31 08:53:03 crc kubenswrapper[4826]: I0131 08:53:03.180846 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-jfl9h_3fd20e7c-b2ff-4784-86ea-e74db981caca/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 08:53:03 crc kubenswrapper[4826]: I0131 08:53:03.324311 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_17b9480e-d5e0-4478-9f5c-85caf6bb8f0a/ovn-northd/0.log" Jan 31 08:53:03 crc kubenswrapper[4826]: I0131 08:53:03.328557 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_17b9480e-d5e0-4478-9f5c-85caf6bb8f0a/openstack-network-exporter/0.log" Jan 31 08:53:03 crc kubenswrapper[4826]: I0131 08:53:03.748001 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_89bea4e4-2b41-40b2-b9f3-52a8f2f1fdb6/ovsdbserver-nb/0.log" Jan 31 08:53:03 crc kubenswrapper[4826]: I0131 08:53:03.763150 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_89bea4e4-2b41-40b2-b9f3-52a8f2f1fdb6/openstack-network-exporter/0.log" Jan 31 08:53:03 crc kubenswrapper[4826]: I0131 08:53:03.891267 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_532f5068-1ff9-449a-b8ed-80986499afb5/openstack-network-exporter/0.log" Jan 31 08:53:03 crc kubenswrapper[4826]: I0131 08:53:03.937229 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_532f5068-1ff9-449a-b8ed-80986499afb5/ovsdbserver-sb/0.log" Jan 31 08:53:04 crc kubenswrapper[4826]: I0131 08:53:04.111357 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-78574fb98-ztjmr_526125ca-e810-4cbb-9b5d-5631848e89e3/placement-api/0.log" Jan 31 08:53:04 crc kubenswrapper[4826]: I0131 08:53:04.216538 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_0ce80f95-b8c4-499e-84c4-aceea6e628fd/setup-container/0.log" Jan 31 08:53:04 crc kubenswrapper[4826]: I0131 08:53:04.248881 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-78574fb98-ztjmr_526125ca-e810-4cbb-9b5d-5631848e89e3/placement-log/0.log" Jan 31 08:53:04 crc kubenswrapper[4826]: I0131 08:53:04.421941 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_0ce80f95-b8c4-499e-84c4-aceea6e628fd/setup-container/0.log" Jan 31 08:53:04 crc kubenswrapper[4826]: I0131 08:53:04.491876 4826 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-cell1-server-0_0ce80f95-b8c4-499e-84c4-aceea6e628fd/rabbitmq/0.log" Jan 31 08:53:04 crc kubenswrapper[4826]: I0131 08:53:04.492891 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_db4f48b4-02f0-4f23-a5f8-f024caabed8d/setup-container/0.log" Jan 31 08:53:04 crc kubenswrapper[4826]: I0131 08:53:04.678863 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_db4f48b4-02f0-4f23-a5f8-f024caabed8d/setup-container/0.log" Jan 31 08:53:04 crc kubenswrapper[4826]: I0131 08:53:04.796749 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-dhlps_1b469d06-e2d7-4c7e-a61d-b2e76fd42191/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 08:53:04 crc kubenswrapper[4826]: I0131 08:53:04.804236 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_db4f48b4-02f0-4f23-a5f8-f024caabed8d/rabbitmq/0.log" Jan 31 08:53:04 crc kubenswrapper[4826]: I0131 08:53:04.809335 4826 scope.go:117] "RemoveContainer" containerID="7eba4a1e5e418a5b89ba4b523b59b3f4c42351dd83f53bcdc9e5ec984ec59162" Jan 31 08:53:04 crc kubenswrapper[4826]: E0131 08:53:04.809999 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:53:04 crc kubenswrapper[4826]: I0131 08:53:04.951431 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-f5kls_9c52ad2b-5f3f-4dc5-8f0c-fdf1ac4c8347/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 08:53:05 crc kubenswrapper[4826]: I0131 08:53:05.065037 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-qpp6k_c74306ca-1c03-4b19-b9cf-173122cdada0/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 08:53:05 crc kubenswrapper[4826]: I0131 08:53:05.202544 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-4gqmw_6a722cc4-6eab-4740-9776-6cb0ba8e1575/ssh-known-hosts-edpm-deployment/0.log" Jan 31 08:53:05 crc kubenswrapper[4826]: I0131 08:53:05.395143 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest-s00-full_e068e101-7fa0-42fa-b34b-fb9ba93466aa/tempest-tests-tempest-tests-runner/0.log" Jan 31 08:53:05 crc kubenswrapper[4826]: I0131 08:53:05.486824 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest-s01-single-test_c92b3fc7-df90-4a08-bb1b-aebb65e316b8/tempest-tests-tempest-tests-runner/0.log" Jan 31 08:53:05 crc kubenswrapper[4826]: I0131 08:53:05.570930 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-ansibletest-ansibletest-ansibletest_3e35a6d5-6dc6-45e3-bdee-86dda89b6910/test-operator-logs-container/0.log" Jan 31 08:53:05 crc kubenswrapper[4826]: I0131 08:53:05.698663 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-horizontest-horizontest-tests-horizontest_3268897c-fd9e-4ee6-8ec2-2d721c0796c6/test-operator-logs-container/0.log" Jan 31 
08:53:05 crc kubenswrapper[4826]: E0131 08:53:05.808951 4826 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Jan 31 08:53:05 crc kubenswrapper[4826]: I0131 08:53:05.865711 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_824c9507-e210-4a05-aa63-c07a42d71d3b/test-operator-logs-container/0.log" Jan 31 08:53:05 crc kubenswrapper[4826]: I0131 08:53:05.958441 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tobiko-tobiko-tests-tobiko_cfa22288-274c-4eac-9718-643e63bd02d4/test-operator-logs-container/0.log" Jan 31 08:53:06 crc kubenswrapper[4826]: I0131 08:53:06.177699 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tobiko-tests-tobiko-s00-podified-functional_30581517-aaf5-4565-a879-47605e56918c/tobiko-tests-tobiko/0.log" Jan 31 08:53:06 crc kubenswrapper[4826]: I0131 08:53:06.208886 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tobiko-tests-tobiko-s01-sanity_c571d756-ee78-4c86-9c51-2b5565fcc40e/tobiko-tests-tobiko/0.log" Jan 31 08:53:06 crc kubenswrapper[4826]: I0131 08:53:06.374557 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-prgwg_e2555a90-e246-4467-a217-abd3841e3441/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 08:53:16 crc kubenswrapper[4826]: I0131 08:53:16.816747 4826 scope.go:117] "RemoveContainer" containerID="7eba4a1e5e418a5b89ba4b523b59b3f4c42351dd83f53bcdc9e5ec984ec59162" Jan 31 08:53:16 crc kubenswrapper[4826]: E0131 08:53:16.817715 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:53:22 crc kubenswrapper[4826]: I0131 08:53:22.743253 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_bca5de5c-45fd-4f33-89ab-7c2f3296a8be/memcached/0.log" Jan 31 08:53:27 crc kubenswrapper[4826]: I0131 08:53:27.076837 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8f99h"] Jan 31 08:53:27 crc kubenswrapper[4826]: E0131 08:53:27.080282 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de32e68c-eff1-4593-b270-dd29ea4b7a00" containerName="extract-content" Jan 31 08:53:27 crc kubenswrapper[4826]: I0131 08:53:27.080401 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="de32e68c-eff1-4593-b270-dd29ea4b7a00" containerName="extract-content" Jan 31 08:53:27 crc kubenswrapper[4826]: E0131 08:53:27.080503 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de32e68c-eff1-4593-b270-dd29ea4b7a00" containerName="registry-server" Jan 31 08:53:27 crc kubenswrapper[4826]: I0131 08:53:27.080581 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="de32e68c-eff1-4593-b270-dd29ea4b7a00" containerName="registry-server" Jan 31 08:53:27 crc kubenswrapper[4826]: E0131 08:53:27.080685 4826 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="de32e68c-eff1-4593-b270-dd29ea4b7a00" containerName="extract-utilities" Jan 31 08:53:27 crc kubenswrapper[4826]: I0131 08:53:27.080764 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="de32e68c-eff1-4593-b270-dd29ea4b7a00" containerName="extract-utilities" Jan 31 08:53:27 crc kubenswrapper[4826]: I0131 08:53:27.081165 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="de32e68c-eff1-4593-b270-dd29ea4b7a00" containerName="registry-server" Jan 31 08:53:27 crc kubenswrapper[4826]: I0131 08:53:27.084904 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8f99h" Jan 31 08:53:27 crc kubenswrapper[4826]: I0131 08:53:27.092509 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8f99h"] Jan 31 08:53:27 crc kubenswrapper[4826]: I0131 08:53:27.250469 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpmxm\" (UniqueName: \"kubernetes.io/projected/b5b3f7bb-85c1-45cf-b35b-151b207665f2-kube-api-access-gpmxm\") pod \"community-operators-8f99h\" (UID: \"b5b3f7bb-85c1-45cf-b35b-151b207665f2\") " pod="openshift-marketplace/community-operators-8f99h" Jan 31 08:53:27 crc kubenswrapper[4826]: I0131 08:53:27.250562 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5b3f7bb-85c1-45cf-b35b-151b207665f2-catalog-content\") pod \"community-operators-8f99h\" (UID: \"b5b3f7bb-85c1-45cf-b35b-151b207665f2\") " pod="openshift-marketplace/community-operators-8f99h" Jan 31 08:53:27 crc kubenswrapper[4826]: I0131 08:53:27.250585 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5b3f7bb-85c1-45cf-b35b-151b207665f2-utilities\") pod \"community-operators-8f99h\" (UID: \"b5b3f7bb-85c1-45cf-b35b-151b207665f2\") " pod="openshift-marketplace/community-operators-8f99h" Jan 31 08:53:27 crc kubenswrapper[4826]: I0131 08:53:27.352582 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpmxm\" (UniqueName: \"kubernetes.io/projected/b5b3f7bb-85c1-45cf-b35b-151b207665f2-kube-api-access-gpmxm\") pod \"community-operators-8f99h\" (UID: \"b5b3f7bb-85c1-45cf-b35b-151b207665f2\") " pod="openshift-marketplace/community-operators-8f99h" Jan 31 08:53:27 crc kubenswrapper[4826]: I0131 08:53:27.352718 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5b3f7bb-85c1-45cf-b35b-151b207665f2-catalog-content\") pod \"community-operators-8f99h\" (UID: \"b5b3f7bb-85c1-45cf-b35b-151b207665f2\") " pod="openshift-marketplace/community-operators-8f99h" Jan 31 08:53:27 crc kubenswrapper[4826]: I0131 08:53:27.352750 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5b3f7bb-85c1-45cf-b35b-151b207665f2-utilities\") pod \"community-operators-8f99h\" (UID: \"b5b3f7bb-85c1-45cf-b35b-151b207665f2\") " pod="openshift-marketplace/community-operators-8f99h" Jan 31 08:53:27 crc kubenswrapper[4826]: I0131 08:53:27.353351 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5b3f7bb-85c1-45cf-b35b-151b207665f2-catalog-content\") pod 
\"community-operators-8f99h\" (UID: \"b5b3f7bb-85c1-45cf-b35b-151b207665f2\") " pod="openshift-marketplace/community-operators-8f99h" Jan 31 08:53:27 crc kubenswrapper[4826]: I0131 08:53:27.353383 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5b3f7bb-85c1-45cf-b35b-151b207665f2-utilities\") pod \"community-operators-8f99h\" (UID: \"b5b3f7bb-85c1-45cf-b35b-151b207665f2\") " pod="openshift-marketplace/community-operators-8f99h" Jan 31 08:53:27 crc kubenswrapper[4826]: I0131 08:53:27.384829 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpmxm\" (UniqueName: \"kubernetes.io/projected/b5b3f7bb-85c1-45cf-b35b-151b207665f2-kube-api-access-gpmxm\") pod \"community-operators-8f99h\" (UID: \"b5b3f7bb-85c1-45cf-b35b-151b207665f2\") " pod="openshift-marketplace/community-operators-8f99h" Jan 31 08:53:27 crc kubenswrapper[4826]: I0131 08:53:27.418701 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8f99h" Jan 31 08:53:28 crc kubenswrapper[4826]: I0131 08:53:28.010946 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8f99h"] Jan 31 08:53:28 crc kubenswrapper[4826]: I0131 08:53:28.457951 4826 generic.go:334] "Generic (PLEG): container finished" podID="b5b3f7bb-85c1-45cf-b35b-151b207665f2" containerID="f4c3dcf257852179468a32371394a95ce52624f6413cc549664c1220b5758f46" exitCode=0 Jan 31 08:53:28 crc kubenswrapper[4826]: I0131 08:53:28.458486 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8f99h" event={"ID":"b5b3f7bb-85c1-45cf-b35b-151b207665f2","Type":"ContainerDied","Data":"f4c3dcf257852179468a32371394a95ce52624f6413cc549664c1220b5758f46"} Jan 31 08:53:28 crc kubenswrapper[4826]: I0131 08:53:28.458543 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8f99h" event={"ID":"b5b3f7bb-85c1-45cf-b35b-151b207665f2","Type":"ContainerStarted","Data":"018f27859b90ef5c52588990bdad6d7c2f8c7a18f6709a58ee107823aa28888d"} Jan 31 08:53:29 crc kubenswrapper[4826]: I0131 08:53:29.468771 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8f99h" event={"ID":"b5b3f7bb-85c1-45cf-b35b-151b207665f2","Type":"ContainerStarted","Data":"4c2f975152411a35fd0038f2fd8cd3753fd20de8beac11ffab7900c3e079117b"} Jan 31 08:53:30 crc kubenswrapper[4826]: I0131 08:53:30.479257 4826 generic.go:334] "Generic (PLEG): container finished" podID="b5b3f7bb-85c1-45cf-b35b-151b207665f2" containerID="4c2f975152411a35fd0038f2fd8cd3753fd20de8beac11ffab7900c3e079117b" exitCode=0 Jan 31 08:53:30 crc kubenswrapper[4826]: I0131 08:53:30.479566 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8f99h" event={"ID":"b5b3f7bb-85c1-45cf-b35b-151b207665f2","Type":"ContainerDied","Data":"4c2f975152411a35fd0038f2fd8cd3753fd20de8beac11ffab7900c3e079117b"} Jan 31 08:53:30 crc kubenswrapper[4826]: I0131 08:53:30.809956 4826 scope.go:117] "RemoveContainer" containerID="7eba4a1e5e418a5b89ba4b523b59b3f4c42351dd83f53bcdc9e5ec984ec59162" Jan 31 08:53:30 crc kubenswrapper[4826]: E0131 08:53:30.810903 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:53:31 crc kubenswrapper[4826]: I0131 08:53:31.493831 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8f99h" event={"ID":"b5b3f7bb-85c1-45cf-b35b-151b207665f2","Type":"ContainerStarted","Data":"022471f62c2f2d3977ff35c4363efc28a18c7b1abf22b0989f56ba6763cc24c7"} Jan 31 08:53:36 crc kubenswrapper[4826]: I0131 08:53:36.704819 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0d61f90025f1161856717435dfbf35c1c3e1b3b21e804d05fd9f09167dqnl5q_5033093f-2406-4bad-82e4-2b72dec635f5/util/0.log" Jan 31 08:53:36 crc kubenswrapper[4826]: I0131 08:53:36.906822 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0d61f90025f1161856717435dfbf35c1c3e1b3b21e804d05fd9f09167dqnl5q_5033093f-2406-4bad-82e4-2b72dec635f5/pull/0.log" Jan 31 08:53:36 crc kubenswrapper[4826]: I0131 08:53:36.922606 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0d61f90025f1161856717435dfbf35c1c3e1b3b21e804d05fd9f09167dqnl5q_5033093f-2406-4bad-82e4-2b72dec635f5/util/0.log" Jan 31 08:53:36 crc kubenswrapper[4826]: I0131 08:53:36.923061 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0d61f90025f1161856717435dfbf35c1c3e1b3b21e804d05fd9f09167dqnl5q_5033093f-2406-4bad-82e4-2b72dec635f5/pull/0.log" Jan 31 08:53:37 crc kubenswrapper[4826]: I0131 08:53:37.133176 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0d61f90025f1161856717435dfbf35c1c3e1b3b21e804d05fd9f09167dqnl5q_5033093f-2406-4bad-82e4-2b72dec635f5/pull/0.log" Jan 31 08:53:37 crc kubenswrapper[4826]: I0131 08:53:37.157515 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0d61f90025f1161856717435dfbf35c1c3e1b3b21e804d05fd9f09167dqnl5q_5033093f-2406-4bad-82e4-2b72dec635f5/util/0.log" Jan 31 08:53:37 crc kubenswrapper[4826]: I0131 08:53:37.178500 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0d61f90025f1161856717435dfbf35c1c3e1b3b21e804d05fd9f09167dqnl5q_5033093f-2406-4bad-82e4-2b72dec635f5/extract/0.log" Jan 31 08:53:37 crc kubenswrapper[4826]: I0131 08:53:37.418837 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8f99h" Jan 31 08:53:37 crc kubenswrapper[4826]: I0131 08:53:37.419209 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8f99h" Jan 31 08:53:37 crc kubenswrapper[4826]: I0131 08:53:37.798719 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8f99h" Jan 31 08:53:37 crc kubenswrapper[4826]: I0131 08:53:37.851704 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8f99h" podStartSLOduration=8.41902045 podStartE2EDuration="10.851679857s" podCreationTimestamp="2026-01-31 08:53:27 +0000 UTC" firstStartedPulling="2026-01-31 08:53:28.459929834 +0000 UTC m=+4640.313816193" lastFinishedPulling="2026-01-31 08:53:30.892589241 +0000 UTC m=+4642.746475600" observedRunningTime="2026-01-31 08:53:31.520598898 +0000 UTC m=+4643.374485257" watchObservedRunningTime="2026-01-31 08:53:37.851679857 +0000 UTC 
m=+4649.705566216" Jan 31 08:53:37 crc kubenswrapper[4826]: I0131 08:53:37.853747 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8f99h" Jan 31 08:53:37 crc kubenswrapper[4826]: I0131 08:53:37.995922 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-469df_4ad0581d-4c4f-45b8-b274-cba147fb1f0f/manager/0.log" Jan 31 08:53:38 crc kubenswrapper[4826]: I0131 08:53:38.038609 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8f99h"] Jan 31 08:53:38 crc kubenswrapper[4826]: I0131 08:53:38.057866 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-fzkjg_bbc23de1-ea6a-4ec2-acb0-4ce5b7a6260a/manager/0.log" Jan 31 08:53:38 crc kubenswrapper[4826]: I0131 08:53:38.211825 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-vrsm8_2ca6db75-d35b-4d27-afb7-45698c422257/manager/0.log" Jan 31 08:53:38 crc kubenswrapper[4826]: I0131 08:53:38.235437 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-855qv_bbea7ad0-1edf-41aa-b677-bcaf9ccf72a3/manager/0.log" Jan 31 08:53:38 crc kubenswrapper[4826]: I0131 08:53:38.415333 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-bd4z5_61b6715e-7da4-4f70-8e51-e4cc36c046f6/manager/0.log" Jan 31 08:53:38 crc kubenswrapper[4826]: I0131 08:53:38.505553 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-c6qzg_ae965697-1a1d-498a-be01-35faefac5df1/manager/0.log" Jan 31 08:53:38 crc kubenswrapper[4826]: I0131 08:53:38.691778 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-fbkx2_19e5f098-b188-4252-8e1f-8db1f38dbb75/manager/0.log" Jan 31 08:53:38 crc kubenswrapper[4826]: I0131 08:53:38.930229 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-tdbcd_f7d778f6-12f8-4d10-b106-579471ac576f/manager/0.log" Jan 31 08:53:39 crc kubenswrapper[4826]: I0131 08:53:39.012632 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-fn74b_08a289e4-ca1e-4687-834a-941d23f7f292/manager/0.log" Jan 31 08:53:39 crc kubenswrapper[4826]: I0131 08:53:39.563180 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8f99h" podUID="b5b3f7bb-85c1-45cf-b35b-151b207665f2" containerName="registry-server" containerID="cri-o://022471f62c2f2d3977ff35c4363efc28a18c7b1abf22b0989f56ba6763cc24c7" gracePeriod=2 Jan 31 08:53:39 crc kubenswrapper[4826]: I0131 08:53:39.642404 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-k9b2p_516edc1f-8934-408f-a3f2-15e35f0de6bc/manager/0.log" Jan 31 08:53:39 crc kubenswrapper[4826]: I0131 08:53:39.680035 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-tmrnj_34af55a7-61a5-41e7-a2da-7c631d075cb0/manager/0.log" Jan 31 08:53:39 crc 
kubenswrapper[4826]: I0131 08:53:39.924492 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-x226q_fbd1ba04-6613-4a12-9009-088ebba2e643/manager/0.log" Jan 31 08:53:40 crc kubenswrapper[4826]: I0131 08:53:40.147493 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8f99h" Jan 31 08:53:40 crc kubenswrapper[4826]: I0131 08:53:40.159705 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-p2jb4_8b2f20a7-2570-4a15-b86c-bdfdbd69c529/manager/0.log" Jan 31 08:53:40 crc kubenswrapper[4826]: I0131 08:53:40.203219 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5b3f7bb-85c1-45cf-b35b-151b207665f2-catalog-content\") pod \"b5b3f7bb-85c1-45cf-b35b-151b207665f2\" (UID: \"b5b3f7bb-85c1-45cf-b35b-151b207665f2\") " Jan 31 08:53:40 crc kubenswrapper[4826]: I0131 08:53:40.203420 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpmxm\" (UniqueName: \"kubernetes.io/projected/b5b3f7bb-85c1-45cf-b35b-151b207665f2-kube-api-access-gpmxm\") pod \"b5b3f7bb-85c1-45cf-b35b-151b207665f2\" (UID: \"b5b3f7bb-85c1-45cf-b35b-151b207665f2\") " Jan 31 08:53:40 crc kubenswrapper[4826]: I0131 08:53:40.203533 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5b3f7bb-85c1-45cf-b35b-151b207665f2-utilities\") pod \"b5b3f7bb-85c1-45cf-b35b-151b207665f2\" (UID: \"b5b3f7bb-85c1-45cf-b35b-151b207665f2\") " Jan 31 08:53:40 crc kubenswrapper[4826]: I0131 08:53:40.205611 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5b3f7bb-85c1-45cf-b35b-151b207665f2-utilities" (OuterVolumeSpecName: "utilities") pod "b5b3f7bb-85c1-45cf-b35b-151b207665f2" (UID: "b5b3f7bb-85c1-45cf-b35b-151b207665f2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:53:40 crc kubenswrapper[4826]: I0131 08:53:40.220307 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5b3f7bb-85c1-45cf-b35b-151b207665f2-kube-api-access-gpmxm" (OuterVolumeSpecName: "kube-api-access-gpmxm") pod "b5b3f7bb-85c1-45cf-b35b-151b207665f2" (UID: "b5b3f7bb-85c1-45cf-b35b-151b207665f2"). InnerVolumeSpecName "kube-api-access-gpmxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:53:40 crc kubenswrapper[4826]: I0131 08:53:40.281055 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-7w6rc_8240a7c4-2e26-46f9-9c1d-0d1d9951c2fb/manager/0.log" Jan 31 08:53:40 crc kubenswrapper[4826]: I0131 08:53:40.291269 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5b3f7bb-85c1-45cf-b35b-151b207665f2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b5b3f7bb-85c1-45cf-b35b-151b207665f2" (UID: "b5b3f7bb-85c1-45cf-b35b-151b207665f2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:53:40 crc kubenswrapper[4826]: I0131 08:53:40.305735 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5b3f7bb-85c1-45cf-b35b-151b207665f2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 08:53:40 crc kubenswrapper[4826]: I0131 08:53:40.305762 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpmxm\" (UniqueName: \"kubernetes.io/projected/b5b3f7bb-85c1-45cf-b35b-151b207665f2-kube-api-access-gpmxm\") on node \"crc\" DevicePath \"\"" Jan 31 08:53:40 crc kubenswrapper[4826]: I0131 08:53:40.305773 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5b3f7bb-85c1-45cf-b35b-151b207665f2-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 08:53:40 crc kubenswrapper[4826]: I0131 08:53:40.411042 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4d7flmn_724a7dc5-6b24-44fa-a35a-4aea83f023c7/manager/0.log" Jan 31 08:53:40 crc kubenswrapper[4826]: I0131 08:53:40.601889 4826 generic.go:334] "Generic (PLEG): container finished" podID="b5b3f7bb-85c1-45cf-b35b-151b207665f2" containerID="022471f62c2f2d3977ff35c4363efc28a18c7b1abf22b0989f56ba6763cc24c7" exitCode=0 Jan 31 08:53:40 crc kubenswrapper[4826]: I0131 08:53:40.601940 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8f99h" event={"ID":"b5b3f7bb-85c1-45cf-b35b-151b207665f2","Type":"ContainerDied","Data":"022471f62c2f2d3977ff35c4363efc28a18c7b1abf22b0989f56ba6763cc24c7"} Jan 31 08:53:40 crc kubenswrapper[4826]: I0131 08:53:40.602010 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8f99h" event={"ID":"b5b3f7bb-85c1-45cf-b35b-151b207665f2","Type":"ContainerDied","Data":"018f27859b90ef5c52588990bdad6d7c2f8c7a18f6709a58ee107823aa28888d"} Jan 31 08:53:40 crc kubenswrapper[4826]: I0131 08:53:40.602037 4826 scope.go:117] "RemoveContainer" containerID="022471f62c2f2d3977ff35c4363efc28a18c7b1abf22b0989f56ba6763cc24c7" Jan 31 08:53:40 crc kubenswrapper[4826]: I0131 08:53:40.602248 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8f99h" Jan 31 08:53:40 crc kubenswrapper[4826]: I0131 08:53:40.655326 4826 scope.go:117] "RemoveContainer" containerID="4c2f975152411a35fd0038f2fd8cd3753fd20de8beac11ffab7900c3e079117b" Jan 31 08:53:40 crc kubenswrapper[4826]: I0131 08:53:40.679218 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8f99h"] Jan 31 08:53:40 crc kubenswrapper[4826]: I0131 08:53:40.693937 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8f99h"] Jan 31 08:53:40 crc kubenswrapper[4826]: I0131 08:53:40.697192 4826 scope.go:117] "RemoveContainer" containerID="f4c3dcf257852179468a32371394a95ce52624f6413cc549664c1220b5758f46" Jan 31 08:53:40 crc kubenswrapper[4826]: I0131 08:53:40.735027 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5ffcf8f8f6-8hdbv_3aa77b6e-e6e9-41c5-8217-10a290abd18a/operator/0.log" Jan 31 08:53:40 crc kubenswrapper[4826]: I0131 08:53:40.740110 4826 scope.go:117] "RemoveContainer" containerID="022471f62c2f2d3977ff35c4363efc28a18c7b1abf22b0989f56ba6763cc24c7" Jan 31 08:53:40 crc kubenswrapper[4826]: E0131 08:53:40.740666 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"022471f62c2f2d3977ff35c4363efc28a18c7b1abf22b0989f56ba6763cc24c7\": container with ID starting with 022471f62c2f2d3977ff35c4363efc28a18c7b1abf22b0989f56ba6763cc24c7 not found: ID does not exist" containerID="022471f62c2f2d3977ff35c4363efc28a18c7b1abf22b0989f56ba6763cc24c7" Jan 31 08:53:40 crc kubenswrapper[4826]: I0131 08:53:40.740703 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"022471f62c2f2d3977ff35c4363efc28a18c7b1abf22b0989f56ba6763cc24c7"} err="failed to get container status \"022471f62c2f2d3977ff35c4363efc28a18c7b1abf22b0989f56ba6763cc24c7\": rpc error: code = NotFound desc = could not find container \"022471f62c2f2d3977ff35c4363efc28a18c7b1abf22b0989f56ba6763cc24c7\": container with ID starting with 022471f62c2f2d3977ff35c4363efc28a18c7b1abf22b0989f56ba6763cc24c7 not found: ID does not exist" Jan 31 08:53:40 crc kubenswrapper[4826]: I0131 08:53:40.740731 4826 scope.go:117] "RemoveContainer" containerID="4c2f975152411a35fd0038f2fd8cd3753fd20de8beac11ffab7900c3e079117b" Jan 31 08:53:40 crc kubenswrapper[4826]: E0131 08:53:40.741042 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c2f975152411a35fd0038f2fd8cd3753fd20de8beac11ffab7900c3e079117b\": container with ID starting with 4c2f975152411a35fd0038f2fd8cd3753fd20de8beac11ffab7900c3e079117b not found: ID does not exist" containerID="4c2f975152411a35fd0038f2fd8cd3753fd20de8beac11ffab7900c3e079117b" Jan 31 08:53:40 crc kubenswrapper[4826]: I0131 08:53:40.741066 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c2f975152411a35fd0038f2fd8cd3753fd20de8beac11ffab7900c3e079117b"} err="failed to get container status \"4c2f975152411a35fd0038f2fd8cd3753fd20de8beac11ffab7900c3e079117b\": rpc error: code = NotFound desc = could not find container \"4c2f975152411a35fd0038f2fd8cd3753fd20de8beac11ffab7900c3e079117b\": container with ID starting with 4c2f975152411a35fd0038f2fd8cd3753fd20de8beac11ffab7900c3e079117b not found: ID does not exist" Jan 31 08:53:40 crc kubenswrapper[4826]: 
I0131 08:53:40.741086 4826 scope.go:117] "RemoveContainer" containerID="f4c3dcf257852179468a32371394a95ce52624f6413cc549664c1220b5758f46" Jan 31 08:53:40 crc kubenswrapper[4826]: E0131 08:53:40.741315 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4c3dcf257852179468a32371394a95ce52624f6413cc549664c1220b5758f46\": container with ID starting with f4c3dcf257852179468a32371394a95ce52624f6413cc549664c1220b5758f46 not found: ID does not exist" containerID="f4c3dcf257852179468a32371394a95ce52624f6413cc549664c1220b5758f46" Jan 31 08:53:40 crc kubenswrapper[4826]: I0131 08:53:40.741342 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4c3dcf257852179468a32371394a95ce52624f6413cc549664c1220b5758f46"} err="failed to get container status \"f4c3dcf257852179468a32371394a95ce52624f6413cc549664c1220b5758f46\": rpc error: code = NotFound desc = could not find container \"f4c3dcf257852179468a32371394a95ce52624f6413cc549664c1220b5758f46\": container with ID starting with f4c3dcf257852179468a32371394a95ce52624f6413cc549664c1220b5758f46 not found: ID does not exist" Jan 31 08:53:40 crc kubenswrapper[4826]: I0131 08:53:40.832319 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5b3f7bb-85c1-45cf-b35b-151b207665f2" path="/var/lib/kubelet/pods/b5b3f7bb-85c1-45cf-b35b-151b207665f2/volumes" Jan 31 08:53:40 crc kubenswrapper[4826]: I0131 08:53:40.904680 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-znrb7_64897259-2c2d-4152-83eb-17362544f024/registry-server/0.log" Jan 31 08:53:41 crc kubenswrapper[4826]: I0131 08:53:41.097244 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-gt2wd_999edaa2-f097-4789-a458-b309a42124a5/manager/0.log" Jan 31 08:53:41 crc kubenswrapper[4826]: I0131 08:53:41.233609 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-r6php_9e29b881-a227-4aef-888c-6676c6cf16b0/manager/0.log" Jan 31 08:53:41 crc kubenswrapper[4826]: I0131 08:53:41.410919 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-mwjwt_629d2057-3ccd-4983-882e-dde1edea2075/operator/0.log" Jan 31 08:53:41 crc kubenswrapper[4826]: I0131 08:53:41.466177 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-qrp6z_91440ebf-dd66-4fd4-a4c4-b027138ad77c/manager/0.log" Jan 31 08:53:41 crc kubenswrapper[4826]: I0131 08:53:41.712359 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-mhhr4_76ca8b22-18bd-4ba3-9512-290a5165c6a7/manager/0.log" Jan 31 08:53:41 crc kubenswrapper[4826]: I0131 08:53:41.797795 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-84d775b94d-x84xp_d14779ba-ccf1-4273-90ae-241c5c59c64f/manager/0.log" Jan 31 08:53:41 crc kubenswrapper[4826]: I0131 08:53:41.935513 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-6jgcp_f5191711-6f67-4b90-b21b-ee7e0acbd554/manager/0.log" Jan 31 08:53:42 crc kubenswrapper[4826]: I0131 08:53:42.168393 4826 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-656b74655f-lrz2z_bab047ef-9486-43b9-adad-edaefe7952b9/manager/0.log" Jan 31 08:53:43 crc kubenswrapper[4826]: I0131 08:53:43.809323 4826 scope.go:117] "RemoveContainer" containerID="7eba4a1e5e418a5b89ba4b523b59b3f4c42351dd83f53bcdc9e5ec984ec59162" Jan 31 08:53:43 crc kubenswrapper[4826]: E0131 08:53:43.809884 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:53:56 crc kubenswrapper[4826]: I0131 08:53:56.808926 4826 scope.go:117] "RemoveContainer" containerID="7eba4a1e5e418a5b89ba4b523b59b3f4c42351dd83f53bcdc9e5ec984ec59162" Jan 31 08:53:56 crc kubenswrapper[4826]: E0131 08:53:56.811646 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:54:00 crc kubenswrapper[4826]: I0131 08:54:00.922519 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-94k2w_d35202e5-599d-4dae-b3db-0ae1a99416c2/control-plane-machine-set-operator/0.log" Jan 31 08:54:01 crc kubenswrapper[4826]: I0131 08:54:01.111541 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-ngf5x_13fd6a2b-6076-4bd6-8bc2-466b802bdde4/machine-api-operator/0.log" Jan 31 08:54:01 crc kubenswrapper[4826]: I0131 08:54:01.163402 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-ngf5x_13fd6a2b-6076-4bd6-8bc2-466b802bdde4/kube-rbac-proxy/0.log" Jan 31 08:54:11 crc kubenswrapper[4826]: I0131 08:54:11.809083 4826 scope.go:117] "RemoveContainer" containerID="7eba4a1e5e418a5b89ba4b523b59b3f4c42351dd83f53bcdc9e5ec984ec59162" Jan 31 08:54:11 crc kubenswrapper[4826]: E0131 08:54:11.809878 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:54:14 crc kubenswrapper[4826]: I0131 08:54:14.376061 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-jk7fq_32b18e30-bc25-4bf7-8297-7fb8af9262f1/cert-manager-controller/0.log" Jan 31 08:54:14 crc kubenswrapper[4826]: I0131 08:54:14.529401 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-h6q4m_096ce775-64cb-4654-9853-b989068756fb/cert-manager-cainjector/0.log" Jan 31 08:54:14 crc kubenswrapper[4826]: I0131 08:54:14.549335 4826 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-m7jsx_bf4d0c6e-9e50-45df-bceb-12a9f8b1b908/cert-manager-webhook/0.log" Jan 31 08:54:22 crc kubenswrapper[4826]: I0131 08:54:22.810333 4826 scope.go:117] "RemoveContainer" containerID="7eba4a1e5e418a5b89ba4b523b59b3f4c42351dd83f53bcdc9e5ec984ec59162" Jan 31 08:54:22 crc kubenswrapper[4826]: E0131 08:54:22.811436 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 08:54:23 crc kubenswrapper[4826]: E0131 08:54:23.809671 4826 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Jan 31 08:54:27 crc kubenswrapper[4826]: I0131 08:54:27.688218 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-778w9_ccde3ca0-fa50-4a94-a1d3-5e9017e8cdf1/nmstate-console-plugin/0.log" Jan 31 08:54:27 crc kubenswrapper[4826]: I0131 08:54:27.888760 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-djs56_7920d8f0-e7dd-4f3b-aa42-5301bf4ffa3f/kube-rbac-proxy/0.log" Jan 31 08:54:27 crc kubenswrapper[4826]: I0131 08:54:27.908754 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-2cq5c_a87f432f-3723-4583-9e46-88b0fd950be3/nmstate-handler/0.log" Jan 31 08:54:28 crc kubenswrapper[4826]: I0131 08:54:28.022944 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-djs56_7920d8f0-e7dd-4f3b-aa42-5301bf4ffa3f/nmstate-metrics/0.log" Jan 31 08:54:28 crc kubenswrapper[4826]: I0131 08:54:28.099061 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-sqg84_9ee973e5-15c8-45b0-80b2-66e250ef5275/nmstate-operator/0.log" Jan 31 08:54:28 crc kubenswrapper[4826]: I0131 08:54:28.234432 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-b5djj_4512b6ab-53d0-435f-bdfc-5f28ba454fd6/nmstate-webhook/0.log" Jan 31 08:54:37 crc kubenswrapper[4826]: I0131 08:54:37.808991 4826 scope.go:117] "RemoveContainer" containerID="7eba4a1e5e418a5b89ba4b523b59b3f4c42351dd83f53bcdc9e5ec984ec59162" Jan 31 08:54:38 crc kubenswrapper[4826]: I0131 08:54:38.098041 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" event={"ID":"ed10f53b-565a-4d14-a1d8-feabc15f08ea","Type":"ContainerStarted","Data":"298fd71e1f50da2903fe3c35e25132b8199d5467001dda823c983b7ade8544ac"} Jan 31 08:54:56 crc kubenswrapper[4826]: I0131 08:54:56.634604 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-5fjcs_b772a768-496d-4cab-9480-e2f5966c417b/kube-rbac-proxy/0.log" Jan 31 08:54:56 crc kubenswrapper[4826]: I0131 08:54:56.827626 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-5fjcs_b772a768-496d-4cab-9480-e2f5966c417b/controller/0.log" Jan 31 08:54:56 crc 
kubenswrapper[4826]: I0131 08:54:56.867771 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6wcdg_d84d4444-e704-4e8e-beb5-38ad127d66d8/cp-frr-files/0.log" Jan 31 08:54:57 crc kubenswrapper[4826]: I0131 08:54:57.115487 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6wcdg_d84d4444-e704-4e8e-beb5-38ad127d66d8/cp-frr-files/0.log" Jan 31 08:54:57 crc kubenswrapper[4826]: I0131 08:54:57.122459 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6wcdg_d84d4444-e704-4e8e-beb5-38ad127d66d8/cp-reloader/0.log" Jan 31 08:54:57 crc kubenswrapper[4826]: I0131 08:54:57.130834 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6wcdg_d84d4444-e704-4e8e-beb5-38ad127d66d8/cp-metrics/0.log" Jan 31 08:54:57 crc kubenswrapper[4826]: I0131 08:54:57.142269 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6wcdg_d84d4444-e704-4e8e-beb5-38ad127d66d8/cp-reloader/0.log" Jan 31 08:54:57 crc kubenswrapper[4826]: I0131 08:54:57.302457 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6wcdg_d84d4444-e704-4e8e-beb5-38ad127d66d8/cp-frr-files/0.log" Jan 31 08:54:57 crc kubenswrapper[4826]: I0131 08:54:57.320827 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6wcdg_d84d4444-e704-4e8e-beb5-38ad127d66d8/cp-reloader/0.log" Jan 31 08:54:57 crc kubenswrapper[4826]: I0131 08:54:57.345735 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6wcdg_d84d4444-e704-4e8e-beb5-38ad127d66d8/cp-metrics/0.log" Jan 31 08:54:57 crc kubenswrapper[4826]: I0131 08:54:57.358766 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6wcdg_d84d4444-e704-4e8e-beb5-38ad127d66d8/cp-metrics/0.log" Jan 31 08:54:57 crc kubenswrapper[4826]: I0131 08:54:57.538571 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6wcdg_d84d4444-e704-4e8e-beb5-38ad127d66d8/cp-reloader/0.log" Jan 31 08:54:57 crc kubenswrapper[4826]: I0131 08:54:57.539176 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6wcdg_d84d4444-e704-4e8e-beb5-38ad127d66d8/cp-frr-files/0.log" Jan 31 08:54:57 crc kubenswrapper[4826]: I0131 08:54:57.540529 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6wcdg_d84d4444-e704-4e8e-beb5-38ad127d66d8/cp-metrics/0.log" Jan 31 08:54:57 crc kubenswrapper[4826]: I0131 08:54:57.571856 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6wcdg_d84d4444-e704-4e8e-beb5-38ad127d66d8/controller/0.log" Jan 31 08:54:57 crc kubenswrapper[4826]: I0131 08:54:57.728764 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6wcdg_d84d4444-e704-4e8e-beb5-38ad127d66d8/kube-rbac-proxy/0.log" Jan 31 08:54:57 crc kubenswrapper[4826]: I0131 08:54:57.748217 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6wcdg_d84d4444-e704-4e8e-beb5-38ad127d66d8/frr-metrics/0.log" Jan 31 08:54:57 crc kubenswrapper[4826]: I0131 08:54:57.782616 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6wcdg_d84d4444-e704-4e8e-beb5-38ad127d66d8/kube-rbac-proxy-frr/0.log" Jan 31 08:54:57 crc kubenswrapper[4826]: I0131 08:54:57.962508 4826 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-6wcdg_d84d4444-e704-4e8e-beb5-38ad127d66d8/reloader/0.log" Jan 31 08:54:57 crc kubenswrapper[4826]: I0131 08:54:57.978704 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-blwgn_6e2beca9-fa5b-4978-b763-1b9a9283e8fc/frr-k8s-webhook-server/0.log" Jan 31 08:54:58 crc kubenswrapper[4826]: I0131 08:54:58.293106 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5bdc44d88d-c4tc5_db79dc26-9fad-4b84-83bb-215331a5483a/manager/0.log" Jan 31 08:54:58 crc kubenswrapper[4826]: I0131 08:54:58.435677 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-797dbbd75c-cvsqs_17cb81be-ee8b-4a61-86ed-569e1a552d3e/webhook-server/0.log" Jan 31 08:54:58 crc kubenswrapper[4826]: I0131 08:54:58.566684 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-lzpb4_c9099222-adcc-4af7-892f-7c28ea834fda/kube-rbac-proxy/0.log" Jan 31 08:54:59 crc kubenswrapper[4826]: I0131 08:54:59.133781 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-lzpb4_c9099222-adcc-4af7-892f-7c28ea834fda/speaker/0.log" Jan 31 08:54:59 crc kubenswrapper[4826]: I0131 08:54:59.487532 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6wcdg_d84d4444-e704-4e8e-beb5-38ad127d66d8/frr/0.log" Jan 31 08:55:12 crc kubenswrapper[4826]: I0131 08:55:12.467405 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gc5z_59e8fc46-08a4-470a-be23-f53dbd0831d0/util/0.log" Jan 31 08:55:13 crc kubenswrapper[4826]: I0131 08:55:13.121527 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gc5z_59e8fc46-08a4-470a-be23-f53dbd0831d0/util/0.log" Jan 31 08:55:13 crc kubenswrapper[4826]: I0131 08:55:13.142425 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gc5z_59e8fc46-08a4-470a-be23-f53dbd0831d0/pull/0.log" Jan 31 08:55:13 crc kubenswrapper[4826]: I0131 08:55:13.164738 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gc5z_59e8fc46-08a4-470a-be23-f53dbd0831d0/pull/0.log" Jan 31 08:55:13 crc kubenswrapper[4826]: I0131 08:55:13.299729 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gc5z_59e8fc46-08a4-470a-be23-f53dbd0831d0/util/0.log" Jan 31 08:55:13 crc kubenswrapper[4826]: I0131 08:55:13.328277 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gc5z_59e8fc46-08a4-470a-be23-f53dbd0831d0/extract/0.log" Jan 31 08:55:13 crc kubenswrapper[4826]: I0131 08:55:13.382489 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gc5z_59e8fc46-08a4-470a-be23-f53dbd0831d0/pull/0.log" Jan 31 08:55:13 crc kubenswrapper[4826]: I0131 08:55:13.479481 4826 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lxr86_417dc312-217b-4eaf-9b1f-a4145c73f920/util/0.log" Jan 31 08:55:13 crc kubenswrapper[4826]: I0131 08:55:13.656901 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lxr86_417dc312-217b-4eaf-9b1f-a4145c73f920/util/0.log" Jan 31 08:55:13 crc kubenswrapper[4826]: I0131 08:55:13.683291 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lxr86_417dc312-217b-4eaf-9b1f-a4145c73f920/pull/0.log" Jan 31 08:55:13 crc kubenswrapper[4826]: I0131 08:55:13.704763 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lxr86_417dc312-217b-4eaf-9b1f-a4145c73f920/pull/0.log" Jan 31 08:55:13 crc kubenswrapper[4826]: I0131 08:55:13.842667 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lxr86_417dc312-217b-4eaf-9b1f-a4145c73f920/extract/0.log" Jan 31 08:55:13 crc kubenswrapper[4826]: I0131 08:55:13.882728 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lxr86_417dc312-217b-4eaf-9b1f-a4145c73f920/util/0.log" Jan 31 08:55:13 crc kubenswrapper[4826]: I0131 08:55:13.886829 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lxr86_417dc312-217b-4eaf-9b1f-a4145c73f920/pull/0.log" Jan 31 08:55:14 crc kubenswrapper[4826]: I0131 08:55:14.026913 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wpf7p_095ed56c-d5dd-468f-85b2-f0bf23c2370d/extract-utilities/0.log" Jan 31 08:55:14 crc kubenswrapper[4826]: I0131 08:55:14.221025 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wpf7p_095ed56c-d5dd-468f-85b2-f0bf23c2370d/extract-content/0.log" Jan 31 08:55:14 crc kubenswrapper[4826]: I0131 08:55:14.244853 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wpf7p_095ed56c-d5dd-468f-85b2-f0bf23c2370d/extract-utilities/0.log" Jan 31 08:55:14 crc kubenswrapper[4826]: I0131 08:55:14.245414 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wpf7p_095ed56c-d5dd-468f-85b2-f0bf23c2370d/extract-content/0.log" Jan 31 08:55:14 crc kubenswrapper[4826]: I0131 08:55:14.440577 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wpf7p_095ed56c-d5dd-468f-85b2-f0bf23c2370d/extract-utilities/0.log" Jan 31 08:55:14 crc kubenswrapper[4826]: I0131 08:55:14.453150 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wpf7p_095ed56c-d5dd-468f-85b2-f0bf23c2370d/extract-content/0.log" Jan 31 08:55:14 crc kubenswrapper[4826]: I0131 08:55:14.647410 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lh7qz_522ba915-3cf7-4e84-8ada-eae39676ac2b/extract-utilities/0.log" Jan 31 08:55:14 crc kubenswrapper[4826]: I0131 08:55:14.930502 4826 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-lh7qz_522ba915-3cf7-4e84-8ada-eae39676ac2b/extract-utilities/0.log" Jan 31 08:55:14 crc kubenswrapper[4826]: I0131 08:55:14.979273 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wpf7p_095ed56c-d5dd-468f-85b2-f0bf23c2370d/registry-server/0.log" Jan 31 08:55:14 crc kubenswrapper[4826]: I0131 08:55:14.984218 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lh7qz_522ba915-3cf7-4e84-8ada-eae39676ac2b/extract-content/0.log" Jan 31 08:55:15 crc kubenswrapper[4826]: I0131 08:55:15.013451 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lh7qz_522ba915-3cf7-4e84-8ada-eae39676ac2b/extract-content/0.log" Jan 31 08:55:15 crc kubenswrapper[4826]: I0131 08:55:15.150162 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lh7qz_522ba915-3cf7-4e84-8ada-eae39676ac2b/extract-content/0.log" Jan 31 08:55:15 crc kubenswrapper[4826]: I0131 08:55:15.213465 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lh7qz_522ba915-3cf7-4e84-8ada-eae39676ac2b/extract-utilities/0.log" Jan 31 08:55:15 crc kubenswrapper[4826]: I0131 08:55:15.406091 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-g6562_04b26ab3-b358-4fb0-b6ac-8043a19ce1a9/marketplace-operator/0.log" Jan 31 08:55:15 crc kubenswrapper[4826]: I0131 08:55:15.609098 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-b6k2p_2afad1ec-bc9f-48e1-9e8a-399fbde8bc28/extract-utilities/0.log" Jan 31 08:55:15 crc kubenswrapper[4826]: I0131 08:55:15.708170 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-b6k2p_2afad1ec-bc9f-48e1-9e8a-399fbde8bc28/extract-content/0.log" Jan 31 08:55:15 crc kubenswrapper[4826]: I0131 08:55:15.710578 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lh7qz_522ba915-3cf7-4e84-8ada-eae39676ac2b/registry-server/0.log" Jan 31 08:55:15 crc kubenswrapper[4826]: I0131 08:55:15.727882 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-b6k2p_2afad1ec-bc9f-48e1-9e8a-399fbde8bc28/extract-utilities/0.log" Jan 31 08:55:15 crc kubenswrapper[4826]: I0131 08:55:15.832670 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-b6k2p_2afad1ec-bc9f-48e1-9e8a-399fbde8bc28/extract-content/0.log" Jan 31 08:55:15 crc kubenswrapper[4826]: I0131 08:55:15.973504 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-b6k2p_2afad1ec-bc9f-48e1-9e8a-399fbde8bc28/extract-utilities/0.log" Jan 31 08:55:16 crc kubenswrapper[4826]: I0131 08:55:16.006632 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-b6k2p_2afad1ec-bc9f-48e1-9e8a-399fbde8bc28/extract-content/0.log" Jan 31 08:55:16 crc kubenswrapper[4826]: I0131 08:55:16.159271 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-b6k2p_2afad1ec-bc9f-48e1-9e8a-399fbde8bc28/registry-server/0.log" Jan 31 08:55:16 crc kubenswrapper[4826]: I0131 08:55:16.159949 4826 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-nghjp_5cf2e4f6-5f98-449a-a380-946edc1521f1/extract-utilities/0.log" Jan 31 08:55:16 crc kubenswrapper[4826]: I0131 08:55:16.388195 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nghjp_5cf2e4f6-5f98-449a-a380-946edc1521f1/extract-content/0.log" Jan 31 08:55:16 crc kubenswrapper[4826]: I0131 08:55:16.390315 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nghjp_5cf2e4f6-5f98-449a-a380-946edc1521f1/extract-utilities/0.log" Jan 31 08:55:16 crc kubenswrapper[4826]: I0131 08:55:16.395623 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nghjp_5cf2e4f6-5f98-449a-a380-946edc1521f1/extract-content/0.log" Jan 31 08:55:16 crc kubenswrapper[4826]: I0131 08:55:16.578265 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nghjp_5cf2e4f6-5f98-449a-a380-946edc1521f1/extract-utilities/0.log" Jan 31 08:55:16 crc kubenswrapper[4826]: I0131 08:55:16.594249 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nghjp_5cf2e4f6-5f98-449a-a380-946edc1521f1/extract-content/0.log" Jan 31 08:55:17 crc kubenswrapper[4826]: I0131 08:55:17.089598 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nghjp_5cf2e4f6-5f98-449a-a380-946edc1521f1/registry-server/0.log" Jan 31 08:55:25 crc kubenswrapper[4826]: E0131 08:55:25.809577 4826 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Jan 31 08:56:31 crc kubenswrapper[4826]: E0131 08:56:31.809452 4826 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Jan 31 08:56:57 crc kubenswrapper[4826]: I0131 08:56:57.377113 4826 patch_prober.go:28] interesting pod/machine-config-daemon-8v6ng container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 08:56:57 crc kubenswrapper[4826]: I0131 08:56:57.377724 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 08:57:04 crc kubenswrapper[4826]: I0131 08:57:04.449497 4826 generic.go:334] "Generic (PLEG): container finished" podID="dedbf67b-784c-4411-b680-87adeee09404" containerID="1440e9ddb09790e97272afac93a2bb41049dd1ec1b2c76ffa88a3d6af6ab8fbb" exitCode=0 Jan 31 08:57:04 crc kubenswrapper[4826]: I0131 08:57:04.449571 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vxtgz/must-gather-jdrpb" event={"ID":"dedbf67b-784c-4411-b680-87adeee09404","Type":"ContainerDied","Data":"1440e9ddb09790e97272afac93a2bb41049dd1ec1b2c76ffa88a3d6af6ab8fbb"} Jan 31 08:57:04 crc kubenswrapper[4826]: I0131 08:57:04.451364 4826 scope.go:117] 
"RemoveContainer" containerID="1440e9ddb09790e97272afac93a2bb41049dd1ec1b2c76ffa88a3d6af6ab8fbb" Jan 31 08:57:05 crc kubenswrapper[4826]: I0131 08:57:05.020822 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vxtgz_must-gather-jdrpb_dedbf67b-784c-4411-b680-87adeee09404/gather/0.log" Jan 31 08:57:12 crc kubenswrapper[4826]: I0131 08:57:12.509403 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vxtgz/must-gather-jdrpb"] Jan 31 08:57:12 crc kubenswrapper[4826]: I0131 08:57:12.510363 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-vxtgz/must-gather-jdrpb" podUID="dedbf67b-784c-4411-b680-87adeee09404" containerName="copy" containerID="cri-o://9e12092bf471b2219499190d4be774be5d4dc64bb8f2996c58a3f3d12adfe9f4" gracePeriod=2 Jan 31 08:57:12 crc kubenswrapper[4826]: I0131 08:57:12.520589 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vxtgz/must-gather-jdrpb"] Jan 31 08:57:12 crc kubenswrapper[4826]: I0131 08:57:12.959049 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vxtgz_must-gather-jdrpb_dedbf67b-784c-4411-b680-87adeee09404/copy/0.log" Jan 31 08:57:12 crc kubenswrapper[4826]: I0131 08:57:12.959905 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vxtgz/must-gather-jdrpb" Jan 31 08:57:12 crc kubenswrapper[4826]: I0131 08:57:12.968699 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/dedbf67b-784c-4411-b680-87adeee09404-must-gather-output\") pod \"dedbf67b-784c-4411-b680-87adeee09404\" (UID: \"dedbf67b-784c-4411-b680-87adeee09404\") " Jan 31 08:57:12 crc kubenswrapper[4826]: I0131 08:57:12.968945 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5fzk\" (UniqueName: \"kubernetes.io/projected/dedbf67b-784c-4411-b680-87adeee09404-kube-api-access-k5fzk\") pod \"dedbf67b-784c-4411-b680-87adeee09404\" (UID: \"dedbf67b-784c-4411-b680-87adeee09404\") " Jan 31 08:57:13 crc kubenswrapper[4826]: I0131 08:57:13.147797 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dedbf67b-784c-4411-b680-87adeee09404-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "dedbf67b-784c-4411-b680-87adeee09404" (UID: "dedbf67b-784c-4411-b680-87adeee09404"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 08:57:13 crc kubenswrapper[4826]: I0131 08:57:13.173203 4826 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/dedbf67b-784c-4411-b680-87adeee09404-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 31 08:57:13 crc kubenswrapper[4826]: I0131 08:57:13.455189 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dedbf67b-784c-4411-b680-87adeee09404-kube-api-access-k5fzk" (OuterVolumeSpecName: "kube-api-access-k5fzk") pod "dedbf67b-784c-4411-b680-87adeee09404" (UID: "dedbf67b-784c-4411-b680-87adeee09404"). InnerVolumeSpecName "kube-api-access-k5fzk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 08:57:13 crc kubenswrapper[4826]: I0131 08:57:13.483144 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5fzk\" (UniqueName: \"kubernetes.io/projected/dedbf67b-784c-4411-b680-87adeee09404-kube-api-access-k5fzk\") on node \"crc\" DevicePath \"\"" Jan 31 08:57:13 crc kubenswrapper[4826]: I0131 08:57:13.569390 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vxtgz_must-gather-jdrpb_dedbf67b-784c-4411-b680-87adeee09404/copy/0.log" Jan 31 08:57:13 crc kubenswrapper[4826]: I0131 08:57:13.573448 4826 generic.go:334] "Generic (PLEG): container finished" podID="dedbf67b-784c-4411-b680-87adeee09404" containerID="9e12092bf471b2219499190d4be774be5d4dc64bb8f2996c58a3f3d12adfe9f4" exitCode=143 Jan 31 08:57:13 crc kubenswrapper[4826]: I0131 08:57:13.573508 4826 scope.go:117] "RemoveContainer" containerID="9e12092bf471b2219499190d4be774be5d4dc64bb8f2996c58a3f3d12adfe9f4" Jan 31 08:57:13 crc kubenswrapper[4826]: I0131 08:57:13.573670 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vxtgz/must-gather-jdrpb" Jan 31 08:57:13 crc kubenswrapper[4826]: I0131 08:57:13.594595 4826 scope.go:117] "RemoveContainer" containerID="1440e9ddb09790e97272afac93a2bb41049dd1ec1b2c76ffa88a3d6af6ab8fbb" Jan 31 08:57:13 crc kubenswrapper[4826]: I0131 08:57:13.685483 4826 scope.go:117] "RemoveContainer" containerID="9e12092bf471b2219499190d4be774be5d4dc64bb8f2996c58a3f3d12adfe9f4" Jan 31 08:57:13 crc kubenswrapper[4826]: E0131 08:57:13.688536 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e12092bf471b2219499190d4be774be5d4dc64bb8f2996c58a3f3d12adfe9f4\": container with ID starting with 9e12092bf471b2219499190d4be774be5d4dc64bb8f2996c58a3f3d12adfe9f4 not found: ID does not exist" containerID="9e12092bf471b2219499190d4be774be5d4dc64bb8f2996c58a3f3d12adfe9f4" Jan 31 08:57:13 crc kubenswrapper[4826]: I0131 08:57:13.688586 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e12092bf471b2219499190d4be774be5d4dc64bb8f2996c58a3f3d12adfe9f4"} err="failed to get container status \"9e12092bf471b2219499190d4be774be5d4dc64bb8f2996c58a3f3d12adfe9f4\": rpc error: code = NotFound desc = could not find container \"9e12092bf471b2219499190d4be774be5d4dc64bb8f2996c58a3f3d12adfe9f4\": container with ID starting with 9e12092bf471b2219499190d4be774be5d4dc64bb8f2996c58a3f3d12adfe9f4 not found: ID does not exist" Jan 31 08:57:13 crc kubenswrapper[4826]: I0131 08:57:13.688620 4826 scope.go:117] "RemoveContainer" containerID="1440e9ddb09790e97272afac93a2bb41049dd1ec1b2c76ffa88a3d6af6ab8fbb" Jan 31 08:57:13 crc kubenswrapper[4826]: E0131 08:57:13.688981 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1440e9ddb09790e97272afac93a2bb41049dd1ec1b2c76ffa88a3d6af6ab8fbb\": container with ID starting with 1440e9ddb09790e97272afac93a2bb41049dd1ec1b2c76ffa88a3d6af6ab8fbb not found: ID does not exist" containerID="1440e9ddb09790e97272afac93a2bb41049dd1ec1b2c76ffa88a3d6af6ab8fbb" Jan 31 08:57:13 crc kubenswrapper[4826]: I0131 08:57:13.689009 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1440e9ddb09790e97272afac93a2bb41049dd1ec1b2c76ffa88a3d6af6ab8fbb"} err="failed to get container status 
\"1440e9ddb09790e97272afac93a2bb41049dd1ec1b2c76ffa88a3d6af6ab8fbb\": rpc error: code = NotFound desc = could not find container \"1440e9ddb09790e97272afac93a2bb41049dd1ec1b2c76ffa88a3d6af6ab8fbb\": container with ID starting with 1440e9ddb09790e97272afac93a2bb41049dd1ec1b2c76ffa88a3d6af6ab8fbb not found: ID does not exist" Jan 31 08:57:14 crc kubenswrapper[4826]: I0131 08:57:14.833004 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dedbf67b-784c-4411-b680-87adeee09404" path="/var/lib/kubelet/pods/dedbf67b-784c-4411-b680-87adeee09404/volumes" Jan 31 08:57:27 crc kubenswrapper[4826]: I0131 08:57:27.376734 4826 patch_prober.go:28] interesting pod/machine-config-daemon-8v6ng container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 08:57:27 crc kubenswrapper[4826]: I0131 08:57:27.377401 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 08:57:34 crc kubenswrapper[4826]: I0131 08:57:34.756480 4826 scope.go:117] "RemoveContainer" containerID="58db2343c658b6e0e9b8a3d3b6b64a6e50b70fc136c46e7c71a67d9c133ddc9d" Jan 31 08:57:43 crc kubenswrapper[4826]: E0131 08:57:43.810247 4826 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Jan 31 08:57:57 crc kubenswrapper[4826]: I0131 08:57:57.377480 4826 patch_prober.go:28] interesting pod/machine-config-daemon-8v6ng container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 08:57:57 crc kubenswrapper[4826]: I0131 08:57:57.379207 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 08:57:57 crc kubenswrapper[4826]: I0131 08:57:57.379341 4826 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" Jan 31 08:57:57 crc kubenswrapper[4826]: I0131 08:57:57.380611 4826 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"298fd71e1f50da2903fe3c35e25132b8199d5467001dda823c983b7ade8544ac"} pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 08:57:57 crc kubenswrapper[4826]: I0131 08:57:57.380686 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" 
containerID="cri-o://298fd71e1f50da2903fe3c35e25132b8199d5467001dda823c983b7ade8544ac" gracePeriod=600 Jan 31 08:57:58 crc kubenswrapper[4826]: I0131 08:57:58.087008 4826 generic.go:334] "Generic (PLEG): container finished" podID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerID="298fd71e1f50da2903fe3c35e25132b8199d5467001dda823c983b7ade8544ac" exitCode=0 Jan 31 08:57:58 crc kubenswrapper[4826]: I0131 08:57:58.087074 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" event={"ID":"ed10f53b-565a-4d14-a1d8-feabc15f08ea","Type":"ContainerDied","Data":"298fd71e1f50da2903fe3c35e25132b8199d5467001dda823c983b7ade8544ac"} Jan 31 08:57:58 crc kubenswrapper[4826]: I0131 08:57:58.087451 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" event={"ID":"ed10f53b-565a-4d14-a1d8-feabc15f08ea","Type":"ContainerStarted","Data":"71ae50ddafdb6dee301644dfaf816c90c99d9a6d52b50fc28dd2f036b7b357c7"} Jan 31 08:57:58 crc kubenswrapper[4826]: I0131 08:57:58.087479 4826 scope.go:117] "RemoveContainer" containerID="7eba4a1e5e418a5b89ba4b523b59b3f4c42351dd83f53bcdc9e5ec984ec59162" Jan 31 08:58:34 crc kubenswrapper[4826]: I0131 08:58:34.857211 4826 scope.go:117] "RemoveContainer" containerID="c2ef5d7f9fd763331dd720a2257aae469c6e8c9701f6b91c38a0d2f8c21b8e90" Jan 31 08:58:34 crc kubenswrapper[4826]: I0131 08:58:34.894528 4826 scope.go:117] "RemoveContainer" containerID="e364a16a81e8afa9300bf2d36dbdaeb67ad69e590c0a57e777510979a5997832" Jan 31 08:58:34 crc kubenswrapper[4826]: I0131 08:58:34.979998 4826 scope.go:117] "RemoveContainer" containerID="140d9467da973cc0859395431280de08e4ad46c36dc609375e65edd3484bc59e" Jan 31 08:58:46 crc kubenswrapper[4826]: E0131 08:58:46.809220 4826 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Jan 31 08:59:57 crc kubenswrapper[4826]: I0131 08:59:57.376994 4826 patch_prober.go:28] interesting pod/machine-config-daemon-8v6ng container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 08:59:57 crc kubenswrapper[4826]: I0131 08:59:57.377698 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:00:00 crc kubenswrapper[4826]: I0131 09:00:00.157350 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497500-lhdkg"] Jan 31 09:00:00 crc kubenswrapper[4826]: E0131 09:00:00.158484 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5b3f7bb-85c1-45cf-b35b-151b207665f2" containerName="extract-utilities" Jan 31 09:00:00 crc kubenswrapper[4826]: I0131 09:00:00.158501 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5b3f7bb-85c1-45cf-b35b-151b207665f2" containerName="extract-utilities" Jan 31 09:00:00 crc kubenswrapper[4826]: E0131 09:00:00.158524 4826 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b5b3f7bb-85c1-45cf-b35b-151b207665f2" containerName="extract-content" Jan 31 09:00:00 crc kubenswrapper[4826]: I0131 09:00:00.158533 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5b3f7bb-85c1-45cf-b35b-151b207665f2" containerName="extract-content" Jan 31 09:00:00 crc kubenswrapper[4826]: E0131 09:00:00.158552 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dedbf67b-784c-4411-b680-87adeee09404" containerName="gather" Jan 31 09:00:00 crc kubenswrapper[4826]: I0131 09:00:00.158562 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="dedbf67b-784c-4411-b680-87adeee09404" containerName="gather" Jan 31 09:00:00 crc kubenswrapper[4826]: E0131 09:00:00.158584 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5b3f7bb-85c1-45cf-b35b-151b207665f2" containerName="registry-server" Jan 31 09:00:00 crc kubenswrapper[4826]: I0131 09:00:00.158605 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5b3f7bb-85c1-45cf-b35b-151b207665f2" containerName="registry-server" Jan 31 09:00:00 crc kubenswrapper[4826]: E0131 09:00:00.158619 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dedbf67b-784c-4411-b680-87adeee09404" containerName="copy" Jan 31 09:00:00 crc kubenswrapper[4826]: I0131 09:00:00.158627 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="dedbf67b-784c-4411-b680-87adeee09404" containerName="copy" Jan 31 09:00:00 crc kubenswrapper[4826]: I0131 09:00:00.158863 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="dedbf67b-784c-4411-b680-87adeee09404" containerName="copy" Jan 31 09:00:00 crc kubenswrapper[4826]: I0131 09:00:00.158885 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5b3f7bb-85c1-45cf-b35b-151b207665f2" containerName="registry-server" Jan 31 09:00:00 crc kubenswrapper[4826]: I0131 09:00:00.158896 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="dedbf67b-784c-4411-b680-87adeee09404" containerName="gather" Jan 31 09:00:00 crc kubenswrapper[4826]: I0131 09:00:00.159711 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-lhdkg" Jan 31 09:00:00 crc kubenswrapper[4826]: I0131 09:00:00.162091 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 09:00:00 crc kubenswrapper[4826]: I0131 09:00:00.162566 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 09:00:00 crc kubenswrapper[4826]: I0131 09:00:00.170406 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497500-lhdkg"] Jan 31 09:00:00 crc kubenswrapper[4826]: I0131 09:00:00.215590 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5f2a8a0-9af9-4ca6-b1ec-c9857e61916c-config-volume\") pod \"collect-profiles-29497500-lhdkg\" (UID: \"e5f2a8a0-9af9-4ca6-b1ec-c9857e61916c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-lhdkg" Jan 31 09:00:00 crc kubenswrapper[4826]: I0131 09:00:00.216014 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m8sq\" (UniqueName: \"kubernetes.io/projected/e5f2a8a0-9af9-4ca6-b1ec-c9857e61916c-kube-api-access-2m8sq\") pod \"collect-profiles-29497500-lhdkg\" (UID: \"e5f2a8a0-9af9-4ca6-b1ec-c9857e61916c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-lhdkg" Jan 31 09:00:00 crc kubenswrapper[4826]: I0131 09:00:00.216353 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5f2a8a0-9af9-4ca6-b1ec-c9857e61916c-secret-volume\") pod \"collect-profiles-29497500-lhdkg\" (UID: \"e5f2a8a0-9af9-4ca6-b1ec-c9857e61916c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-lhdkg" Jan 31 09:00:00 crc kubenswrapper[4826]: I0131 09:00:00.317984 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5f2a8a0-9af9-4ca6-b1ec-c9857e61916c-config-volume\") pod \"collect-profiles-29497500-lhdkg\" (UID: \"e5f2a8a0-9af9-4ca6-b1ec-c9857e61916c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-lhdkg" Jan 31 09:00:00 crc kubenswrapper[4826]: I0131 09:00:00.318184 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2m8sq\" (UniqueName: \"kubernetes.io/projected/e5f2a8a0-9af9-4ca6-b1ec-c9857e61916c-kube-api-access-2m8sq\") pod \"collect-profiles-29497500-lhdkg\" (UID: \"e5f2a8a0-9af9-4ca6-b1ec-c9857e61916c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-lhdkg" Jan 31 09:00:00 crc kubenswrapper[4826]: I0131 09:00:00.318230 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5f2a8a0-9af9-4ca6-b1ec-c9857e61916c-secret-volume\") pod \"collect-profiles-29497500-lhdkg\" (UID: \"e5f2a8a0-9af9-4ca6-b1ec-c9857e61916c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-lhdkg" Jan 31 09:00:00 crc kubenswrapper[4826]: I0131 09:00:00.319398 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5f2a8a0-9af9-4ca6-b1ec-c9857e61916c-config-volume\") pod 
\"collect-profiles-29497500-lhdkg\" (UID: \"e5f2a8a0-9af9-4ca6-b1ec-c9857e61916c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-lhdkg" Jan 31 09:00:00 crc kubenswrapper[4826]: I0131 09:00:00.325749 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5f2a8a0-9af9-4ca6-b1ec-c9857e61916c-secret-volume\") pod \"collect-profiles-29497500-lhdkg\" (UID: \"e5f2a8a0-9af9-4ca6-b1ec-c9857e61916c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-lhdkg" Jan 31 09:00:00 crc kubenswrapper[4826]: I0131 09:00:00.343099 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2m8sq\" (UniqueName: \"kubernetes.io/projected/e5f2a8a0-9af9-4ca6-b1ec-c9857e61916c-kube-api-access-2m8sq\") pod \"collect-profiles-29497500-lhdkg\" (UID: \"e5f2a8a0-9af9-4ca6-b1ec-c9857e61916c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-lhdkg" Jan 31 09:00:00 crc kubenswrapper[4826]: I0131 09:00:00.485757 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-lhdkg" Jan 31 09:00:00 crc kubenswrapper[4826]: I0131 09:00:00.984255 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497500-lhdkg"] Jan 31 09:00:01 crc kubenswrapper[4826]: I0131 09:00:01.427366 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-lhdkg" event={"ID":"e5f2a8a0-9af9-4ca6-b1ec-c9857e61916c","Type":"ContainerStarted","Data":"992de339d34fb0426a1e0f334d738dce9cbba189e477b7cedca2e3faef19c329"} Jan 31 09:00:01 crc kubenswrapper[4826]: I0131 09:00:01.427720 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-lhdkg" event={"ID":"e5f2a8a0-9af9-4ca6-b1ec-c9857e61916c","Type":"ContainerStarted","Data":"900f90facb1024cf8b923d246da42a60d2150b4544263754876600f9ab76c923"} Jan 31 09:00:01 crc kubenswrapper[4826]: I0131 09:00:01.453078 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-lhdkg" podStartSLOduration=1.453055526 podStartE2EDuration="1.453055526s" podCreationTimestamp="2026-01-31 09:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:00:01.440479282 +0000 UTC m=+5033.294365641" watchObservedRunningTime="2026-01-31 09:00:01.453055526 +0000 UTC m=+5033.306941895" Jan 31 09:00:02 crc kubenswrapper[4826]: I0131 09:00:02.437206 4826 generic.go:334] "Generic (PLEG): container finished" podID="e5f2a8a0-9af9-4ca6-b1ec-c9857e61916c" containerID="992de339d34fb0426a1e0f334d738dce9cbba189e477b7cedca2e3faef19c329" exitCode=0 Jan 31 09:00:02 crc kubenswrapper[4826]: I0131 09:00:02.437246 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-lhdkg" event={"ID":"e5f2a8a0-9af9-4ca6-b1ec-c9857e61916c","Type":"ContainerDied","Data":"992de339d34fb0426a1e0f334d738dce9cbba189e477b7cedca2e3faef19c329"} Jan 31 09:00:03 crc kubenswrapper[4826]: I0131 09:00:03.795689 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-lhdkg" Jan 31 09:00:03 crc kubenswrapper[4826]: I0131 09:00:03.892772 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5f2a8a0-9af9-4ca6-b1ec-c9857e61916c-secret-volume\") pod \"e5f2a8a0-9af9-4ca6-b1ec-c9857e61916c\" (UID: \"e5f2a8a0-9af9-4ca6-b1ec-c9857e61916c\") " Jan 31 09:00:03 crc kubenswrapper[4826]: I0131 09:00:03.893014 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2m8sq\" (UniqueName: \"kubernetes.io/projected/e5f2a8a0-9af9-4ca6-b1ec-c9857e61916c-kube-api-access-2m8sq\") pod \"e5f2a8a0-9af9-4ca6-b1ec-c9857e61916c\" (UID: \"e5f2a8a0-9af9-4ca6-b1ec-c9857e61916c\") " Jan 31 09:00:03 crc kubenswrapper[4826]: I0131 09:00:03.893050 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5f2a8a0-9af9-4ca6-b1ec-c9857e61916c-config-volume\") pod \"e5f2a8a0-9af9-4ca6-b1ec-c9857e61916c\" (UID: \"e5f2a8a0-9af9-4ca6-b1ec-c9857e61916c\") " Jan 31 09:00:03 crc kubenswrapper[4826]: I0131 09:00:03.893928 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5f2a8a0-9af9-4ca6-b1ec-c9857e61916c-config-volume" (OuterVolumeSpecName: "config-volume") pod "e5f2a8a0-9af9-4ca6-b1ec-c9857e61916c" (UID: "e5f2a8a0-9af9-4ca6-b1ec-c9857e61916c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:00:03 crc kubenswrapper[4826]: I0131 09:00:03.900398 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5f2a8a0-9af9-4ca6-b1ec-c9857e61916c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e5f2a8a0-9af9-4ca6-b1ec-c9857e61916c" (UID: "e5f2a8a0-9af9-4ca6-b1ec-c9857e61916c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:00:03 crc kubenswrapper[4826]: I0131 09:00:03.904251 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5f2a8a0-9af9-4ca6-b1ec-c9857e61916c-kube-api-access-2m8sq" (OuterVolumeSpecName: "kube-api-access-2m8sq") pod "e5f2a8a0-9af9-4ca6-b1ec-c9857e61916c" (UID: "e5f2a8a0-9af9-4ca6-b1ec-c9857e61916c"). InnerVolumeSpecName "kube-api-access-2m8sq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:00:03 crc kubenswrapper[4826]: I0131 09:00:03.996369 4826 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5f2a8a0-9af9-4ca6-b1ec-c9857e61916c-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 09:00:03 crc kubenswrapper[4826]: I0131 09:00:03.996427 4826 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5f2a8a0-9af9-4ca6-b1ec-c9857e61916c-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 09:00:03 crc kubenswrapper[4826]: I0131 09:00:03.996447 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2m8sq\" (UniqueName: \"kubernetes.io/projected/e5f2a8a0-9af9-4ca6-b1ec-c9857e61916c-kube-api-access-2m8sq\") on node \"crc\" DevicePath \"\"" Jan 31 09:00:04 crc kubenswrapper[4826]: I0131 09:00:04.458855 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-lhdkg" event={"ID":"e5f2a8a0-9af9-4ca6-b1ec-c9857e61916c","Type":"ContainerDied","Data":"900f90facb1024cf8b923d246da42a60d2150b4544263754876600f9ab76c923"} Jan 31 09:00:04 crc kubenswrapper[4826]: I0131 09:00:04.459208 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="900f90facb1024cf8b923d246da42a60d2150b4544263754876600f9ab76c923" Jan 31 09:00:04 crc kubenswrapper[4826]: I0131 09:00:04.458944 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-lhdkg" Jan 31 09:00:04 crc kubenswrapper[4826]: I0131 09:00:04.531210 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497455-jzhb9"] Jan 31 09:00:04 crc kubenswrapper[4826]: I0131 09:00:04.548615 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497455-jzhb9"] Jan 31 09:00:04 crc kubenswrapper[4826]: I0131 09:00:04.825369 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b92b8804-07a2-4ac5-b431-fcd9f2fbeacb" path="/var/lib/kubelet/pods/b92b8804-07a2-4ac5-b431-fcd9f2fbeacb/volumes" Jan 31 09:00:14 crc kubenswrapper[4826]: E0131 09:00:14.809176 4826 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Jan 31 09:00:27 crc kubenswrapper[4826]: I0131 09:00:27.377500 4826 patch_prober.go:28] interesting pod/machine-config-daemon-8v6ng container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:00:27 crc kubenswrapper[4826]: I0131 09:00:27.378215 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:00:35 crc kubenswrapper[4826]: I0131 09:00:35.079006 4826 scope.go:117] "RemoveContainer" containerID="08d32ba8cf7bd36f3adf0668b5772a531a2d4483937fac3309abff0316b66267" Jan 31 09:00:39 crc 
kubenswrapper[4826]: I0131 09:00:39.281761 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tt9xr"] Jan 31 09:00:39 crc kubenswrapper[4826]: E0131 09:00:39.282761 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5f2a8a0-9af9-4ca6-b1ec-c9857e61916c" containerName="collect-profiles" Jan 31 09:00:39 crc kubenswrapper[4826]: I0131 09:00:39.282776 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5f2a8a0-9af9-4ca6-b1ec-c9857e61916c" containerName="collect-profiles" Jan 31 09:00:39 crc kubenswrapper[4826]: I0131 09:00:39.283027 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5f2a8a0-9af9-4ca6-b1ec-c9857e61916c" containerName="collect-profiles" Jan 31 09:00:39 crc kubenswrapper[4826]: I0131 09:00:39.284595 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tt9xr" Jan 31 09:00:39 crc kubenswrapper[4826]: I0131 09:00:39.297304 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tt9xr"] Jan 31 09:00:39 crc kubenswrapper[4826]: I0131 09:00:39.426826 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bfa9401-2721-452d-a787-123f39cd4eb3-catalog-content\") pod \"certified-operators-tt9xr\" (UID: \"4bfa9401-2721-452d-a787-123f39cd4eb3\") " pod="openshift-marketplace/certified-operators-tt9xr" Jan 31 09:00:39 crc kubenswrapper[4826]: I0131 09:00:39.426987 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bfa9401-2721-452d-a787-123f39cd4eb3-utilities\") pod \"certified-operators-tt9xr\" (UID: \"4bfa9401-2721-452d-a787-123f39cd4eb3\") " pod="openshift-marketplace/certified-operators-tt9xr" Jan 31 09:00:39 crc kubenswrapper[4826]: I0131 09:00:39.427106 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s48tb\" (UniqueName: \"kubernetes.io/projected/4bfa9401-2721-452d-a787-123f39cd4eb3-kube-api-access-s48tb\") pod \"certified-operators-tt9xr\" (UID: \"4bfa9401-2721-452d-a787-123f39cd4eb3\") " pod="openshift-marketplace/certified-operators-tt9xr" Jan 31 09:00:39 crc kubenswrapper[4826]: I0131 09:00:39.528839 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bfa9401-2721-452d-a787-123f39cd4eb3-catalog-content\") pod \"certified-operators-tt9xr\" (UID: \"4bfa9401-2721-452d-a787-123f39cd4eb3\") " pod="openshift-marketplace/certified-operators-tt9xr" Jan 31 09:00:39 crc kubenswrapper[4826]: I0131 09:00:39.528981 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bfa9401-2721-452d-a787-123f39cd4eb3-utilities\") pod \"certified-operators-tt9xr\" (UID: \"4bfa9401-2721-452d-a787-123f39cd4eb3\") " pod="openshift-marketplace/certified-operators-tt9xr" Jan 31 09:00:39 crc kubenswrapper[4826]: I0131 09:00:39.529074 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s48tb\" (UniqueName: \"kubernetes.io/projected/4bfa9401-2721-452d-a787-123f39cd4eb3-kube-api-access-s48tb\") pod \"certified-operators-tt9xr\" (UID: \"4bfa9401-2721-452d-a787-123f39cd4eb3\") " 
pod="openshift-marketplace/certified-operators-tt9xr" Jan 31 09:00:39 crc kubenswrapper[4826]: I0131 09:00:39.529284 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bfa9401-2721-452d-a787-123f39cd4eb3-catalog-content\") pod \"certified-operators-tt9xr\" (UID: \"4bfa9401-2721-452d-a787-123f39cd4eb3\") " pod="openshift-marketplace/certified-operators-tt9xr" Jan 31 09:00:39 crc kubenswrapper[4826]: I0131 09:00:39.529376 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bfa9401-2721-452d-a787-123f39cd4eb3-utilities\") pod \"certified-operators-tt9xr\" (UID: \"4bfa9401-2721-452d-a787-123f39cd4eb3\") " pod="openshift-marketplace/certified-operators-tt9xr" Jan 31 09:00:39 crc kubenswrapper[4826]: I0131 09:00:39.649262 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s48tb\" (UniqueName: \"kubernetes.io/projected/4bfa9401-2721-452d-a787-123f39cd4eb3-kube-api-access-s48tb\") pod \"certified-operators-tt9xr\" (UID: \"4bfa9401-2721-452d-a787-123f39cd4eb3\") " pod="openshift-marketplace/certified-operators-tt9xr" Jan 31 09:00:39 crc kubenswrapper[4826]: I0131 09:00:39.904659 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tt9xr" Jan 31 09:00:40 crc kubenswrapper[4826]: I0131 09:00:40.451401 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tt9xr"] Jan 31 09:00:40 crc kubenswrapper[4826]: I0131 09:00:40.869594 4826 generic.go:334] "Generic (PLEG): container finished" podID="4bfa9401-2721-452d-a787-123f39cd4eb3" containerID="21c99c1a91b8ed9e99bae71f5a744458a11fdd3a646121637b14267738799fcc" exitCode=0 Jan 31 09:00:40 crc kubenswrapper[4826]: I0131 09:00:40.869743 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tt9xr" event={"ID":"4bfa9401-2721-452d-a787-123f39cd4eb3","Type":"ContainerDied","Data":"21c99c1a91b8ed9e99bae71f5a744458a11fdd3a646121637b14267738799fcc"} Jan 31 09:00:40 crc kubenswrapper[4826]: I0131 09:00:40.870032 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tt9xr" event={"ID":"4bfa9401-2721-452d-a787-123f39cd4eb3","Type":"ContainerStarted","Data":"040a2cd5349ef94f0ca8216f36b87b68c511f217cf473de97f884a0836353796"} Jan 31 09:00:40 crc kubenswrapper[4826]: I0131 09:00:40.872461 4826 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 09:00:42 crc kubenswrapper[4826]: I0131 09:00:42.890158 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tt9xr" event={"ID":"4bfa9401-2721-452d-a787-123f39cd4eb3","Type":"ContainerStarted","Data":"f05db5fffd94982443a0a7b2ade86c6412892eeb6ce05ae1750c5af3614640b1"} Jan 31 09:00:43 crc kubenswrapper[4826]: I0131 09:00:43.901006 4826 generic.go:334] "Generic (PLEG): container finished" podID="4bfa9401-2721-452d-a787-123f39cd4eb3" containerID="f05db5fffd94982443a0a7b2ade86c6412892eeb6ce05ae1750c5af3614640b1" exitCode=0 Jan 31 09:00:43 crc kubenswrapper[4826]: I0131 09:00:43.901090 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tt9xr" event={"ID":"4bfa9401-2721-452d-a787-123f39cd4eb3","Type":"ContainerDied","Data":"f05db5fffd94982443a0a7b2ade86c6412892eeb6ce05ae1750c5af3614640b1"} 
Jan 31 09:00:45 crc kubenswrapper[4826]: I0131 09:00:45.938474 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tt9xr" event={"ID":"4bfa9401-2721-452d-a787-123f39cd4eb3","Type":"ContainerStarted","Data":"823be41551ec5fd173574b40fce231289acacab63c92f15f078bc38e02770ac4"} Jan 31 09:00:45 crc kubenswrapper[4826]: I0131 09:00:45.984058 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tt9xr" podStartSLOduration=3.233723618 podStartE2EDuration="6.984032445s" podCreationTimestamp="2026-01-31 09:00:39 +0000 UTC" firstStartedPulling="2026-01-31 09:00:40.8720696 +0000 UTC m=+5072.725955989" lastFinishedPulling="2026-01-31 09:00:44.622378447 +0000 UTC m=+5076.476264816" observedRunningTime="2026-01-31 09:00:45.967853347 +0000 UTC m=+5077.821739716" watchObservedRunningTime="2026-01-31 09:00:45.984032445 +0000 UTC m=+5077.837918824" Jan 31 09:00:49 crc kubenswrapper[4826]: I0131 09:00:49.905358 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tt9xr" Jan 31 09:00:49 crc kubenswrapper[4826]: I0131 09:00:49.905880 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tt9xr" Jan 31 09:00:49 crc kubenswrapper[4826]: I0131 09:00:49.983779 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tt9xr" Jan 31 09:00:50 crc kubenswrapper[4826]: I0131 09:00:50.041824 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tt9xr" Jan 31 09:00:50 crc kubenswrapper[4826]: I0131 09:00:50.225169 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tt9xr"] Jan 31 09:00:51 crc kubenswrapper[4826]: I0131 09:00:51.998078 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tt9xr" podUID="4bfa9401-2721-452d-a787-123f39cd4eb3" containerName="registry-server" containerID="cri-o://823be41551ec5fd173574b40fce231289acacab63c92f15f078bc38e02770ac4" gracePeriod=2 Jan 31 09:00:52 crc kubenswrapper[4826]: I0131 09:00:52.874362 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tt9xr" Jan 31 09:00:52 crc kubenswrapper[4826]: I0131 09:00:52.930172 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bfa9401-2721-452d-a787-123f39cd4eb3-catalog-content\") pod \"4bfa9401-2721-452d-a787-123f39cd4eb3\" (UID: \"4bfa9401-2721-452d-a787-123f39cd4eb3\") " Jan 31 09:00:52 crc kubenswrapper[4826]: I0131 09:00:52.930245 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s48tb\" (UniqueName: \"kubernetes.io/projected/4bfa9401-2721-452d-a787-123f39cd4eb3-kube-api-access-s48tb\") pod \"4bfa9401-2721-452d-a787-123f39cd4eb3\" (UID: \"4bfa9401-2721-452d-a787-123f39cd4eb3\") " Jan 31 09:00:52 crc kubenswrapper[4826]: I0131 09:00:52.930344 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bfa9401-2721-452d-a787-123f39cd4eb3-utilities\") pod \"4bfa9401-2721-452d-a787-123f39cd4eb3\" (UID: \"4bfa9401-2721-452d-a787-123f39cd4eb3\") " Jan 31 09:00:52 crc kubenswrapper[4826]: I0131 09:00:52.931519 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bfa9401-2721-452d-a787-123f39cd4eb3-utilities" (OuterVolumeSpecName: "utilities") pod "4bfa9401-2721-452d-a787-123f39cd4eb3" (UID: "4bfa9401-2721-452d-a787-123f39cd4eb3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:00:52 crc kubenswrapper[4826]: I0131 09:00:52.937734 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bfa9401-2721-452d-a787-123f39cd4eb3-kube-api-access-s48tb" (OuterVolumeSpecName: "kube-api-access-s48tb") pod "4bfa9401-2721-452d-a787-123f39cd4eb3" (UID: "4bfa9401-2721-452d-a787-123f39cd4eb3"). InnerVolumeSpecName "kube-api-access-s48tb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:00:53 crc kubenswrapper[4826]: I0131 09:00:53.010275 4826 generic.go:334] "Generic (PLEG): container finished" podID="4bfa9401-2721-452d-a787-123f39cd4eb3" containerID="823be41551ec5fd173574b40fce231289acacab63c92f15f078bc38e02770ac4" exitCode=0 Jan 31 09:00:53 crc kubenswrapper[4826]: I0131 09:00:53.010349 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tt9xr" Jan 31 09:00:53 crc kubenswrapper[4826]: I0131 09:00:53.010347 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tt9xr" event={"ID":"4bfa9401-2721-452d-a787-123f39cd4eb3","Type":"ContainerDied","Data":"823be41551ec5fd173574b40fce231289acacab63c92f15f078bc38e02770ac4"} Jan 31 09:00:53 crc kubenswrapper[4826]: I0131 09:00:53.010497 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tt9xr" event={"ID":"4bfa9401-2721-452d-a787-123f39cd4eb3","Type":"ContainerDied","Data":"040a2cd5349ef94f0ca8216f36b87b68c511f217cf473de97f884a0836353796"} Jan 31 09:00:53 crc kubenswrapper[4826]: I0131 09:00:53.010521 4826 scope.go:117] "RemoveContainer" containerID="823be41551ec5fd173574b40fce231289acacab63c92f15f078bc38e02770ac4" Jan 31 09:00:53 crc kubenswrapper[4826]: I0131 09:00:53.015504 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bfa9401-2721-452d-a787-123f39cd4eb3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4bfa9401-2721-452d-a787-123f39cd4eb3" (UID: "4bfa9401-2721-452d-a787-123f39cd4eb3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:00:53 crc kubenswrapper[4826]: I0131 09:00:53.033068 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bfa9401-2721-452d-a787-123f39cd4eb3-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 09:00:53 crc kubenswrapper[4826]: I0131 09:00:53.033103 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bfa9401-2721-452d-a787-123f39cd4eb3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 09:00:53 crc kubenswrapper[4826]: I0131 09:00:53.033115 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s48tb\" (UniqueName: \"kubernetes.io/projected/4bfa9401-2721-452d-a787-123f39cd4eb3-kube-api-access-s48tb\") on node \"crc\" DevicePath \"\"" Jan 31 09:00:53 crc kubenswrapper[4826]: I0131 09:00:53.033981 4826 scope.go:117] "RemoveContainer" containerID="f05db5fffd94982443a0a7b2ade86c6412892eeb6ce05ae1750c5af3614640b1" Jan 31 09:00:53 crc kubenswrapper[4826]: I0131 09:00:53.068013 4826 scope.go:117] "RemoveContainer" containerID="21c99c1a91b8ed9e99bae71f5a744458a11fdd3a646121637b14267738799fcc" Jan 31 09:00:53 crc kubenswrapper[4826]: I0131 09:00:53.110754 4826 scope.go:117] "RemoveContainer" containerID="823be41551ec5fd173574b40fce231289acacab63c92f15f078bc38e02770ac4" Jan 31 09:00:53 crc kubenswrapper[4826]: E0131 09:00:53.111397 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"823be41551ec5fd173574b40fce231289acacab63c92f15f078bc38e02770ac4\": container with ID starting with 823be41551ec5fd173574b40fce231289acacab63c92f15f078bc38e02770ac4 not found: ID does not exist" containerID="823be41551ec5fd173574b40fce231289acacab63c92f15f078bc38e02770ac4" Jan 31 09:00:53 crc kubenswrapper[4826]: I0131 09:00:53.111442 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"823be41551ec5fd173574b40fce231289acacab63c92f15f078bc38e02770ac4"} err="failed to get container status \"823be41551ec5fd173574b40fce231289acacab63c92f15f078bc38e02770ac4\": rpc error: code = NotFound desc = could not find container 
\"823be41551ec5fd173574b40fce231289acacab63c92f15f078bc38e02770ac4\": container with ID starting with 823be41551ec5fd173574b40fce231289acacab63c92f15f078bc38e02770ac4 not found: ID does not exist" Jan 31 09:00:53 crc kubenswrapper[4826]: I0131 09:00:53.111472 4826 scope.go:117] "RemoveContainer" containerID="f05db5fffd94982443a0a7b2ade86c6412892eeb6ce05ae1750c5af3614640b1" Jan 31 09:00:53 crc kubenswrapper[4826]: E0131 09:00:53.111843 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f05db5fffd94982443a0a7b2ade86c6412892eeb6ce05ae1750c5af3614640b1\": container with ID starting with f05db5fffd94982443a0a7b2ade86c6412892eeb6ce05ae1750c5af3614640b1 not found: ID does not exist" containerID="f05db5fffd94982443a0a7b2ade86c6412892eeb6ce05ae1750c5af3614640b1" Jan 31 09:00:53 crc kubenswrapper[4826]: I0131 09:00:53.111912 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f05db5fffd94982443a0a7b2ade86c6412892eeb6ce05ae1750c5af3614640b1"} err="failed to get container status \"f05db5fffd94982443a0a7b2ade86c6412892eeb6ce05ae1750c5af3614640b1\": rpc error: code = NotFound desc = could not find container \"f05db5fffd94982443a0a7b2ade86c6412892eeb6ce05ae1750c5af3614640b1\": container with ID starting with f05db5fffd94982443a0a7b2ade86c6412892eeb6ce05ae1750c5af3614640b1 not found: ID does not exist" Jan 31 09:00:53 crc kubenswrapper[4826]: I0131 09:00:53.111951 4826 scope.go:117] "RemoveContainer" containerID="21c99c1a91b8ed9e99bae71f5a744458a11fdd3a646121637b14267738799fcc" Jan 31 09:00:53 crc kubenswrapper[4826]: E0131 09:00:53.112393 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21c99c1a91b8ed9e99bae71f5a744458a11fdd3a646121637b14267738799fcc\": container with ID starting with 21c99c1a91b8ed9e99bae71f5a744458a11fdd3a646121637b14267738799fcc not found: ID does not exist" containerID="21c99c1a91b8ed9e99bae71f5a744458a11fdd3a646121637b14267738799fcc" Jan 31 09:00:53 crc kubenswrapper[4826]: I0131 09:00:53.112430 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21c99c1a91b8ed9e99bae71f5a744458a11fdd3a646121637b14267738799fcc"} err="failed to get container status \"21c99c1a91b8ed9e99bae71f5a744458a11fdd3a646121637b14267738799fcc\": rpc error: code = NotFound desc = could not find container \"21c99c1a91b8ed9e99bae71f5a744458a11fdd3a646121637b14267738799fcc\": container with ID starting with 21c99c1a91b8ed9e99bae71f5a744458a11fdd3a646121637b14267738799fcc not found: ID does not exist" Jan 31 09:00:53 crc kubenswrapper[4826]: I0131 09:00:53.351747 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tt9xr"] Jan 31 09:00:53 crc kubenswrapper[4826]: I0131 09:00:53.363367 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tt9xr"] Jan 31 09:00:54 crc kubenswrapper[4826]: I0131 09:00:54.836537 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bfa9401-2721-452d-a787-123f39cd4eb3" path="/var/lib/kubelet/pods/4bfa9401-2721-452d-a787-123f39cd4eb3/volumes" Jan 31 09:00:57 crc kubenswrapper[4826]: I0131 09:00:57.377562 4826 patch_prober.go:28] interesting pod/machine-config-daemon-8v6ng container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:00:57 crc kubenswrapper[4826]: I0131 09:00:57.378232 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:00:57 crc kubenswrapper[4826]: I0131 09:00:57.378305 4826 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" Jan 31 09:00:57 crc kubenswrapper[4826]: I0131 09:00:57.379470 4826 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"71ae50ddafdb6dee301644dfaf816c90c99d9a6d52b50fc28dd2f036b7b357c7"} pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 09:00:57 crc kubenswrapper[4826]: I0131 09:00:57.379560 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerName="machine-config-daemon" containerID="cri-o://71ae50ddafdb6dee301644dfaf816c90c99d9a6d52b50fc28dd2f036b7b357c7" gracePeriod=600 Jan 31 09:00:57 crc kubenswrapper[4826]: E0131 09:00:57.507024 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 09:00:58 crc kubenswrapper[4826]: I0131 09:00:58.073784 4826 generic.go:334] "Generic (PLEG): container finished" podID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" containerID="71ae50ddafdb6dee301644dfaf816c90c99d9a6d52b50fc28dd2f036b7b357c7" exitCode=0 Jan 31 09:00:58 crc kubenswrapper[4826]: I0131 09:00:58.073865 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" event={"ID":"ed10f53b-565a-4d14-a1d8-feabc15f08ea","Type":"ContainerDied","Data":"71ae50ddafdb6dee301644dfaf816c90c99d9a6d52b50fc28dd2f036b7b357c7"} Jan 31 09:00:58 crc kubenswrapper[4826]: I0131 09:00:58.074215 4826 scope.go:117] "RemoveContainer" containerID="298fd71e1f50da2903fe3c35e25132b8199d5467001dda823c983b7ade8544ac" Jan 31 09:00:58 crc kubenswrapper[4826]: I0131 09:00:58.074881 4826 scope.go:117] "RemoveContainer" containerID="71ae50ddafdb6dee301644dfaf816c90c99d9a6d52b50fc28dd2f036b7b357c7" Jan 31 09:00:58 crc kubenswrapper[4826]: E0131 09:00:58.075352 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 09:01:00 crc kubenswrapper[4826]: I0131 09:01:00.166510 4826 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/keystone-cron-29497501-gsvgb"] Jan 31 09:01:00 crc kubenswrapper[4826]: E0131 09:01:00.167445 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bfa9401-2721-452d-a787-123f39cd4eb3" containerName="extract-utilities" Jan 31 09:01:00 crc kubenswrapper[4826]: I0131 09:01:00.167473 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bfa9401-2721-452d-a787-123f39cd4eb3" containerName="extract-utilities" Jan 31 09:01:00 crc kubenswrapper[4826]: E0131 09:01:00.167500 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bfa9401-2721-452d-a787-123f39cd4eb3" containerName="extract-content" Jan 31 09:01:00 crc kubenswrapper[4826]: I0131 09:01:00.167508 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bfa9401-2721-452d-a787-123f39cd4eb3" containerName="extract-content" Jan 31 09:01:00 crc kubenswrapper[4826]: E0131 09:01:00.167538 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bfa9401-2721-452d-a787-123f39cd4eb3" containerName="registry-server" Jan 31 09:01:00 crc kubenswrapper[4826]: I0131 09:01:00.167547 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bfa9401-2721-452d-a787-123f39cd4eb3" containerName="registry-server" Jan 31 09:01:00 crc kubenswrapper[4826]: I0131 09:01:00.169407 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bfa9401-2721-452d-a787-123f39cd4eb3" containerName="registry-server" Jan 31 09:01:00 crc kubenswrapper[4826]: I0131 09:01:00.170163 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29497501-gsvgb" Jan 31 09:01:00 crc kubenswrapper[4826]: I0131 09:01:00.181837 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29497501-gsvgb"] Jan 31 09:01:00 crc kubenswrapper[4826]: I0131 09:01:00.195248 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/08b4ab6e-2fdf-4871-b484-082146559e2c-fernet-keys\") pod \"keystone-cron-29497501-gsvgb\" (UID: \"08b4ab6e-2fdf-4871-b484-082146559e2c\") " pod="openstack/keystone-cron-29497501-gsvgb" Jan 31 09:01:00 crc kubenswrapper[4826]: I0131 09:01:00.195321 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08b4ab6e-2fdf-4871-b484-082146559e2c-combined-ca-bundle\") pod \"keystone-cron-29497501-gsvgb\" (UID: \"08b4ab6e-2fdf-4871-b484-082146559e2c\") " pod="openstack/keystone-cron-29497501-gsvgb" Jan 31 09:01:00 crc kubenswrapper[4826]: I0131 09:01:00.195393 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08b4ab6e-2fdf-4871-b484-082146559e2c-config-data\") pod \"keystone-cron-29497501-gsvgb\" (UID: \"08b4ab6e-2fdf-4871-b484-082146559e2c\") " pod="openstack/keystone-cron-29497501-gsvgb" Jan 31 09:01:00 crc kubenswrapper[4826]: I0131 09:01:00.195462 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsgnq\" (UniqueName: \"kubernetes.io/projected/08b4ab6e-2fdf-4871-b484-082146559e2c-kube-api-access-bsgnq\") pod \"keystone-cron-29497501-gsvgb\" (UID: \"08b4ab6e-2fdf-4871-b484-082146559e2c\") " pod="openstack/keystone-cron-29497501-gsvgb" Jan 31 09:01:00 crc kubenswrapper[4826]: I0131 09:01:00.297212 4826 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/08b4ab6e-2fdf-4871-b484-082146559e2c-fernet-keys\") pod \"keystone-cron-29497501-gsvgb\" (UID: \"08b4ab6e-2fdf-4871-b484-082146559e2c\") " pod="openstack/keystone-cron-29497501-gsvgb" Jan 31 09:01:00 crc kubenswrapper[4826]: I0131 09:01:00.297305 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08b4ab6e-2fdf-4871-b484-082146559e2c-combined-ca-bundle\") pod \"keystone-cron-29497501-gsvgb\" (UID: \"08b4ab6e-2fdf-4871-b484-082146559e2c\") " pod="openstack/keystone-cron-29497501-gsvgb" Jan 31 09:01:00 crc kubenswrapper[4826]: I0131 09:01:00.297398 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08b4ab6e-2fdf-4871-b484-082146559e2c-config-data\") pod \"keystone-cron-29497501-gsvgb\" (UID: \"08b4ab6e-2fdf-4871-b484-082146559e2c\") " pod="openstack/keystone-cron-29497501-gsvgb" Jan 31 09:01:00 crc kubenswrapper[4826]: I0131 09:01:00.297492 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsgnq\" (UniqueName: \"kubernetes.io/projected/08b4ab6e-2fdf-4871-b484-082146559e2c-kube-api-access-bsgnq\") pod \"keystone-cron-29497501-gsvgb\" (UID: \"08b4ab6e-2fdf-4871-b484-082146559e2c\") " pod="openstack/keystone-cron-29497501-gsvgb" Jan 31 09:01:00 crc kubenswrapper[4826]: I0131 09:01:00.304812 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/08b4ab6e-2fdf-4871-b484-082146559e2c-fernet-keys\") pod \"keystone-cron-29497501-gsvgb\" (UID: \"08b4ab6e-2fdf-4871-b484-082146559e2c\") " pod="openstack/keystone-cron-29497501-gsvgb" Jan 31 09:01:00 crc kubenswrapper[4826]: I0131 09:01:00.306603 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08b4ab6e-2fdf-4871-b484-082146559e2c-config-data\") pod \"keystone-cron-29497501-gsvgb\" (UID: \"08b4ab6e-2fdf-4871-b484-082146559e2c\") " pod="openstack/keystone-cron-29497501-gsvgb" Jan 31 09:01:00 crc kubenswrapper[4826]: I0131 09:01:00.307144 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08b4ab6e-2fdf-4871-b484-082146559e2c-combined-ca-bundle\") pod \"keystone-cron-29497501-gsvgb\" (UID: \"08b4ab6e-2fdf-4871-b484-082146559e2c\") " pod="openstack/keystone-cron-29497501-gsvgb" Jan 31 09:01:00 crc kubenswrapper[4826]: I0131 09:01:00.315536 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsgnq\" (UniqueName: \"kubernetes.io/projected/08b4ab6e-2fdf-4871-b484-082146559e2c-kube-api-access-bsgnq\") pod \"keystone-cron-29497501-gsvgb\" (UID: \"08b4ab6e-2fdf-4871-b484-082146559e2c\") " pod="openstack/keystone-cron-29497501-gsvgb" Jan 31 09:01:00 crc kubenswrapper[4826]: I0131 09:01:00.510323 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29497501-gsvgb" Jan 31 09:01:01 crc kubenswrapper[4826]: I0131 09:01:01.009324 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29497501-gsvgb"] Jan 31 09:01:01 crc kubenswrapper[4826]: I0131 09:01:01.172813 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29497501-gsvgb" event={"ID":"08b4ab6e-2fdf-4871-b484-082146559e2c","Type":"ContainerStarted","Data":"a501784779ffdd44152f53eb1f93451e3106a9c21c15d6d1a060149e80869c90"} Jan 31 09:01:02 crc kubenswrapper[4826]: I0131 09:01:02.190404 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29497501-gsvgb" event={"ID":"08b4ab6e-2fdf-4871-b484-082146559e2c","Type":"ContainerStarted","Data":"8abe6a9daefcc6dad8d133d10b20a281d8fdaab64d280185f2cbc3b824818454"} Jan 31 09:01:02 crc kubenswrapper[4826]: I0131 09:01:02.222469 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29497501-gsvgb" podStartSLOduration=2.222444617 podStartE2EDuration="2.222444617s" podCreationTimestamp="2026-01-31 09:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:01:02.222312693 +0000 UTC m=+5094.076199082" watchObservedRunningTime="2026-01-31 09:01:02.222444617 +0000 UTC m=+5094.076331006" Jan 31 09:01:04 crc kubenswrapper[4826]: I0131 09:01:04.222209 4826 generic.go:334] "Generic (PLEG): container finished" podID="08b4ab6e-2fdf-4871-b484-082146559e2c" containerID="8abe6a9daefcc6dad8d133d10b20a281d8fdaab64d280185f2cbc3b824818454" exitCode=0 Jan 31 09:01:04 crc kubenswrapper[4826]: I0131 09:01:04.222341 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29497501-gsvgb" event={"ID":"08b4ab6e-2fdf-4871-b484-082146559e2c","Type":"ContainerDied","Data":"8abe6a9daefcc6dad8d133d10b20a281d8fdaab64d280185f2cbc3b824818454"} Jan 31 09:01:05 crc kubenswrapper[4826]: I0131 09:01:05.626106 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29497501-gsvgb" Jan 31 09:01:05 crc kubenswrapper[4826]: I0131 09:01:05.825746 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/08b4ab6e-2fdf-4871-b484-082146559e2c-fernet-keys\") pod \"08b4ab6e-2fdf-4871-b484-082146559e2c\" (UID: \"08b4ab6e-2fdf-4871-b484-082146559e2c\") " Jan 31 09:01:05 crc kubenswrapper[4826]: I0131 09:01:05.826096 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08b4ab6e-2fdf-4871-b484-082146559e2c-config-data\") pod \"08b4ab6e-2fdf-4871-b484-082146559e2c\" (UID: \"08b4ab6e-2fdf-4871-b484-082146559e2c\") " Jan 31 09:01:05 crc kubenswrapper[4826]: I0131 09:01:05.826200 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08b4ab6e-2fdf-4871-b484-082146559e2c-combined-ca-bundle\") pod \"08b4ab6e-2fdf-4871-b484-082146559e2c\" (UID: \"08b4ab6e-2fdf-4871-b484-082146559e2c\") " Jan 31 09:01:05 crc kubenswrapper[4826]: I0131 09:01:05.826236 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsgnq\" (UniqueName: \"kubernetes.io/projected/08b4ab6e-2fdf-4871-b484-082146559e2c-kube-api-access-bsgnq\") pod \"08b4ab6e-2fdf-4871-b484-082146559e2c\" (UID: \"08b4ab6e-2fdf-4871-b484-082146559e2c\") " Jan 31 09:01:05 crc kubenswrapper[4826]: I0131 09:01:05.832939 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08b4ab6e-2fdf-4871-b484-082146559e2c-kube-api-access-bsgnq" (OuterVolumeSpecName: "kube-api-access-bsgnq") pod "08b4ab6e-2fdf-4871-b484-082146559e2c" (UID: "08b4ab6e-2fdf-4871-b484-082146559e2c"). InnerVolumeSpecName "kube-api-access-bsgnq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:05 crc kubenswrapper[4826]: I0131 09:01:05.840114 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08b4ab6e-2fdf-4871-b484-082146559e2c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "08b4ab6e-2fdf-4871-b484-082146559e2c" (UID: "08b4ab6e-2fdf-4871-b484-082146559e2c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:05 crc kubenswrapper[4826]: I0131 09:01:05.881610 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08b4ab6e-2fdf-4871-b484-082146559e2c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "08b4ab6e-2fdf-4871-b484-082146559e2c" (UID: "08b4ab6e-2fdf-4871-b484-082146559e2c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:05 crc kubenswrapper[4826]: I0131 09:01:05.920994 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08b4ab6e-2fdf-4871-b484-082146559e2c-config-data" (OuterVolumeSpecName: "config-data") pod "08b4ab6e-2fdf-4871-b484-082146559e2c" (UID: "08b4ab6e-2fdf-4871-b484-082146559e2c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:05 crc kubenswrapper[4826]: I0131 09:01:05.929007 4826 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/08b4ab6e-2fdf-4871-b484-082146559e2c-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:05 crc kubenswrapper[4826]: I0131 09:01:05.929045 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08b4ab6e-2fdf-4871-b484-082146559e2c-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:05 crc kubenswrapper[4826]: I0131 09:01:05.929057 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08b4ab6e-2fdf-4871-b484-082146559e2c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:05 crc kubenswrapper[4826]: I0131 09:01:05.929071 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsgnq\" (UniqueName: \"kubernetes.io/projected/08b4ab6e-2fdf-4871-b484-082146559e2c-kube-api-access-bsgnq\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:06 crc kubenswrapper[4826]: I0131 09:01:06.245850 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29497501-gsvgb" event={"ID":"08b4ab6e-2fdf-4871-b484-082146559e2c","Type":"ContainerDied","Data":"a501784779ffdd44152f53eb1f93451e3106a9c21c15d6d1a060149e80869c90"} Jan 31 09:01:06 crc kubenswrapper[4826]: I0131 09:01:06.246759 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a501784779ffdd44152f53eb1f93451e3106a9c21c15d6d1a060149e80869c90" Jan 31 09:01:06 crc kubenswrapper[4826]: I0131 09:01:06.245945 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29497501-gsvgb" Jan 31 09:01:10 crc kubenswrapper[4826]: I0131 09:01:10.811076 4826 scope.go:117] "RemoveContainer" containerID="71ae50ddafdb6dee301644dfaf816c90c99d9a6d52b50fc28dd2f036b7b357c7" Jan 31 09:01:10 crc kubenswrapper[4826]: E0131 09:01:10.821349 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea" Jan 31 09:01:21 crc kubenswrapper[4826]: I0131 09:01:21.809590 4826 scope.go:117] "RemoveContainer" containerID="71ae50ddafdb6dee301644dfaf816c90c99d9a6d52b50fc28dd2f036b7b357c7" Jan 31 09:01:21 crc kubenswrapper[4826]: E0131 09:01:21.810447 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8v6ng_openshift-machine-config-operator(ed10f53b-565a-4d14-a1d8-feabc15f08ea)\"" pod="openshift-machine-config-operator/machine-config-daemon-8v6ng" podUID="ed10f53b-565a-4d14-a1d8-feabc15f08ea"